Sample records for automatic road extraction

  1. Automatic extraction of road features in urban environments using dense ALS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk from pavement points as a means of curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow generated an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.

  2. Road Network Extraction from Dsm by Mathematical Morphology and Reasoning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Jianliang; Zhu, Lin; Tachibana, Kikuo

    2016-06-01

    The objective of this research is the automatic extraction of the road network in an urban scene from a high-resolution digital surface model (DSM). Automatic road extraction and modeling from remotely sensed data has been studied for more than a decade, with methods varying greatly according to data type, region, and resolution. An advanced automatic road network extraction scheme is proposed to address the tedious steps of segmentation, recognition, and grouping. It is based on a geometric road model that describes a multiple-level structure: the 0-dimensional element is the intersection; the 1-dimensional elements are the centerline and the sides; and the 2-dimensional element is the plane, which is generated from the 1-dimensional elements. The key feature of the presented approach is the cross validation of the three road elements, which runs through the entire extraction procedure. The advantage of this model and method is that linear elements of the road can be derived directly, without any complex, non-robust connection hypotheses. An example of a Japanese scene illustrates the procedure and the performance of the approach.

  3. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve a precise road extraction, the method comprises three stages: classification of the images with a maximum-likelihood algorithm to categorize them into the classes of interest; modification of the classified images by connected-component and morphological operators to retain pixels of the desired objects while removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. To evaluate performance, the generated results are compared with a ground-truth road map as reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
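
    The RANSAC line-extraction stage can be sketched generically. This is not the authors' code; the iteration count and inlier tolerance below are illustrative assumptions.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=1.0, rng=None):
    """Fit a 2D line to noisy points with RANSAC.

    Repeatedly samples two points, forms a candidate line, and keeps the
    candidate with the most inliers (points within `inlier_tol` of it).
    Returns (point_on_line, unit_direction, inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_p, best_d = points[0], np.array([1.0, 0.0])
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        rel = points - p
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_p, best_d = inliers, p, d
    return best_p, best_d, best_inliers
```

    To extract several road centerlines, one would typically run this repeatedly, removing each model's inliers before the next run.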

  4. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and its elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages that hinder object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract road centerline primitives, and (3) hierarchical grouping to connect the primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
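
    The local principal component analysis step can be illustrated with a minimal sketch: the dominant eigenvector of a point's neighbourhood covariance approximates the centerline direction, and the eigenvalue spread measures how line-like the neighbourhood is. Function name and radius are ours, not the paper's.

```python
import numpy as np

def local_pca_direction(points, center, radius=2.0):
    """Estimate the local line direction at `center` from the 2D points
    within `radius`, via PCA of the neighbourhood covariance.

    Returns (unit_direction, linearity), where linearity in [0, 1] is
    (l1 - l2) / l1 for eigenvalues l1 >= l2; values near 1 indicate a
    strongly linear (road-like) neighbourhood."""
    d = np.linalg.norm(points - center, axis=1)
    nbrs = points[d < radius]
    cov = np.cov(nbrs.T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    direction = evecs[:, -1]             # dominant principal axis
    linearity = (evals[-1] - evals[0]) / evals[-1]
    return direction, linearity
```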

  5. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings are critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information about the road surface, including road markings, at highway speeds and below traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method, under the basic assumption that the road surface is smooth: points with a small elevation difference from their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data, and the intensity histogram of the points in each profile is used to find intensity jumps at a threshold that varies inversely with laser distance. The separated points serve as seeds for intensity-based region growing, yielding complete road markings. Finally, a point cloud template-matching method refines the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provided a promising solution for road marking extraction from MLS data.
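
    The neighborhood elevation consistency idea can be sketched with a simple grid-based filter: under the smooth-surface assumption, a point is ground if its height is close to the lowest point in its XY cell. The cell size and tolerance are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def filter_ground(points, cell=1.0, dz=0.15):
    """Naive ground filter for an (N, 3) point array: grid the XY plane
    and keep points whose height is within `dz` of the lowest point in
    their cell (smooth road-surface assumption)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    # Hash the 2D cell index into one key (assumes |index| < 50000).
    keys = ij[:, 0] * 100000 + ij[:, 1]
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(keys):
        sel = keys == k
        zmin = points[sel, 2].min()
        ground[sel] = points[sel, 2] - zmin < dz
    return ground
```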

  6. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing has become one of the most effective methods for land surface monitoring in recent years, owing to advantages such as short revisit periods, large coverage, and rich information. Road extraction is an important application of high-resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating, and urban planning. Fuzzy c-means (FCM) clustering segmentation has been used for road extraction, but the traditional algorithms do not consider spatial information. This paper proposes an improved fuzzy c-means clustering algorithm combined with spatial information (SFCM), which proves effective for noisy image segmentation. First, the image is segmented using the SFCM. Second, the segmentation result is processed by mathematical morphology to remove the joint regions. Third, the road centerlines are extracted by morphological thinning and burr trimming. The average completeness of the centerline extraction algorithm is 97.98%, the average accuracy 95.36%, and the average quality 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
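
    A minimal numeric sketch of fuzzy c-means with a spatial term, assuming one simple variant in which memberships are re-weighted by their 3x3 neighbourhood sums; the paper's exact SFCM formulation may differ.

```python
import numpy as np

def box_sum_3x3(u):
    """Sum of each pixel's 3x3 neighbourhood (zero padding)."""
    p = np.pad(u, 1)
    h, w = u.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def sfcm(img, c=2, m=2.0, q=1.0, n_iter=30):
    """Fuzzy c-means on pixel intensities with a spatial term: after each
    membership update, memberships are re-weighted by their summed values
    over the 3x3 neighbourhood (raised to `q`), which favours spatially
    coherent labels over isolated noisy pixels.  Returns hard labels."""
    x = img.ravel().astype(float)
    v = np.linspace(x.min(), x.max(), c)        # initial cluster centres
    for _ in range(n_iter):
        d = np.abs(x[None, :] - v[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))             # standard FCM membership
        u /= u.sum(axis=0)
        h = np.stack([box_sum_3x3(ui.reshape(img.shape)).ravel()
                      for ui in u])
        u *= h ** q                             # spatial re-weighting
        u /= u.sum(axis=0)
        um = u ** m
        v = um @ x / um.sum(axis=1)             # centre update
    return u.argmax(axis=0).reshape(img.shape)
```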

  7. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. In experimental tests, the mean classification accuracies obtained with the automatic method for lines, zebra crossings, and kerbstones were 80.6%, 92.3%, and 79.7%, respectively.

  8. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    PubMed

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourced trajectory data is an important source for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method filters abnormal trace segments from raw global positioning system (GPS) traces and adaptively interpolates the optimized segments to ensure enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines, and road boundary descriptors are calculated from the area of each Voronoi cell and the length of each triangle edge. A road boundary detection model is then established that integrates the boundary descriptors with trajectory movement features (e.g., direction). Third, this model detects the road boundary from the DT constructed over the trajectory lines, and a region-growing method based on seed polygons extracts the boundary. Experiments were conducted using GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, multiple road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
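
    The Delaunay-based boundary descriptor can be sketched as follows, assuming SciPy's `Delaunay`. The long-edge test here is a simplification of the paper's combined Voronoi-area and edge-length descriptors: inside a densely tracked road, triangle edges are short, while edges bridging the road to empty space are long, so their endpoints hint at the boundary.

```python
import numpy as np
from scipy.spatial import Delaunay

def boundary_candidates(points, max_edge=1.5):
    """Flag 2D points incident to a Delaunay edge longer than
    `max_edge`; such points are candidates for the road boundary."""
    tri = Delaunay(points)
    flag = np.zeros(len(points), dtype=bool)
    for simplex in tri.simplices:
        for a, b in ((0, 1), (1, 2), (2, 0)):
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(points[i] - points[j]) > max_edge:
                flag[i] = flag[j] = True
    return flag
```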
  10. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and for analysis of their features. The task is to georeference road markings from images obtained with the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm examines these images automatically and rapidly, yielding information on road markings, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them using a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  11. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain, and LiDAR is becoming one of the most important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a key step in numerous applications such as 3D city modelling, derivation of data for geographical information systems (GIS), mapping, and navigation. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five classes, buildings, trees, roads, linear objects, and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification serves, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads, and ground, used in the second phase of classification. A second algorithm, also based on topological relationships and height variation analysis, then segments the uniform surfaces into building roofs, roads, and ground. The proposed approach has been tested on two areas, a housing complex and a primary school, and led to successful classification of the building, vegetation, and road classes.

  12. Automated road network extraction from high spatial resolution multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Qiaoping

    For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. 
Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.

  13. Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Shen, Yuzhong

    2011-03-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models are generated manually by professional artists at the expense of intensive labor, and there are very few existing methods for automatically generating high-fidelity 3D road networks, especially for those existing in the real world. Real road networks contain various elements such as road segments, road intersections, and traffic interchanges. Among these, traffic interchanges present the greatest modeling challenge due to their complexity and the lack of height (vertical position) information in existing road GIS data. This paper proposes a novel approach that can automatically produce high-fidelity 3D road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract the road network topology, merge redundant links, and classify road types. Overlapped points in the interchanges are then detected, and their elevations are determined from a set of level estimation rules. Parametric representations of the road centerlines are generated through link segmentation and fitting; these offer arbitrary levels of detail with reduced memory usage. Finally, a set of civil engineering rules for road design (e.g., cross slope, superelevation) is used to generate realistic road surfaces. Beyond traffic interchange modeling, the proposed method also applies to other, more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.

  14. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  15. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on the scan lines. Then, an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small plaque pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering classifies linear markings, arrow markings, and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the case area, the results show that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
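
    The integral-image adaptive threshold can be sketched in a Bradley-style form: a pixel is foreground when it exceeds the local mean, computed in constant time from an integral image, by some bias. Window size and bias below are illustrative assumptions; the paper's exact segmentation rule is not reproduced here.

```python
import numpy as np

def adaptive_threshold(img, win=15, bias=0.02):
    """Bradley-style adaptive threshold on a 2D intensity image.

    Builds an integral image so that every win x win local mean costs
    four lookups, then flags pixels brighter than their local mean plus
    `bias` -- suitable for picking bright road markings out of an
    intensity image with uneven illumination."""
    h, w = img.shape
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = win // 2
    ys, xs = np.arange(h), np.arange(w)
    y0 = np.clip(ys - r, 0, h)[:, None]
    y1 = np.clip(ys + r + 1, 0, h)[:, None]
    x0 = np.clip(xs - r, 0, w)[None, :]
    x1 = np.clip(xs + r + 1, 0, w)[None, :]
    area = (y1 - y0) * (x1 - x0)
    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return img > s / area + bias
```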

  16. Delineation and geometric modeling of road networks

    NASA Astrophysics Data System (ADS)

    Poullis, Charalambos; You, Suya

    In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor sources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. First, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Second, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them to their polygonal representations.

  17. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information: they directly obtain the three-dimensional (3D) coordinates and intensity of objects quickly and efficiently, and the RGB attributes of data points can be obtained from the system's panoramic camera. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings; point cloud density is used to remove noise, and morphological operations eliminate errors. In application, we tested the method on different road sections in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.

  18. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against orientations measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, highly accurate, and meets engineering needs.
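
    The RANSAC plane-fitting step, followed by conversion of the plane normal to a discontinuity orientation, can be sketched as below. This is a generic sketch, not the authors' implementation; the (dip, dip direction) conversion assumes one common geological convention (dip direction measured clockwise from north).

```python
import numpy as np

def ransac_plane(points, n_iter=300, tol=0.05, rng=0):
    """Fit a plane to an (N, 3) cloud with RANSAC; returns the unit
    normal (oriented with z >= 0) and the inlier mask."""
    rng = np.random.default_rng(rng)
    best = np.zeros(len(points), dtype=bool)
    best_n = np.array([0.0, 0.0, 1.0])
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-12:        # degenerate sample
            continue
        n /= np.linalg.norm(n)
        d = np.abs((points - p[0]) @ n)      # point-to-plane distances
        inl = d < tol
        if inl.sum() > best.sum():
            best, best_n = inl, n if n[2] >= 0 else -n
    return best_n, best

def dip_and_direction(n):
    """Convert an upward plane normal (x=east, y=north, z=up) into
    (dip, dip direction) in degrees."""
    dip = np.degrees(np.arccos(n[2]))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360
    return dip, dip_dir
```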

  19. Real-time road detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Andre, Haritini E.; McCoy, Keith

    1990-09-01

    Automatic road detection is an important component of many scene recognition applications. The extraction of roads provides a means of navigation and position update for remotely piloted or autonomous vehicles. Roads supply strong contextual information which can be used to improve the performance of automatic target recognition (ATR) systems by directing the search for targets and adjusting target classification confidences. This paper describes algorithmic techniques for labeling roads in high-resolution infrared imagery. In addition, real-time implementation of this structural approach using a processor array based on the Martin Marietta Geometric Arithmetic Parallel Processor (GAPP) chip is addressed. The algorithm is based on the hypothesis that a road consists of pairs of line segments separated by a distance "d" with opposite (antiparallel) gradient directions. The general nature of the algorithm, and its parallel implementation on a single instruction, multiple data (SIMD) machine, are improvements over existing work. The algorithm seeks to identify line segments meeting the road hypothesis in a manner that performs well even when the side of the road is fragmented due to occlusion or intersections. The use of geometrical relationships between line segments is a powerful yet flexible method of road classification which is independent of orientation. In addition, this approach can be used to nominate other types of objects with minor parametric changes.
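
    The antiparallel-pair hypothesis can be sketched as a pairing test over extracted line segments. This is a simplified illustration, not the paper's parallel implementation: each segment is reduced to a midpoint and a gradient angle, and the midpoint distance stands in for the perpendicular separation "d".

```python
import numpy as np

def antiparallel_pairs(midpoints, grad_angles, d_expect,
                       d_tol=1.0, ang_tol=0.2):
    """Return index pairs of segments whose gradient directions are
    opposite (antiparallel, within `ang_tol` radians) and whose
    separation is near the expected road width `d_expect`."""
    pairs = []
    n = len(midpoints)
    for i in range(n):
        for j in range(i + 1, n):
            # Angular difference wrapped into [0, pi].
            da = np.abs((grad_angles[i] - grad_angles[j] + np.pi)
                        % (2 * np.pi) - np.pi)
            if np.abs(da - np.pi) > ang_tol:
                continue                     # not antiparallel
            sep = np.linalg.norm(midpoints[i] - midpoints[j])
            if np.abs(sep - d_expect) < d_tol:
                pairs.append((i, j))
    return pairs
```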

  20. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    NASA Astrophysics Data System (ADS)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie, and the rail position is determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is then executed automatically with a virtual clearance model laid along the extracted rail. In evaluation, the accuracy of the extracted rail positions was better than 3 mm. For the automatic clearance check, objects inside the clearance and objects related to the contact line were successfully detected, as verified by visual confirmation.
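
    The shape-matching step can be sketched with a generic 2D ICP, aligning a template profile (standing in for the ideal rail head cross-section) to measured points. This is a textbook nearest-neighbour/Kabsch ICP, not the authors' implementation.

```python
import numpy as np

def icp_2d(src, dst, n_iter=20):
    """Rigidly align 2D point set `src` to `dst` by iterating
    nearest-neighbour matching and the SVD (Kabsch) pose solution.
    Returns (R, t) such that src @ R.T + t approximates dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        # Brute-force nearest neighbours in dst for every src point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # accumulate the pose
    return R, t
```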

  21. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring requires counting the number of vehicles passing on a road, particularly for highway transportation management, so a system that counts vehicles automatically is needed; video processing makes this possible. This research developed a vehicle counting system for a toll road comprising video acquisition, frame extraction, and image processing of each frame. Video was acquired in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphological methods on grayscale images for vehicle counting. The best results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy, 21.43%, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
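
    The background subtraction and morphology pipeline can be sketched as below, assuming SciPy for the morphology and labelling; the threshold is an illustrative assumption, and a real system would also handle shadows and merged vehicles.

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame, background, thresh=30):
    """Count vehicles in a grayscale frame: subtract the background,
    binarise the absolute difference, apply morphological opening to
    drop isolated speckle, and count connected components."""
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    opened = ndimage.binary_opening(diff, structure=np.ones((3, 3)))
    _, n = ndimage.label(opened)
    return n
```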

  22. Semantic Labelling of Road Furniture in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2017-09-01

    Semantic labelling of road furniture is vital for large-scale mapping and autonomous driving systems. Much research has investigated road furniture interpretation in both 2D images and 3D point clouds, but precise interpretation of road furniture in mobile laser scanning data remains unexplored. In this paper, a novel method is proposed to interpret road furniture based on logical relations and functionalities, representing the most detailed interpretation of road furniture in mobile laser scanning data to date. 93.3% of poles are correctly extracted, and all of them are correctly recognised; 94.3% of street light heads are detected, and 76.9% of them are correctly identified. Despite errors arising from the recognition of other components, our framework provides a promising solution for automatically mapping road furniture at a detailed level in urban environments.

  3. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    NASA Astrophysics Data System (ADS)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantage of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths, and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented.
The spectral measure, called anisotropy-tunable distance (ATD), differs from conventional measures and is created to account for both changes of spectral direction and spectral magnitude in a unified fashion. The ATD measure is particularly suitable for differentiating urban targets such as roads and building rooftops. The curvilinear image provides estimates of the width and orientation of potential road segments. Road vectors derived from OpenStreetMap are then conflated to image road features by applying junction matching and intermediate point matching, followed by refinement with mean-shift clustering and morphological processing to produce a road mask with piecewise width estimates. The proposed approach is tested on a set of challenging, large, and diverse image data sets and the performance accuracy is assessed. The method is effective for road detection and width estimation of roads, even in challenging scenarios when extensive occlusion occurs.
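
The ATD formula itself is not given in the abstract. As a rough, hypothetical illustration of a measure that accounts for both spectral direction and spectral magnitude in one number, one could blend the spectral angle with a relative magnitude difference; the `alpha` weight below is an invented stand-in for the anisotropy tuning and is not the published definition:

```python
import numpy as np

def spectral_distance(x, y, alpha=0.5):
    """Toy direction/magnitude blend; NOT the published ATD formula."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    # change of spectral direction: angle between the two spectra
    angle = np.arccos(np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0))
    # change of spectral magnitude, normalised to [0, 1]
    mag = abs(nx - ny) / max(nx, ny)
    # alpha tunes how strongly direction dominates magnitude
    return alpha * angle + (1.0 - alpha) * mag
```

With `alpha = 1` the measure reduces to a pure spectral-angle comparison, which ignores brightness; a tunable mixture is what lets such a measure separate roads from rooftops that share direction but differ in magnitude.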

  4. Automatic Detection and Classification of Audio Events for Road Surveillance Applications.

    PubMed

    Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine

    2018-06-06

    This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, research has shown several visual surveillance systems that have been proposed for road monitoring to detect accidents with an aim to improve safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate compared with methods that use individual temporal and spectral features.
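
As a minimal sketch of mixing time-domain and frequency-domain descriptors in one feature vector (the paper's QTFD-derived features are considerably richer than this), one might compute energy, zero-crossing rate, and spectral centroid per analysis window:

```python
import numpy as np

def audio_event_features(signal, fs):
    """Simple time- and frequency-domain features for one analysis window."""
    signal = np.asarray(signal, float)
    energy = np.mean(signal ** 2)                        # time domain
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2  # time domain
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # frequency domain: centre of mass of the magnitude spectrum
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([energy, zcr, centroid])
```

A classifier trained on such vectors would then separate, for example, the broadband burst of a crash from the narrower, sustained signature of tire skidding.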

  5. Automatic Road Gap Detection Using Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. Most research in this area focuses on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are among the mature methods. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms, which concentrate on refining road detection results, are not as well developed. The main aim of this article is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial fitting; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: to evaluate the obtained results, several accuracy assessment criteria are proposed, obtained by comparing the results with correctly compensated ones produced by a human expert. The complete evaluation of the obtained results, with technical discussion, is the material of the full paper.
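
Step 1 (short gap coverage) can be illustrated with a morphological closing applied at growing scales; the one-dimensional sketch below operates along a single road profile and is a generic stand-in for the paper's hierarchical multi-scale operator:

```python
import numpy as np

def close_short_gaps(road_mask, max_radius=1):
    """Bridge gaps of up to 2*max_radius pixels in a 1-D road mask."""
    m = np.asarray(road_mask, bool).copy()
    for k in range(1, max_radius + 1):
        # dilation by k pixels in both directions
        d = m.copy()
        for s in range(1, k + 1):
            d[:-s] |= m[s:]
            d[s:] |= m[:-s]
        # erosion by k pixels (array ends treated as road)
        e = d.copy()
        for s in range(1, k + 1):
            e[:-s] &= d[s:]
            e[s:] &= d[:-s]
        m |= e   # keep anything the closing bridged at this scale
    return m
```

Gaps longer than the largest closing scale survive this step, which is exactly the population the paper hands over to the fuzzy inference system.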

  6. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules for road design (e.g., cross slope, superelevation, grade) is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  7. Rapid Extraction of Landslide and Spatial Distribution Analysis after Jiuzhaigou Ms7.0 Earthquake Based on Uav Images

    NASA Astrophysics Data System (ADS)

    Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.

    2018-04-01

    The Jiuzhaigou earthquake caused mountain collapses and triggered numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, causing road blockage and serious ecological damage. Due to the urgency of the rescue, the authors carried an unmanned aerial vehicle (UAV) into the disaster area as early as August 9 to obtain aerial images near the epicenter. After summarizing the characteristics of earthquake landslides in aerial images, the object-oriented analysis method was applied: landslide image objects were obtained by multi-scale segmentation, and the feature rule set of each level was automatically built by the SEaTH (Separability and Thresholds) algorithm to realize rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic landslide extraction method achieved an accuracy of 94.3 %. The spatial distribution of the earthquake landslides had a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; the probable reason for the latter is that the study area is far from the seismogenic fault. This work provides technical support for earthquake field emergency response, earthquake landslide prediction, and disaster loss assessment.
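
SEaTH ranks candidate features by the Jeffries-Matusita separability of the two classes (modelled as 1-D Gaussians) before deriving thresholds. The separability computation itself is standard and can be sketched as:

```python
import numpy as np

def jeffries_matusita(m1, s1, m2, s2):
    """JM separability in [0, 2] between two 1-D Gaussian classes."""
    # Bhattacharyya distance between N(m1, s1^2) and N(m2, s2^2)
    b = ((m1 - m2) ** 2) / (4.0 * (s1 ** 2 + s2 ** 2)) \
        + 0.5 * np.log((s1 ** 2 + s2 ** 2) / (2.0 * s1 * s2))
    # JM distance saturates at 2 for fully separable classes
    return 2.0 * (1.0 - np.exp(-b))
```

Features whose landslide/non-landslide JM score approaches 2 are kept for the rule set; the corresponding threshold is then placed between the two class distributions.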

  8. Experience of the ARGO autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra

    1998-07-01

    This paper presents and discusses the first results obtained by the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image extracts the road geometry in front of the vehicle. The generality of the underlying approach makes it possible to detect generic obstacles (without constraints on shape, color, or symmetry) and to detect lane markings even in dark and strong-shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire three black-and-white images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and an LED-based control panel.

  9. Research of infrared laser based pavement imaging and crack detection

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang

    2013-08-01

    Road crack detection is seriously affected by many factors in actual applications, such as shadows, road signs, oil stains, and high-frequency noise. Because of these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images, with a high-power laser line projector used to suppress various shadows. Second, a crack extraction algorithm that intelligently fuses multiple features is proposed to extract crack information. In this step, the non-negative feature and contrast feature are used to extract basic crack information, and circular projection based on a linearity feature is applied to enhance crack areas and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.

  10. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images, and each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangement of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches pixel level, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
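
The adaptive Harris variant used here is not spelled out in the abstract; the classic Harris response it builds on can be sketched as follows (the 3x3 averaging window and `k = 0.05` are conventional defaults, not the paper's settings):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner measure R = det(M) - k * trace(M)^2 per pixel."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)          # image gradients (rows, cols)
    H, W = img.shape

    def box3(a):
        # 3x3 box average of the gradient products (structure tensor M)
        p = np.pad(a, 1, mode='edge')
        return sum(p[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    ixx, iyy, ixy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = ixx * iyy - ixy ** 2
    tr = ixx + iyy
    return det - k * tr ** 2           # >0 corner, <0 edge, ~0 flat
```

Local maxima of the response would then be taken as corner candidates, e.g. on the vertices of road markings in an ortho image.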

  11. Statistical classification of road pavements using near field vehicle rolling noise measurements.

    PubMed

    Paulo, Joel Preto; Coelho, J L Bento; Figueiredo, Mário A T

    2010-10-01

    Low noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement which implements the Close-Proximity method. A set of features, characterizing the properties of the road pavement, was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.

  12. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images, e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts and consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
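
A minimal isotropic stand-in for the geodesic propagation step is Dijkstra's algorithm on a per-pixel cost image; the paper's anisotropic tensor metric would replace the scalar `cost` below:

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Minimum cost path on a 4-connected grid of per-pixel costs."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue                     # stale queue entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # backtrack from the goal to the seed
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a cost image that is low on the line network (e.g. a vesselness or "roadness" score inverted), the extracted path naturally follows the network between seed points.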

  13. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to maintain a high level of service. The importance of performance-based infrastructure asset management built on actual inspection data is globally recognized. For inspection of the road pavement surface, semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using an automatic measurement vehicle is costly, depending on the instruments' specification and the inspection interval. Therefore, implementation of road maintenance work, especially for local governments, is difficult from a cost-effectiveness standpoint. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using the 3D point cloud data acquired to build urban 3D models. The simplified evaluation results of the road surface provide useful information for road administrators to identify pavement sections requiring a detailed examination or an immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated using the Chow test and F-test by extracting sections where a structural change in the coordinate values was remarkable. Finally, the validity of the methodology was investigated in a case study dealing with actual inspection data from local roads.

  14. Application of Mls Data to the Assessment of Safety-Related Features in the Surrounding Area of Automatically Detected Pedestrian Crossings

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Sánchez-Rodríguez, A.; González-deSantos, L. M.

    2018-05-01

    During the last few years, there has been a huge methodological development regarding the automatic processing of 3D point cloud data acquired by both terrestrial and aerial mobile mapping systems, motivated by the improvement of surveying technologies and hardware performance. This paper presents a methodology that first extracts geometric and semantic information regarding the road markings within the surveyed area from Mobile Laser Scanning (MLS) data, and then employs it to isolate street areas where pedestrian crossings are found and, therefore, pedestrians are more likely to cross the road. Different safety-related features can then be extracted in order to assess the adequacy of the pedestrian crossing regarding its safety, and the results can be displayed in a Geographical Information System (GIS) layer. These features are defined in four different processing modules: accessibility analysis, traffic lights classification, traffic signs classification, and visibility analysis. The proposed methodology was validated in two different cities in the northwest of Spain, obtaining both quantitative and qualitative results for pedestrian crossing classification and for each processing module of the safety assessment on pedestrian crossing environments.

  15. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework of this database. Therefore, road inventory mapping work has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that this method processes every traffic marking. In this paper, we discuss a highly accurate and non-operator-dependent method that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher-intensity points; (2) Generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points within the buffers from the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines from the extracted points; (7) Extracting lines from the intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic marking mapping data.
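
Step (7), extracting lines from the intensity-changing points, can be sketched with a standard Hough transform over (theta, rho) space; the angular and radial resolutions below are illustrative defaults, not the paper's parameters:

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Return (theta, rho) of the strongest line through the points."""
    pts = np.asarray(points, float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).sum(axis=1).max() + 1.0   # bound on |rho|
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), int)           # (theta, rho) accumulator
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1           # one vote per theta
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r * rho_res - rho_max
```

In practice one would keep all accumulator peaks above a vote threshold rather than only the single best line, so that every marking edge along a scan line is recovered.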

  16. 3D road marking reconstruction from street-level calibrated stereo pairs

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier

    This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.

  17. Study on road sign recognition in LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2016-02-01

    Road and traffic sign identification is a field of study that can be used to aid the development of in-car advisory systems. It uses computer vision and artificial intelligence to extract road signs from outdoor images acquired by a camera in uncontrolled lighting conditions, where they may be occluded by other objects or may suffer from problems such as color fading, disorientation, and variations in shape and size. An automatic means of identifying traffic signs under these conditions can make a significant contribution to the development of Intelligent Transport Systems (ITS) that continuously monitor the driver, the vehicle, and the road. Road and traffic signs are characterized by a number of features which make them recognizable in the environment: they are located in standard positions and have standard shapes, standard colors, and known pictograms. These characteristics make them suitable for image identification. Traffic sign identification covers two problems: traffic sign detection and traffic sign recognition. Traffic sign detection is meant for the accurate localization of traffic signs in the image space, while traffic sign recognition handles the labeling of such detections into specific traffic sign types or subcategories [1].

  18. Chain-Wise Generalization of Road Networks Using Model Selection

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain, and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological, and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker, and finally, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex dataset with many traffic roundabouts indicate the benefits of the proposed procedure.
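
The classic Douglas-Peucker recursion that circlePeucker extends (per the paper, by additionally handling circular arcs) looks like this; the tolerance is a free parameter:

```python
def douglas_peucker(points, tol):
    """Simplify a polyline, keeping points farther than tol from the chord."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # find the interior point with maximum perpendicular distance to the chord
    dmax, imax = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dx * (y - y1) - dy * (x - x1)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= tol:
        return [points[0], points[-1]]   # chord is a good enough model
    # otherwise split at the farthest point and recurse on both halves
    left = douglas_peucker(points[:imax + 1], tol)
    right = douglas_peucker(points[imax:], tol)
    return left[:-1] + right
```

The model-selection stage described above then decides, per pair of neighboring simplified segments, whether a single primitive (line or arc) explains both within the residual statistics.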

  19. Interactive Cadastral Boundary Delineation from Uav Data

    NASA Astrophysics Data System (ADS)

    Crommelinck, S.; Höfle, B.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.

    2018-05-01

    Unmanned aerial vehicles (UAVs) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries, are delineable. This delineation is currently not automated at all, even though physical objects that are automatically retrievable through image analysis methods mark a large portion of cadastral boundaries. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification part assigning costs to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along previously extracted and weighted lines. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100 m is reduced by up to 86 %, while a similar localization quality is obtained. The approach shows promising results for reducing the effort of manual delineation that is currently employed for indirect (cadastral) surveying.

  20. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
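
The vanishing-point step reduces to intersecting the fitted lane borders in image space. A minimal sketch with lines in slope-intercept form follows; this is a simplification, since near-vertical lanes would need a homogeneous line representation:

```python
def vanishing_point(line1, line2):
    """Intersect two image lines given as (slope, intercept) pairs."""
    m1, b1 = line1
    m2, b2 = line2
    if m1 == m2:
        raise ValueError("parallel image lines have no finite intersection")
    # lanes that are parallel on the road converge to this point in the image
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1
```

Once the vanishing point is known, the lane's geometric position relative to the camera can be recovered from the calibration, as the abstract describes.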

  1. Visual traffic jam analysis based on trajectory data.

    PubMed

    Wang, Zuchao; Lu, Min; Yuan, Xiaoru; Zhang, Junping; van de Wetering, Huub

    2013-12-01

    In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.

  2. Exploiting automatically generated databases of traffic signs and road markings for contextual co-occurrence analysis

    NASA Astrophysics Data System (ADS)

    Hazelhoff, Lykele; Creusen, Ivo M.; Woudsma, Thomas; de With, Peter H. N.

    2015-11-01

    Combined databases of road markings and traffic signs provide a complete and full description of the present traffic legislation and instructions. Such databases contribute to efficient signage maintenance, improve navigation, and benefit autonomous driving vehicles. A system is presented for the automated creation of such combined databases, which additionally investigates the benefit of this combination for automated contextual placement analysis. This analysis involves verification of the co-occurrence of traffic signs and road markings to retrieve a list of potentially incorrectly signaled (and thus potentially unsafe) road situations. This co-occurrence verification is specifically explored for both pedestrian crossings and yield situations. Evaluations on 420 km of road have shown that individual detection of traffic signs and road markings denoting these road situations can be performed with accuracies of 98% and 85%, respectively. Combining both approaches shows that over 95% of the pedestrian crossings and give-way situations can be identified. An exploration toward additional co-occurrence analysis of signs and markings shows that inconsistently signaled situations can successfully be extracted, such that specific safety actions can be directed toward cases lacking signs or markings, while most consistently signaled situations can be omitted from this analysis.

  3. A Hessian-based methodology for automatic surface crack detection and classification from pavement images

    NASA Astrophysics Data System (ADS)

    Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf

    2015-04-01

    Around 3 trillion vehicle miles are traveled annually on US transportation systems alone. In addition to improving road traffic safety, maintaining the road infrastructure in a sound condition promotes a more productive and competitive economy. Due to the significant amounts of financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects is often delayed, resulting in deferred maintenance operations. This paper introduces an automatic system for acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter has been utilized to detect ridges in these images at various scales. Post-processing on the extracted features has been implemented to produce statistics of length, width, and area covered by cracks, which are crucial for roadway agencies to assess pavement quality. This process has been realized on three sets of roads with different pavement conditions in the city of Brockton, MA. A manually labeled ground truth dataset is made available to evaluate this algorithm, and the results rendered more than 90% segmentation accuracy, demonstrating the feasibility of employing this approach at a larger scale.
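
    A Hessian-based ridge filter of the kind named above can be sketched with plain finite differences. This is a single-scale illustration (the paper is multi-scale); the smoothing scheme and `sigma` are assumptions:

```python
import numpy as np

def hessian_ridge_response(img, sigma=2.0):
    """Dark-ridge response: the larger eigenvalue of the Hessian of a
    Gaussian-smoothed image. Dark, elongated structures such as cracks on
    bright pavement give a strong positive response."""
    # separable Gaussian blur with a sampled, normalized kernel
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    sm = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    sm = np.apply_along_axis(np.convolve, 1, sm, k, mode="same")
    gy, gx = np.gradient(sm)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr, det = gxx + gyy, gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 + disc          # larger Hessian eigenvalue
```

    Running the filter at several `sigma` values and taking the per-pixel maximum gives the multi-scale behaviour the abstract describes.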

  4. Remote sensing-based detection and quantification of roadway debris following natural disasters

    NASA Astrophysics Data System (ADS)

    Axel, Colin; van Aardt, Jan A. N.; Aros-Vera, Felipe; Holguín-Veras, José

    2016-05-01

    Rapid knowledge of road network conditions is vital to formulate an efficient emergency response plan following any major disaster. Fallen buildings, immobile vehicles, and other forms of debris often render roads impassable to responders. The status of roadways is generally determined through time and resource heavy methods, such as field surveys and manual interpretation of remotely sensed imagery. Airborne lidar systems provide an alternative, cost-effective option for performing network assessments. The 3D data can be collected quickly over a wide area and provide valuable insight about the geometry and structure of the scene. This paper presents a method for automatically detecting and characterizing debris in roadways using airborne lidar data. Points falling within the road extent are extracted from the point cloud and clustered into individual objects using region growing. Objects are classified as debris or non-debris using surface properties and contextual cues. Debris piles are reconstructed as surfaces using alpha shapes, from which an estimate of debris volume can be computed. Results using real lidar data collected after a natural disaster are presented. Initial results indicate that accurate debris maps can be automatically generated using the proposed method. These debris maps would be an invaluable asset to disaster management and emergency response teams attempting to reach survivors despite a crippled transportation network.
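
    The clustering step, grouping above-road points into individual debris objects, can be sketched as a greedy euclidean region growing. The radius and data layout are illustrative assumptions:

```python
import numpy as np

def region_grow(points, radius=0.5):
    """Greedy euclidean clustering: grow a cluster from each unvisited seed,
    repeatedly adding any point within `radius` of the cluster frontier.
    Returns one integer label per point."""
    pts = np.asarray(points, dtype=float)
    labels = -np.ones(len(pts), dtype=int)
    cur = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cur
        while stack:
            i = stack.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = cur
                stack.append(j)
        cur += 1
    return labels
```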

  5. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

    Determination of the maximum ability for feature extraction from satellite imagery based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1 : 5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with a completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features were extracted completely (100%) and about 71% of limits were extracted. Also, the proposed method had the ability to produce a map at the largest scale possible, equal to or smaller than 1 : 5000, from any multispectral high resolution satellite imagery.

  7. Accuracy assessment of airborne LIDAR data and automated extraction of features

    NASA Astrophysics Data System (ADS)

    Cetin, Ali Fuat

    Airborne LIDAR technology is becoming more widely used since it provides fast and dense irregularly spaced 3D point clouds. The coordinates produced as a result of calibration of the system are used for surface modeling and information extraction. In this research a new idea of LIDAR detectable targets is introduced. In the second part of this research, a new technique to delineate the edge of road pavements automatically using only LIDAR is presented. The accuracy of LIDAR data should be determined before exploitation for any information extraction to support a Geographic Information System (GIS) database. Until recently there was no definitive research to provide a methodology for common and practical assessment of both horizontal and vertical accuracy of LIDAR data for end users. The idea used in this research was to use targets of such a size and design so that the position of each target can be determined using the Least Squares Image Matching Technique. The technique used in this research can provide end users and data providers an easy way to evaluate the quality of the product, especially when there are accessible hard surfaces to install the targets. The results of the technique are determined to be in a reasonable range when the point spacing of the data is sufficient. To delineate the edge of pavements, trees and buildings are removed from the point cloud, and the road surfaces are segmented from the remaining terrain data. This is accomplished using the homogeneous nature of road surfaces in intensity and height. There are not many studies to delineate the edge of road pavement after the road surfaces are extracted. In this research, template matching techniques are used with criteria computed by Gray Level Co-occurrence Matrix (GLCM) properties, in order to locate seed pixels in the image. The seed pixels are then used for placement of the matched templates along the road. 
The accuracy of the delineated edge of pavement is determined by comparing the coordinates of reference points collected via photogrammetry with the coordinates of the nearest points along the delineated edge.
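
    The GLCM properties used to locate seed pixels can be computed directly. A minimal sketch with an assumed quantization level and a single pixel offset:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for one pixel offset (dx, dy):
    normalized counts of co-occurring quantized gray levels."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """GLCM contrast: expected squared gray-level difference of pixel pairs."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())
```

    Homogeneous road-surface patches yield near-zero contrast, which is the kind of criterion that can select seed pixels for template placement.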

  8. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-03-12

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory, is discussed.
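
    The RANSAC plane fitting mentioned for road-surface extraction follows the standard sample-score-keep loop. A compact sketch (iteration count and inlier tolerance are illustrative):

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.05, rng=None):
    """Fit a plane to 3D points: repeatedly sample 3 points, count inliers
    within `tol` of that plane, keep the best model.
    Returns (unit normal, offset d, inlier mask) with n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(pts), bool))
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-9:      # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ a
        inl = np.abs(pts @ n + d) < tol
        if inl.sum() > best[2].sum():
            best = (n, d, inl)
    return best
```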

  9. Development of Mobile Mapping System for 3D Road Asset Inventory

    PubMed Central

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory, is discussed. PMID:26985897

  10. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the proposed method. 
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
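
    The intensity smoothing and edge steps on a single scan line can be sketched as below. The fixed window and jump threshold stand in for the paper's dynamic window and EDEC criteria:

```python
import numpy as np

def median_smooth(intensity, win=5):
    """Sliding-window median along one scan line; suppresses isolated
    intensity spikes that would otherwise look like marking edges."""
    pad = win // 2
    ext = np.pad(intensity, pad, mode="edge")
    return np.array([np.median(ext[i:i + win]) for i in range(len(intensity))])

def marking_edges(smoothed, jump=30.0):
    """Candidate marking edges: indices where the smoothed intensity rises
    (entry edge) or falls (exit edge) by more than `jump`."""
    d = np.diff(smoothed)
    return np.where(d > jump)[0], np.where(d < -jump)[0]
```

    Pairing a rise with the next fall yields one candidate marking per scan line, which later refinement can accept or reject.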

  11. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†

    PubMed Central

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the proposed method. 
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279

  12. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanners and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D positions of traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
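
    The reprojection of detected signs onto the synced images is a standard pinhole-camera operation. A sketch assuming a known rotation R, translation t, and intrinsic matrix K:

```python
import numpy as np

def project_to_image(points_world, R, t, K):
    """Pinhole reprojection of 3D points: camera coords X_c = R X_w + t,
    homogeneous image coords u = K X_c, then the perspective divide."""
    cam = points_world @ R.T + t      # world frame -> camera frame
    uvw = cam @ K.T                   # camera frame -> image plane
    return uvw[:, :2] / uvw[:, 2:3]   # (u, v) pixel coordinates
```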

  13. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows a minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities facing the processing of pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked throughout the sequences, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic tool for pedestrian data collection when crossing the street based on video processing. The variations in instantaneous speed allow the detection of the street crossing phases (approach, waiting, and crossing). These phases are addressed for the first time in pedestrian road safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
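
    Splitting a speed profile into the three crossing phases can be sketched with a simple near-stop rule; the stop-speed threshold is an assumption, not the authors' criterion:

```python
def crossing_phases(speeds, stop_speed=0.5):
    """Split a pedestrian speed profile (m/s) into approach / waiting /
    crossing: samples before the first near-stop are 'approach', samples
    after the last near-stop are 'crossing', everything between 'waiting'."""
    slow = [v < stop_speed for v in speeds]
    if not any(slow):
        return ["crossing"] * len(speeds)   # never stopped at the curb
    first = slow.index(True)
    last = len(slow) - 1 - slow[::-1].index(True)
    return ["approach" if i < first else
            "crossing" if i > last else "waiting"
            for i in range(len(speeds))]
```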

  14. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. 
By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, the separated feature layers from the map, and the recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
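
    Road intersection detection on a vectorization-ready raster can be sketched by neighbour counting, assuming the road layer has already been thinned to a one-pixel-wide skeleton:

```python
import numpy as np

def intersection_candidates(skel):
    """On a one-pixel-wide binary road skeleton, mark pixels with three or
    more 8-connected foreground neighbours as intersection candidates.
    Returns a set of (row, col) positions."""
    s = np.pad(skel.astype(int), 1)   # zero border so rolls cannot wrap roads
    nbrs = sum(np.roll(np.roll(s, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    return {tuple(p - 1) for p in np.argwhere((s == 1) & (nbrs >= 3))}
```

    Candidates cluster around true crossings, so a final step would merge nearby candidates into one intersection point.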

  15. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the following three stages: interest point detection, clustering of those points, and similarity search. At the first stage, highly discriminative, rotation and scale invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on stable local features related to brightness and color, which are extracted using a Gabor filter. Then points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the possible candidate road signs or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the known reference road signs stored in the database. The presented framework provides a novel way to detect a road sign from natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.

  16. A Framework of Change Detection Based on Combined Morphological Features and Multi-Index Classification

    NASA Astrophysics Data System (ADS)

    Li, S.; Zhang, S.; Yang, D.

    2017-09-01

    Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
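
    The multi-index extraction step can be illustrated with the two named indices. The EVI coefficients below are the standard ones; the thresholds and the tie-breaking order are assumptions:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard coefficients
    G=2.5, C1=6, C2=7.5, L=1."""
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1 + 1e-9)

def classify_pixels(green, red, blue, nir, water_t=0.3, veg_t=0.2):
    """Per-pixel multi-index labelling; water overrides vegetation."""
    out = np.full(np.shape(green), "other", dtype=object)
    out[evi(nir, red, blue) > veg_t] = "vegetation"
    out[ndwi(green, nir) > water_t] = "water"
    return out
```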

  17. Piezoelectric energy harvesting computer controlled test bench

    NASA Astrophysics Data System (ADS)

    Vázquez-Rodriguez, M.; Jiménez, F. J.; de Frutos, J.; Alonso, D.

    2016-09-01

    In this paper a new computer controlled (C.C.) laboratory test bench is presented. The patented test bench is made up of a C.C. road traffic simulator, C.C. electronic hardware involved in automating measurements, and a test bench control software interface programmed in LabVIEW™. Our research is focused on characterizing electronic energy harvesting piezoelectric-based elements in road traffic environments to extract (or "harvest") maximum power. In mechanical to electrical energy conversion, mechanical impacts or vibrational behavior are commonly used, and several major problems need to be solved to build optimal harvesting systems, including, but not limited to, primary energy source modeling, energy conversion, and energy storage. A novel C.C. test bench is described that obtains, in an accurate and automated process, a generalized linear equivalent electrical model of piezoelectric elements and piezoelectric-based energy storage harvesting circuits, in order to scale energy generation with multiple devices integrated in different topologies.
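
    The paper does not spell out its generalized linear equivalent model, but the kind of fit such a bench automates can be illustrated with a Thévenin-style least-squares estimate of source voltage and internal resistance from a load sweep (entirely an assumed stand-in model, not the bench's actual one):

```python
import numpy as np

def fit_linear_source(current, voltage):
    """Least-squares fit of V = E0 - Rs*I from load-sweep measurements:
    returns the equivalent source voltage E0 and internal resistance Rs."""
    current = np.asarray(current, dtype=float)
    A = np.column_stack([np.ones_like(current), -current])
    (e0, rs), *_ = np.linalg.lstsq(A, voltage, rcond=None)
    return e0, rs
```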

  18. Piezoelectric energy harvesting computer controlled test bench.

    PubMed

    Vázquez-Rodriguez, M; Jiménez, F J; de Frutos, J; Alonso, D

    2016-09-01

    In this paper a new computer controlled (C.C.) laboratory test bench is presented. The patented test bench is made up of a C.C. road traffic simulator, C.C. electronic hardware involved in automating measurements, and a test bench control software interface programmed in LabVIEW™. Our research is focused on characterizing electronic energy harvesting piezoelectric-based elements in road traffic environments to extract (or "harvest") maximum power. In mechanical to electrical energy conversion, mechanical impacts or vibrational behavior are commonly used, and several major problems need to be solved to build optimal harvesting systems, including, but not limited to, primary energy source modeling, energy conversion, and energy storage. A novel C.C. test bench is described that obtains, in an accurate and automated process, a generalized linear equivalent electrical model of piezoelectric elements and piezoelectric-based energy storage harvesting circuits, in order to scale energy generation with multiple devices integrated in different topologies.

  19. Main Road Extraction from ZY-3 Grayscale Imagery Based on Directional Mathematical Morphology and VGI Prior Knowledge in Urban Areas

    PubMed Central

    Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming

    2015-01-01

    Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in the OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
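
    Directional mathematical morphology for road enhancement can be sketched as a grey-level opening with a line structuring element; taking the maximum over several orientations (only one is shown) enhances roads of any heading. The element length and the axis-aligned orientation are illustrative:

```python
import numpy as np

def directional_opening(img, length=9, axis=1):
    """Grey-level opening with a straight-line structuring element along one
    axis: a running minimum (erosion) followed by a running maximum
    (dilation). Bright structures shorter than `length` along `axis` are
    suppressed; long road-like structures survive."""
    def running(a, op):
        out = a.copy()
        for s in range(1, length):
            out = op(out, np.roll(a, s, axis=axis))
        return out                    # window of `length` ending at each pixel
    half = length // 2
    ero = np.roll(running(img, np.minimum), -half, axis=axis)  # center window
    return np.roll(running(ero, np.maximum), -half, axis=axis)
```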

  20. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas, or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: the first for the state of a database object (correct or incorrect), and the second for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with larger completeness. Additional experiments reveal that, based on the proposed method, a highly reliable semi-automatic approach for road database verification can be designed.
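
    The Dempster-Shafer mapping described above can be sketched directly: each module's two distributions become a mass function over {correct, incorrect, unknown}, and modules are fused with Dempster's rule. The mass construction below is an illustrative reading of the paper, not its exact formulation:

```python
def module_mass(p_correct, p_applicable):
    """One road model's mass function over {correct, incorrect, Theta}:
    belief is scaled by how applicable the model is; the remaining mass
    stays on the full frame Theta and acts as 'unknown'."""
    return (p_applicable * p_correct,
            p_applicable * (1.0 - p_correct),
            1.0 - p_applicable)

def combine(masses):
    """Dempster's rule on the frame {correct, incorrect}: conflicting mass
    (correct x incorrect) is removed and the rest renormalized."""
    mc, mi, mu = masses[0]
    for c2, i2, u2 in masses[1:]:
        k = mc * i2 + mi * c2                      # conflict
        mc, mi, mu = (mc * c2 + mc * u2 + mu * c2,
                      mi * i2 + mi * u2 + mu * i2,
                      mu * u2)
        mc, mi, mu = mc / (1 - k), mi / (1 - k), mu / (1 - k)
    return mc, mi, mu
```

    Two modules that both lean towards "correct" reinforce each other, while an inapplicable module mostly contributes "unknown" mass and barely shifts the result.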

  1. Forest Road Identification and Extraction Through Advanced Log Matching Techniques

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Hu, B.; Quist, L.

    2017-10-01

    A novel algorithm for forest road identification and extraction was developed. The algorithm utilized a Laplacian of Gaussian (LoG) filter and slope calculation on high resolution multispectral imagery and LiDAR data, respectively, to extract both primary and secondary road segments in the forest area. The proposed method used road shape features to extract the road segments, which were further processed as objects with orientation preserved. The road network was generated after post-processing with tensor voting. The proposed method was tested on the Hearst forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted by the process.

  2. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new construction or road improvement projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent, together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing datasets and methods available for road extraction, transportation operation requires more than centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot-resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing, spatially inaccurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road network, together with estimated road width, average grade along the road, and cross-sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  3. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California, was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method.
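    The spatial degradation used in such resolution studies can be sketched as simple block averaging (illustrative code under our own assumptions, not the original image-processing workflow):

```python
import numpy as np

# Degrade a fine-resolution band to coarser pixels by block averaging,
# e.g. 1 m pixels -> 3 m pixels with factor=3.
def degrade(band: np.ndarray, factor: int) -> np.ndarray:
    h, w = band.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                # one mean value per block

fine = np.arange(36, dtype=float).reshape(6, 6)    # toy 6x6 "1 m" band
coarse = degrade(fine, 3)                          # 2x2 "3 m" band
print(coarse.shape)
```

    Each coarse pixel is the mean of a factor-by-factor block, which mimics how a sensor with a larger ground sample distance integrates reflectance.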

  4. An AdaBoost Based Approach to Automatic Classification and Detection of Buildings Footprints, Vegetation Areas and Roads from Satellite Images

    NASA Astrophysics Data System (ADS)

    Gonulalan, Cansu

    In recent years, there has been an increasing demand for applications to monitor land-use targets using remote sensing images. Advances in remote sensing satellites have given rise to research in this area. Many applications, ranging from urban growth planning to homeland security, already use algorithms for automated object recognition from remote sensing imagery. However, these still have problems such as low detection accuracy and algorithms tailored to a specific area. In this thesis, we focus on an automatic approach to classify and detect building footprints, road networks and vegetation areas. The automatic interpretation of visual data is a comprehensive task in the computer vision field, and machine learning approaches improve the capability of classification in an intelligent way. We propose a method with high accuracy in detection and classification, and develop multi-class classification for detecting multiple objects. We present an AdaBoost-based approach along with a supervised learning algorithm. The combination of AdaBoost with an "Attentional Cascade" is adopted from Viola and Jones [1]; this combination decreases computation time and opens the door to real-time applications. For the feature extraction step, our contribution is to combine Haar-like features that include corner, rectangle and Gabor features. Among all features, AdaBoost selects only critical features and generates an extremely efficient cascade-structured classifier. Finally, we present and evaluate our experimental results. The overall system is tested and high detection performance is achieved: the precision rate of the final multi-class classifier is over 98%.
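    A toy sketch of the boosting idea (synthetic data; not the thesis code, which works on Haar-like image features): AdaBoost's default depth-1 decision stumps act as weak learners, and boosting concentrates on the few informative features, mirroring the feature-selection behaviour described above.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic two-class problem where only features 0 and 2 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

# The default weak learner is a depth-1 decision stump, as in Viola-Jones.
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.score(X, y))   # training accuracy of the boosted ensemble
```

    In a Viola-Jones style cascade, several such boosted classifiers are chained so that easy negatives are rejected early, which is what makes real-time detection feasible.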

  5. Automatic Blocked Roads Assessment after Earthquake Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.; Hosseini-Zirdoo, E.; Eslamizade, F.

    2015-12-01

    In 2010, an earthquake struck the city of Port-au-Prince, Haiti, and killed over 300,000 people. According to historical data, such an earthquake had not previously occurred in the area. The unpredictability of earthquakes necessitates comprehensive mitigation efforts to minimize deaths and injuries. Blocked roads, caused by the debris of destroyed buildings, may increase the difficulty of rescue activities. In this case, a damage map, which specifies blocked and unblocked roads, can be very helpful for a rescue team. In this paper, a novel method for producing such a destruction map, based on a pre-event vector map and post-event high-resolution WorldView-2 satellite imagery, is presented. For this purpose, firstly, in a pre-processing step, image quality improvement and co-registration of image and map are performed. Then, after extraction of texture descriptors from the post-quake image and SVM classification, different terrain types are detected in the image. Finally, considering the classification results, specifically objects belonging to the "debris" class, damage analysis is performed to estimate the damage percentage; in addition to the area of objects in the "debris" class, their shape is also counted. This process is performed on all the roads in the road layer. In this research, a pre-event digital vector map and a post-event high-resolution WorldView-2 satellite image of Port-au-Prince, Haiti's capital, were used to evaluate the proposed method. The algorithm was executed on 1200×800 m2 of the data set, including 60 roads, and all the roads were labelled correctly. Visual examination confirmed the abilities of this method for damage assessment of urban road networks after an earthquake.

  6. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, a deep convolutional neural network, VGG19, was employed for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and the extraction effect, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some advice about the choice of input and output block sizes.
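    The "votes" of several networks amount to model averaging, which can be sketched with synthetic probability maps standing in for CNN outputs (illustrative values only, not the paper's models):

```python
import numpy as np

# Synthetic stand-ins for the per-pixel road probabilities of five CNNs:
# a true road strip plus independent noise, clipped to [0, 1].
rng = np.random.default_rng(1)
truth = np.zeros((8, 8))
truth[:, 3:5] = 1.0                                   # ground-truth road strip

preds = [np.clip(truth + rng.normal(0.0, 0.3, truth.shape), 0.0, 1.0)
         for _ in range(5)]

# "Voting" as model averaging: mean probability, then a 0.5 threshold.
mean_prob = np.mean(preds, axis=0)
road = mean_prob > 0.5
```

    Averaging suppresses the independent errors of the individual models, which is why the ensemble tends to beat any single prediction map.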

  7. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.

  8. Road Damage Extraction from Post-Earthquake Uav Images Assisted by Vector Data

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Dou, A.

    2018-04-01

    Extraction of road damage information after an earthquake is an urgent mission. To collect information about stricken areas, an Unmanned Aerial Vehicle can be used to obtain images rapidly. This paper puts forward a novel method to detect road damage and proposes a coefficient to assess road accessibility. With the assistance of vector road data, image data of the Jiuzhaigou Ms7.0 earthquake is tested. First, the image is clipped according to a vector buffer. Then a large-scale segmentation is applied to remove irrelevant objects. Thirdly, statistics of road features are analysed, and damage information is extracted. Combined with the on-field investigation, the extraction result proves effective.

  9. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.

    PubMed

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-06-14

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits, without considering factors like road geometry. In this paper, we combine road curvature with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves and to respect speed limits, consequently increasing safety and comfort for the passenger.
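    The curve-analysis step rests on estimating curvature along the GPS track. One common estimate (a sketch under our own assumptions, not necessarily the authors' formula) is the Menger curvature of three consecutive positions, from which a comfort-limited speed can be derived:

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature (1/radius) of the circle through three planar points."""
    a, b, c = math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)
    cross = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return 2.0 * cross / (a * b * c)   # cross = 2*area, so this is 4*area/(abc)

# Three points on a circle of radius 10 m -> curvature 0.1 (1/m).
k = menger_curvature((10.0, 0.0), (0.0, 10.0), (-10.0, 0.0))

# Speed bound for an assumed 2 m/s^2 comfortable lateral acceleration:
# a_lat = v^2 * k  =>  v_max = sqrt(a_lat / k).
v_max = math.sqrt(2.0 / k)
print(k, v_max)
```

    Segments whose curvature exceeds a threshold would be flagged as sharp curves, and the smaller of v_max and the posted limit becomes the ideal speed.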

  10. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using template matching after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  11. a Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane mark detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of lane markers in image sequences. Second, because the widths of lane markers and road lanes are fixed in a standard structured road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. To verify the validity of this method, we installed a smart phone in the `Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, we can calculate the positions of lane markers accurately enough for the self-driving car to run smoothly on the road.

  12. 11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND BUILT BY WES. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  13. Pavement type and wear condition classification from tire cavity acoustic measurements with artificial neural networks.

    PubMed

    Masino, Johannes; Foitzik, Michael-Jan; Frey, Michael; Gauterin, Frank

    2017-06-01

    Tire road noise is the major contributor to traffic noise, which leads to general annoyance, speech interference, and sleep disturbances. Standardized methods to measure tire road noise are expensive, sophisticated to use, and cannot be applied comprehensively. This paper presents a method to automatically classify different types of pavement and their wear condition in order to identify noisy road surfaces. The method is based on spectra of time-series data of the tire cavity sound, acquired under normal vehicle operation. The classifier, an artificial neural network, correctly predicts three pavement types, with only a few bidirectional misclassifications between two pavements that have similar physical characteristics. The performance measures of the classifier in predicting a new or worn-out condition are over 94.6%. One could create a digital map with the output of the presented method; on the basis of such maps, road segments with a strong impact on tire road noise could be automatically identified. Furthermore, the method can estimate the road macro-texture, which affects tire road friction, especially in wet conditions. Overall, this digital map would be of great benefit for civil engineering departments, road infrastructure operators, and advanced driver assistance systems.
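    The spectral features such a classifier consumes can be sketched with a synthetic cavity-sound snippet (illustrative sample rate and tone; the tyre cavity resonance typically sits in the low hundreds of hertz):

```python
import numpy as np

fs = 8000                                   # sample rate in Hz (illustrative)
t = np.arange(fs) / fs                      # one second of signal
rng = np.random.default_rng(4)

# Synthetic cavity sound: a 230 Hz tone buried in broadband noise.
snippet = np.sin(2 * np.pi * 230 * t) + 0.3 * rng.normal(size=fs)

# Magnitude spectrum of the time series -- the classifier's input features.
spectrum = np.abs(np.fft.rfft(snippet))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)
```

    In practice such spectra, computed over many short windows, would be stacked into feature vectors and fed to the neural network.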

  14. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide a more comfortable driving experience. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision and recall of 90.6% and 91.2%, respectively, in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extraction of road features for HADMs.

  15. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    In floodplains, massive investments in land reclamation have always played an important role in flood protection. In these contexts, human alteration is reflected by artificial ('anthropogenic') features, such as banks, levees or road scarps, that constantly increase and change in response to the rapid growth of human populations. For these areas, various existing and emerging applications require up-to-date, accurate and sufficiently attributed digital data, but such information is usually lacking, especially when dealing with large-scale applications. More recently, national and local mapping agencies in Europe have been moving towards the generation of digital topographic information that conforms to reality and is highly reliable and up to date. LiDAR Digital Terrain Models (DTMs) covering large areas are readily available to public authorities, and agencies responsible for land management show a greater and more widespread interest in applying such information to develop automated methods for solving geomorphological and hydrological problems. Automatic feature recognition based upon DTMs can offer, for large-scale applications, a quick and accurate method that can help improve topographic databases and overcome some of the problems associated with traditional, field-based geomorphological mapping, such as restrictions on access and constraints of time or cost. Although anthropogenic features such as levees and road scarps are artificial structures that do not belong to what is usually defined as the bare-ground surface, they are implicitly embedded in DTMs. Automatic feature recognition based upon DTMs, therefore, offers a quick and accurate method that does not require additional data and can help improve flood defence asset information, flood modelling and other applications. 
In natural contexts, morphological indicators derived from high-resolution topography have proven reliable for practical applications. The use of statistical operators as thresholds for these geomorphic parameters has, furthermore, shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in north-east Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high-resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.
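    The box-plot thresholding can be sketched on synthetic data (invented values, not the study's DTM): the classic Q3 + 1.5·IQR fence flags the anomalously high responses of banks, levees and road scarps.

```python
import numpy as np

# Synthetic geomorphic parameter: mostly "natural" terrain response, with a
# small group of pixels on an artificial levee/road scarp standing out.
rng = np.random.default_rng(2)
param = rng.normal(0.0, 1.0, 10_000)
param[:50] += 8.0                          # 50 pixels on an anthropogenic feature

# Box-plot (IQR) fence as an objective extraction threshold.
q1, q3 = np.percentile(param, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)
features = param > threshold
print(features.sum())                      # flagged candidate feature pixels
```

    The appeal of the statistical fence is that it adapts to each DTM's own value distribution instead of requiring a hand-tuned absolute threshold.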

  16. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. 
Depending on the time constraints and the availability of a geo-database for buildings, the urban terrain reconstruction procedure outputs semantic models of buildings, trees, and ground. Buildings and ground are textured by means of the available images. This facilitates orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are: pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.

  17. Object Detection from MMS Imagery Using Deep Learning for Generation of Road Orthophotos

    NASA Astrophysics Data System (ADS)

    Li, Y.; Sakamoto, M.; Shinohara, T.; Satoh, T.

    2018-05-01

    In recent years, extensive research has been conducted to automatically generate high-accuracy and high-precision road orthophotos using images and laser point cloud data acquired from a mobile mapping system (MMS). However, it is necessary to mask out non-road objects such as vehicles, bicycles, pedestrians and their shadows in MMS images in order to eliminate erroneous textures from the road orthophoto. Hence, we propose a novel vehicle-and-shadow detection model based on Faster R-CNN for automatically and accurately detecting the regions of vehicles and their shadows in MMS images. The experimental results show that the maximum recall of the proposed model was high, 0.963 (intersection-over-union > 0.7), and that the model could identify the regions of vehicles and their shadows accurately and robustly in MMS images, even when they contain varied vehicles, different shadow directions, and partial occlusions. Furthermore, it was confirmed that the quality of road orthophotos generated using vehicle-and-shadow masks was significantly improved compared to those generated using no masks or vehicle masks only.

  18. Automatic Road Sign Inventory Using Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.

    2016-06-01

    The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information about the road network. Furthermore, time-stamped RGB imagery synchronized with the MMS trajectory is also available. In this paper, a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this, RGB imagery is used: the 3D points are projected into the corresponding images and the RGB data within the bounding box defined by the projected points is analysed. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner and can be applied to improve the maintenance planning of the road network or to feed a Spatial Information System (SIS); thus, road sign information becomes available for use in a Smart City context.

  19. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time-consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp poles and four types of road signs were selected as objects of interest. Ground-truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
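    The per-voxel PCA at the heart of the SigVox descriptor can be sketched as follows (synthetic points; only the significant-eigenvector step, not the octree subdivision or icosahedron mapping):

```python
import numpy as np

# Synthetic voxel content: points scattered along a line (e.g. a pole section).
rng = np.random.default_rng(3)
t = rng.uniform(0.0, 1.0, 100)
pts = np.outer(t, [1.0, 2.0, 2.0]) + rng.normal(0.0, 0.01, (100, 3))

# PCA: eigen-decomposition of the covariance of the centred points.
centered = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(centered.T @ centered / len(pts))
dominant = eigvecs[:, -1]       # significant eigenvector (largest eigenvalue)
print(dominant)                 # roughly proportional to (1, 2, 2), up to sign
```

    SigVox then maps such per-voxel eigenvectors onto the faces of an icosahedron-based sphere, turning the voxel geometry into a discrete, comparable signature.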

  20. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.

  1. Automated Steering Control Design by Visual Feedback Approach —System Identification and Control Experiments with a Radio-Controlled Car—

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yukihiro; Yoshii, Masakazu; Arai, Yasuhito; Adachi, Shuichi

    An advanced safety vehicle (ASV) assists the driver's manipulation to avoid traffic accidents. A variety of research on automatic driving systems is necessary as an element of ASV. Among these, we focus on a visual feedback approach in which the automatic driving system is realized by recognizing the road trajectory using image information. The purpose of this paper is to examine the validity of this approach through experiments using a radio-controlled car. First, a practical image processing algorithm to recognize white lines on the road is proposed. Second, a model of the radio-controlled car is built by system identification experiments. Third, an automatic steering control system is designed based on H∞ control theory. Finally, the effectiveness of the designed control system is examined via traveling experiments.

  2. Fatal accidents at railway level crossings in Great Britain 1946-2009.

    PubMed

    Evans, Andrew W

    2011-09-01

    This paper investigates fatal accidents and fatalities at level crossings in Great Britain over the 64-year period 1946-2009. The numbers of fatal accidents and fatalities per year fell by about 65% in the first half of that period, but have since remained more or less constant at about 11 fatal accidents and 12 fatalities per year. At the same time, other types of railway fatality have fallen, so level crossings represent a growing proportion of the total. Nevertheless, Britain's level crossing safety performance remains good by international standards. The paper classifies level crossings into three types: railway-controlled, automatic, and passive. The safety performance of the three types has been very different. Railway-controlled crossings are the best-performing type, with falling fatal accident rates. Automatic crossings have higher accident rates per crossing than railway-controlled or passive crossings, and their accident rates have not decreased. Passive crossings are by far the most numerous, but many have low usage by road users. Their fatal accident rate has remained remarkably constant over the whole period at about 0.9 fatal accidents per 1000 crossings per year. A principal reason why fatal accidents and fatalities did not fall in the second half of the period as they did in the first is the increase in the number of automatic crossings, which replaced the safer railway-controlled crossings on some public roads. However, it does not follow that this replacement was a mistake, because automatic crossings have advantages over controlled crossings in reducing delays to road users and in not needing staff. Based on the trends for each type of crossing and for pedestrian and non-pedestrian accidents separately, in 2009 a mean of about 5% of fatal accidents were at railway-controlled crossings, 52% at automatic crossings, and 43% at passive crossings. Fatalities had similar proportions. About 60% of fatalities were to pedestrians. A simple comparison of automatic railway level crossings and signalised road intersections found that in 2005 the numbers of fatalities per 1000 crossings or intersections were similar. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction from remote sensing images has been the subject of extensive research. In this paper, we propose an object based road extraction approach for very high resolution satellite images. Based on object based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors, the capability of fuzzy logic to handle the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimizing network related problems. Four VHR optical satellite images acquired by the WorldView-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with those of four state-of-the-art algorithms, and quantification of the robustness of the fuzzy rule set, demonstrate that the proposed approach is both efficient and transferable to other comparable images.
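
    The completeness, correctness, and quality figures quoted above are the standard road-extraction evaluation metrics. Given matched lengths of true-positive, false-positive, and false-negative road, they can be sketched as:

```python
def road_extraction_metrics(tp, fp, fn):
    """Standard evaluation metrics for extracted road networks.

    tp: length of extracted road that matches the reference,
    fp: length of extracted road with no reference counterpart,
    fn: length of reference road that was missed (same units throughout).
    """
    completeness = tp / (tp + fn)   # how much of the reference was found
    correctness = tp / (tp + fp)    # how much of the extraction is right
    quality = tp / (tp + fp + fn)   # combined measure
    return completeness, correctness, quality
```

    For instance, 830 units of matched road, 70 units of false extraction, and 100 units of missed reference give completeness 0.89, correctness 0.92, and quality 0.83, roughly the averages reported above.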

  4. Complex Road Intersection Modelling Based on Low-Frequency GPS Track Data

    NASA Astrophysics Data System (ADS)

    Huang, J.; Deng, M.; Zhang, Y.; Liu, H.

    2017-09-01

    It is widely accepted that digital maps have become an indispensable guide for daily travel. Traditional road network maps are produced in time-consuming and labour-intensive ways, such as digitizing printed maps and extraction from remote sensing images. At present, the large volume of GPS trajectory data collected by floating vehicles makes it feasible to extract highly detailed and up-to-date road network information. Road intersections are often accident-prone areas and are critical to route planning, and the connectivity of a road network is mainly determined by the topological geometry of its intersections. A few studies have paid attention to detecting complex road intersections and mining the attached traffic information (e.g., connectivity, topology and turning restrictions) from massive GPS traces. To the authors' knowledge, recent studies mainly used high frequency (1 s sampling rate) trajectory data to detect crossroad regions or extract rough intersection models. It remains difficult to use low frequency (20-100 s), easily available trajectory data to model complex road intersections geometrically and semantically. This paper thus attempts to construct precise models of complex road intersections using low frequency GPS traces. We propose to first extract the complex road intersections with an LCSS-based (Longest Common Subsequence) trajectory clustering method, then delineate the geometric shapes of the intersections with a K-segment principal curve algorithm, and finally infer the traffic constraint rules inside them.
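
    The LCSS similarity at the heart of the clustering step can be sketched with the standard dynamic program below; the matching tolerance `eps` and the normalization by the shorter trajectory are common conventions, not necessarily the paper's exact choices:

```python
def lcss(traj_a, traj_b, eps):
    """Longest Common Subsequence length of two 2-D trajectories.

    Two points match when both coordinate differences are within eps.
    """
    n, m = len(traj_a), len(traj_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (ax, ay), (bx, by) = traj_a[i - 1], traj_b[j - 1]
            if abs(ax - bx) <= eps and abs(ay - by) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]


def lcss_similarity(traj_a, traj_b, eps):
    """Normalized similarity in [0, 1], usable as a clustering affinity."""
    return lcss(traj_a, traj_b, eps) / min(len(traj_a), len(traj_b))
```

    Trajectories entering or leaving an intersection along the same arm share long common subsequences even at low sampling rates, which is what makes this measure suitable for grouping sparse traces.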

  5. On-road anomaly detection by multimodal sensor analysis and multimedia processing

    NASA Astrophysics Data System (ADS)

    Orhan, Fatih; Eren, P. E.

    2014-03-01

    The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables sensors to be analyzed jointly across modalities. It also provides plugin-based analysis interfaces for developing sensor and image processing based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector was developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
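
    A minimal version of such a detector, assuming only a vertical-axis accelerometer stream and an illustrative threshold and window (not the paper's actual parameters), might look like:

```python
def detect_anomalies(accel_z, threshold=3.0, window=5):
    """Flag sample indices where vertical acceleration deviates from a
    short moving-average baseline by more than `threshold` (m/s^2).

    A pothole or speed bump shows up as a transient spike against the
    near-constant gravity reading.
    """
    anomalies = []
    for i in range(len(accel_z)):
        lo = max(0, i - window)
        baseline = sum(accel_z[lo:i + 1]) / (i + 1 - lo)
        if abs(accel_z[i] - baseline) > threshold:
            anomalies.append(i)
    return anomalies
```

    The flagged indices would then be mapped back to video timestamps to cut out the clip containing the event.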

  6. Automatic detection of zebra crossings from mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management. The algorithm consists of several successive processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images: binarization to separate the painted area from the rest of the pavement, median filtering to remove noisy points, and mathematical morphology to fill the gaps between pixels at the borders of the white marks. Once a road marking is detected, its position is calculated. This information is valuable for the inventory purposes of road managers using Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips containing a total of 30 zebra crossings, showing a completeness of 83%. Non-detected marks mainly result from deterioration of the zebra crossing paint or from occlusions in the point cloud produced by other vehicles on the road.
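
    The binarization and median-filtering steps on the intensity raster can be sketched as follows; the threshold value and 3x3 window are illustrative choices, not the paper's tuned parameters:

```python
import numpy as np


def binarize_intensity(raster, threshold):
    """Separate bright paint pixels (zebra stripes) from darker asphalt."""
    return (raster >= threshold).astype(np.uint8)


def median_filter3(img):
    """3x3 median filter to suppress isolated noisy pixels."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out
```

    An isolated bright return survives binarization but is removed by the median filter, while solid stripe regions pass through unchanged; the cleaned mask then feeds the Hough line detection.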

  7. Autonomous navigation of structured city roads

    NASA Astrophysics Data System (ADS)

    Aubert, Didier; Kluge, Karl C.; Thorpe, Chuck E.

    1991-03-01

    Autonomous road following is a domain which spans a range of complexity, from poorly defined, unmarked dirt roads to well defined, well marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.

  8. Markov random fields and graphs for uncertainty management and symbolic data fusion in an urban scene interpretation

    NASA Astrophysics Data System (ADS)

    Moissinac, Henri; Maitre, Henri; Bloch, Isabelle

    1995-11-01

    An image interpretation method is presented for the automatic processing of aerial pictures of an urban landscape. In order to improve the picture analysis, a priori knowledge extracted from a geographic map is introduced. A coherent graph-based model of the city is built, starting with the road network. A global uncertainty management scheme has been designed in order to evaluate the confidence we can have in the final results. This model and the uncertainty management reflect the hierarchy of the available data and of the interpretation levels. The symbolic relationships linking the different kinds of elements are taken into account while propagating and combining the confidence measures along the interpretation process.

  9. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach to very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant combination of multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality reduction step. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results from WorldView-2 multispectral data alone (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, rail roads, etc.

  10. Sample processor for the automatic extraction of families of compounds from liquid samples and/or homogenized solid samples suspended in a liquid

    NASA Technical Reports Server (NTRS)

    Jahnsen, Vilhelm J. (Inventor); Campen, Jr., Charles F. (Inventor)

    1980-01-01

    A sample processor and method for the automatic extraction of families of compounds, known as extracts, from liquid and/or homogenized solid samples are disclosed. The sample processor includes a tube support structure which supports a plurality of extraction tubes, each containing a sample from which families of compounds are to be extracted. The support structure moves automatically with respect to one or more extraction stations, so that as each tube arrives at a station a solvent system, consisting of a solvent and reagents, is introduced into it. An extract is thereby automatically obtained from each tube. The sample processor includes an arrangement for directing the different extracts from each tube to different containers, or for directing similar extracts from different tubes to the same utilization device.

  11. Integrated use of spatial and semantic relationships for extracting road networks from floating car data

    NASA Astrophysics Data System (ADS)

    Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue

    2012-10-01

    The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long update cycle, and their cost remains high. As GPS and wireless communication technology mature and their costs decrease, floating car technology has come into use in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both the spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results reveal that it can accurately derive geometric information of roads in various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
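
    The spatial part of the point classification reduces to a point-to-segment distance test against the candidate road network. A planar sketch, with a hypothetical tolerance value, could be:

```python
import math


def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (planar coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def classify_point(p, segment, on_road_tol=15.0):
    """Label a GPS fix relative to a candidate road segment: points within
    the tolerance confirm the existing road geometry, others suggest a new
    or changed road that triggers the adding/modifying process."""
    a, b = segment
    return 'match' if point_segment_distance(p, a, b) <= on_road_tol else 'new'
```

    In practice the semantic attributes of the trajectory (heading, speed, vehicle type) would be combined with this spatial test before a point is merged into the network.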

  12. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    NASA Astrophysics Data System (ADS)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with a significant increase in the number of cameras, has dictated the need for traffic surveillance systems. Such systems can take over the burdensome tasks previously performed by human operators in traffic monitoring centres. The main technique proposed in this paper concentrates on multiple vehicle detection and segmentation, focusing on monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from a heavy traffic scene using optical flow estimation alongside a blob analysis technique to detect the moving vehicles. Prior to segmentation, the blob analysis computes the area of the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
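
    The blob-analysis step that turns a binary motion mask (e.g. from thresholded optical flow) into per-vehicle bounding boxes can be sketched with plain connected-component labelling; the minimum-area filter is an illustrative choice:

```python
def blob_bounding_boxes(mask, min_area=4):
    """Label 4-connected components in a binary motion mask and return a
    (x_min, y_min, x_max, y_max) bounding box for each blob whose pixel
    count reaches min_area (small blobs are treated as noise)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pix) >= min_area:
                    ys = [p[0] for p in pix]
                    xs = [p[1] for p in pix]
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

    Each returned box corresponds to one candidate vehicle, which the surveillance system can then draw on the CCTV frame or pass to a tracker.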

  13. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
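
    As a toy one-dimensional reduction of the BKD idea (Gaussian kernel density estimation followed by binarization), a descriptor bit string could be formed like this; the grid, bandwidth, and neighbour-comparison rule here are illustrative only, not the paper's descriptor:

```python
import math


def binary_kernel_descriptor(values, grid, bandwidth=0.1):
    """Estimate a Gaussian kernel density of point attributes over a grid,
    then binarize it into a compact bit string.

    Bit i is 1 where the density increases between adjacent grid cells,
    so the bits encode the shape of the distribution, which is what makes
    the descriptor robust to overall point-density changes.
    """
    norm = len(values) * bandwidth * math.sqrt(2.0 * math.pi)
    dens = [sum(math.exp(-0.5 * ((g - v) / bandwidth) ** 2) for v in values)
            / norm for g in grid]
    return [1 if dens[i + 1] > dens[i] else 0 for i in range(len(dens) - 1)]
```

    A cluster of height or intensity values near the middle of the grid yields rising bits followed by falling bits, a pattern a random forest can learn to associate with curb or marking geometry.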

  14. Pixel color feature enhancement for road signs detection

    NASA Astrophysics Data System (ADS)

    Zhang, Qieshi; Kamata, Sei-ichiro

    2010-02-01

    Road signs play an important role in our daily life: they guide drivers' attention to a variety of road conditions and cautions, providing important visual information that helps drivers operate their vehicles in a manner that enhances traffic safety. The occurrence of some accidents can be reduced by an automatic road sign recognition system that alerts drivers. This research attempts to develop a warning system that alerts drivers to important road signs early enough to prevent road accidents. To this end, a non-linear, pixel-wise weighted color enhancement method is presented. Owing to the advantages of the proposed method, different road signs can be detected from videos effectively. With suitable coefficients and operations, the experimental results prove that the proposed method is robust, accurate and powerful for road sign detection.
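
    As a sketch of what per-pixel non-linear color weighting means in practice, a red-sign response could be computed as below. The formula and exponent are invented for illustration and are not the paper's coefficients:

```python
def enhance_red(r, g, b):
    """Non-linear per-pixel weight emphasising sign-like red.

    The response is high only when the red channel dominates both green
    and blue; normalising by pixel brightness reduces sensitivity to
    illumination, and the square root boosts weak but genuine responses.
    """
    s = r + g + b
    if s == 0:
        return 0.0
    return max(0.0, (r - max(g, b)) / s) ** 0.5
```

    Applying such a weight to every pixel turns a color frame into a response map in which red sign regions stand out for subsequent shape detection; analogous weights would be defined for blue or yellow sign families.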

  15. Identification of platinum nanoparticles in road dust leachate by single particle inductively coupled plasma-mass spectrometry.

    PubMed

    Folens, Karel; Van Acker, Thibaut; Bolea-Fernandez, Eduardo; Cornelis, Geert; Vanhaecke, Frank; Du Laing, Gijs; Rauch, Sebastien

    2018-02-15

    Elevated platinum (Pt) concentrations are found in road dust as a result of emissions from catalytic converters in vehicles. This study investigates the occurrence of Pt in road dust collected in Ghent (Belgium) and Gothenburg (Sweden). Total Pt contents, determined by tandem ICP-mass spectrometry (ICP-MS/MS), were in the range of 5 to 79 ng g-1, comparable to the Pt content in road dust of other medium-sized cities. Further sample characterization was performed by single particle (sp) ICP-MS following an ultrasonic extraction procedure using stormwater runoff for leaching. The method was found to be suitable for the characterization of Pt nanoparticles in road dust leachates. The extraction was optimized using road dust reference material BCR-723, for which an extraction efficiency of 2.7% was obtained by applying 144 kJ of ultrasonic energy. Using this method, between 0.2% and 18% of the Pt present was extracted from road dust samples. spICP-MS analysis revealed that Pt in the leachate is entirely present as nanoparticles of sizes between 9 and 21 nm. Although representing only a minor fraction of the total content in road dust, the nanoparticulate Pt leachate is most susceptible to biological uptake and hence most relevant in terms of bioavailability. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.

    DTIC Science & Technology

    1984-07-01

    the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to ...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system

  17. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  18. [Road Extraction in Remote Sensing Images Based on Spectral and Edge Analysis].

    PubMed

    Zhao, Wen-zhi; Luo, Li-qun; Guo, Zhou; Yue, Jun; Yu, Xue-ying; Liu, Hui; Wei, Jing

    2015-10-01

    Roads are typical man-made objects in urban areas. Road extraction from high-resolution images has important applications in urban planning and transportation development. However, due to confusable spectral characteristics, it is difficult to distinguish roads from other objects using only traditional classification methods that depend mainly on spectral information. Edges are an important feature for the identification of linear objects (e.g., roads), and the distribution patterns of edges vary greatly among different objects, so it is crucial to merge edge statistical information with spectral information. In this study, a new method that combines spectral information and edge statistical features is proposed. First, edge detection is conducted using a self-adaptive mean-shift algorithm on the panchromatic band, which greatly reduces pseudo-edges and noise effects. Then, edge statistical features are obtained from an edge statistical model that measures the length and angle distribution of the edges. Finally, by integrating the spectral and edge statistical features, an SVM algorithm is used to classify the image, and roads are ultimately extracted. A series of experiments shows that the overall accuracy of the proposed method is 93%, compared with only 78% for the traditional method. The results demonstrate that the proposed method is efficient and valuable for road extraction, especially on high-resolution images.
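
    The length-and-angle distribution of edges can be approximated by a length-weighted orientation histogram; a minimal sketch (bin count illustrative, not the paper's model) is:

```python
import math


def edge_angle_histogram(edges, n_bins=8):
    """Length-weighted orientation histogram of edge segments.

    edges: iterable of (x1, y1, x2, y2) tuples. Road regions concentrate
    their edge length in one or two dominant directions, while clutter
    spreads its edge length across many bins.
    """
    hist = [0.0] * n_bins
    for x1, y1, x2, y2 in edges:
        length = math.hypot(x2 - x1, y2 - y1)
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected edge
        hist[min(int(angle / math.pi * n_bins), n_bins - 1)] += length
    return hist
```

    Per-region histograms of this kind can be concatenated with the spectral bands to form the combined feature vector that the SVM classifies.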

  19. Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques

    NASA Technical Reports Server (NTRS)

    Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val

    2001-01-01

    Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential for ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly in areas of rapid expansion. Currently, the US Department of Transportation (USDOT) expends great effort on field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will assist the USDOT in assessing remote sensing as a new resource in infrastructure studies.

  20. Optimization-based method for automated road network extraction

    DOT National Transportation Integrated Search

    2001-09-18

    Automated road information extraction has significant applicability in transportation. : It provides a means for creating, maintaining, and updating transportation network databases that : are needed for purposes ranging from traffic management to au...

  1. Selected Aspects of the eCall Emergency Notification System

    NASA Astrophysics Data System (ADS)

    Kaminski, Tomasz; Nowacki, Gabriel; Mitraszewska, Izabella; Niezgoda, Michał; Kruszewski, Mikołaj; Kaminska, Ewa; Filipek, Przemysław

    2012-02-01

    The article describes problems associated with road collision detection for the purpose of automatic emergency calling. At the moment a collision is detected, the eCall device installed in the vehicle automatically contacts the Emergency Notification Centre and sends a set of essential information on the vehicle and the place of the accident. Information about the deployment of the airbags will not be used to activate the alarm, because connecting the eCall device might interfere with the vehicle's safety systems. It is therefore necessary to develop a method enabling detection of a road collision, similar to the one used in airbag systems and based on the signals available from the acceleration sensors.

  2. Landmark-aided localization for air vehicles using learned object detectors

    NASA Astrophysics Data System (ADS)

    DeAngelo, Mark Patrick

    This research presents two methods to localize an aircraft without GPS using fixed landmarks observed from an optical sensor. Onboard absolute localization is useful for vehicle navigation free from an external network. The objective is to achieve practical navigation performance using available autopilot hardware and a downward pointing camera. The first method uses computer vision cascade object detectors, which are trained to detect predetermined, distinct landmarks prior to a flight; it also concurrently explores aircraft localization using roads between landmark updates. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement updates when landmarks are detected. The sensor measurements and the landmark coordinates extracted from the aircraft's camera images are combined in an unscented Kalman filter to obtain an estimate of the aircraft's position and the wind velocities. The second method uses computer vision object detectors to detect abundant generic landmarks, referred to as buildings, fields, trees, and road intersections, from aerial perspectives. Various landmark attributes and spatial relationships to other landmarks are used to help associate observed landmarks with reference landmarks. The computer vision algorithms automatically extract reference landmarks from maps, which are processed offline before a flight. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement corrections by processing aerial photos with similar generic landmark detection techniques. This method likewise combines the sensor measurements and landmark coordinates in an unscented Kalman filter to estimate the aircraft's position and the wind velocities.

  3. An Approach to Extract Moving Objects from Mls Data Using a Volumetric Background Representation

    NASA Astrophysics Data System (ADS)

    Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.

    2017-05-01

    Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
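
    Probabilistic volumetric representations of this kind typically keep a clamped log-odds occupancy value per voxel (OctoMap-style); the update constants below are illustrative, not those used with the HDL-64E data:

```python
def update_log_odds(l_prev, hit,
                    l_hit=0.85, l_miss=-0.4, l_min=-2.0, l_max=3.5):
    """One per-voxel log-odds occupancy update with clamping.

    A voxel belonging to static background accumulates hits over many
    scans and saturates near l_max; a voxel that a moving object only
    occupied transiently is traversed by later rays (misses) and drifts
    toward 'free', which is how mobile measurements get separated out.
    """
    l = l_prev + (l_hit if hit else l_miss)
    return max(l_min, min(l_max, l))


def is_static(log_odds, thresh=0.0):
    """Classify a voxel as static background once its log-odds exceed 0."""
    return log_odds > thresh
```

    Measurements falling in voxels that end up classified as free or transient can then be labelled as belonging to moving objects and removed before change detection.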

  4. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically extracts records from HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  5. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should achieve both accuracy and precision, which in turn requires accurate and precise extraction of signs in every individual view from calibrated multi-view images. This paper first presents an automatic pipeline for identifying and extracting the silhouettes of signs in each individual image. A multi-view constrained 3D reconstruction algorithm then provides an optimal 3D silhouette for the detected signs. The first step, detection, applies color-based segmentation to generate regions of interest (ROIs) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points; a ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove perspective distortion and then matched against a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouettes in the image plane are represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach that takes into account epipolar geometry as well as category similarity. Hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all hypotheses of the same group are merged into a unique 3D road sign by a multi-view algorithm that integrates a priori knowledge about the 3D shape of road signs as constraints. 
The algorithm was assessed on real and synthetic images and reached an average accuracy of 3.5 cm in position and 4.5° in orientation.

  6. A Study on the Influence of Speed on Road Roughness Sensing: The SmartRoadSense Case †

    PubMed Central

    Alessandroni, Giacomo; Carini, Alberto; Lattanzi, Emanuele; Freschi, Valerio; Bogliolo, Alessandro

    2017-01-01

    SmartRoadSense is a crowdsensing project aimed at monitoring the conditions of the road surface. Using the sensors of a smartphone, SmartRoadSense monitors the vertical accelerations inside a vehicle traveling the road and extracts a roughness index conveying information about the road conditions. The roughness index and the smartphone GPS data are periodically sent to a central server where they are processed, associated with the specific road, and aggregated with data measured by other smartphones. This paper studies how the smartphone vertical accelerations and the roughness index are related to the vehicle speed. It is shown that the dependence can be locally approximated with a gamma (power) law. Extensive experimental results using data extracted from the SmartRoadSense database confirm the gamma law relationship between the roughness index and the vehicle speed. The gamma law is then used to improve the SmartRoadSense data aggregation by accounting for the effect of vehicle speed. PMID:28178224
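A power-law dependence of the kind described, r ≈ a·v^γ, can be estimated by ordinary least squares in log-log space, where it becomes a straight line. This is a generic illustration of the fitting idea, not the project's actual aggregation code:

```python
import math

def fit_power_law(speeds, roughness):
    """Fit r = a * v**gamma by least squares on log r = log a + gamma * log v."""
    xs = [math.log(v) for v in speeds]
    ys = [math.log(r) for r in roughness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent gamma
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - gamma * mx)
    return a, gamma
```

On noise-free synthetic data generated with a = 2 and γ = 1.5, the fit recovers both parameters to machine precision.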

  7. System for definition of the central-chest vasculature

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2009-02-01

    Accurate definition of the central-chest vasculature from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. For instance, the aorta and pulmonary artery help in automatic definition of the Mountain lymph-node stations for lung-cancer staging. This work presents a system for defining major vascular structures in the central chest. The system provides automatic methods for extracting the aorta and pulmonary artery and semi-automatic methods for extracting the other major central chest arteries/veins, such as the superior vena cava and azygos vein. Automatic aorta and pulmonary artery extraction are performed by model fitting and selection. The system also extracts certain vascular structure information to validate outputs. A semi-automatic method extracts vasculature by finding the medial axes between provided important sites. Results of the system are applied to lymph-node station definition and guidance of bronchoscopic biopsy.

  8. AUTOMOTIVE DIESEL MAINTENANCE 2. UNIT VII, AUTOMATIC TRANSMISSIONS--ALLISON, TORQUMATIC SERIES 5960 AND 6060 (PART I).

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 25-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF THE OPERATION AND MAINTENANCE OF SPECIFIC MODELS OF AUTOMATIC TRANSMISSIONS USED ON DIESEL POWERED VEHICLES. TOPICS ARE (1) GENERAL SPECIFICATION DATA, (2) OPTIONS FOR VARIOUS APPLICATIONS, (3) ROAD TEST INSTRUCTIONS, (4) IDENTIFICATION AND SPECIFICATION DATA, (5) ALLISON…

  9. Automatic, time-interval traffic counts for recreation area management planning

    Treesearch

    D. L. Erickson; C. J. Liu; H. K. Cordell

    1980-01-01

    Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...

  10. 19. DETAIL OF STAMP BATTERY AUTOMATIC FEEDER, LOOKING EAST. THIS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. DETAIL OF STAMP BATTERY AUTOMATIC FEEDER, LOOKING EAST. THIS IS THE MIDDLE OF THREE FEEDERS, ONE FOR EACH STAMP BATTERY. THE CHUTE (UPPER RIGHT) INTRODUCED THE CRUSHED ORE FROM THE ORE BIN. FLOW WAS CONTROLLED BY A SLIDING DOOR ON THE UPPER LEVEL. - Skidoo Mine, Park Route 38 (Skidoo Road), Death Valley Junction, Inyo County, CA

  11. Target recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    Feature extraction is one of the important parts of object target recognition, and it can be classified into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity carries a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which is beneficial to object target recognition.
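A single CNN feature map is a learned kernel slid over the input. The toy `conv2d` below (an illustration of the basic operation, not the paper's network) shows what one convolutional layer computes before any nonlinearity:

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: one feature map of a CNN layer.

    `image` and `kernel` are nested lists of numbers; the output shrinks
    by kernel size minus one in each dimension.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

A horizontal difference kernel `[[1, -1]]` responds exactly at the vertical edge of a step image, which is the kind of low-level feature a first CNN layer typically learns.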

  12. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and non-complex terrain. However, 3D models automatically generated from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, in many cases they suffer from undulated road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, rooftops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated dataset, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  13. Road Development and the Geography of Hunting by an Amazonian Indigenous Group: Consequences for Wildlife Conservation

    PubMed Central

    Espinosa, Santiago; Branch, Lyn C.; Cueva, Rubén

    2014-01-01

    Protected areas are essential for conservation of wildlife populations. However, in the tropics there are two important factors that may interact to threaten this objective: 1) road development associated with large-scale resource extraction near or within protected areas; and 2) historical occupancy by traditional or indigenous groups that depend on wildlife for their survival. To manage wildlife populations in the tropics, it is critical to understand the effects of roads on the spatial extent of hunting and how wildlife is used. A geographical analysis can help us answer questions such as: How do roads affect spatial extent of hunting? How does market vicinity relate to local consumption and trade of bushmeat? How does vicinity to markets influence choice of game? A geographical analysis also can help evaluate the consequences of increased accessibility in landscapes that function as source-sink systems. We applied spatial analyses to evaluate the effects of increased landscape and market accessibility by road development on spatial extent of harvested areas and wildlife use by indigenous hunters. Our study was conducted in Yasuní Biosphere Reserve, Ecuador, which is impacted by road development for oil extraction, and inhabited by the Waorani indigenous group. Hunting activities were self-reported for 12–14 months and each kill was georeferenced. Presence of roads was associated with a two-fold increase of the extraction area. Rates of bushmeat extraction and trade were higher closer to markets than further away. Hunters located closer to markets concentrated their effort on large-bodied species. Our results clearly demonstrate that placing roads within protected areas can seriously reduce their capacity to sustain wildlife populations and potentially threaten livelihoods of indigenous groups who depend on these resources for their survival. 
Our results critically inform current policy debates regarding resource extraction and road building near or within protected areas. PMID:25489954

  14. Road development and the geography of hunting by an Amazonian indigenous group: consequences for wildlife conservation.

    PubMed

    Espinosa, Santiago; Branch, Lyn C; Cueva, Rubén

    2014-01-01

    Protected areas are essential for conservation of wildlife populations. However, in the tropics there are two important factors that may interact to threaten this objective: 1) road development associated with large-scale resource extraction near or within protected areas; and 2) historical occupancy by traditional or indigenous groups that depend on wildlife for their survival. To manage wildlife populations in the tropics, it is critical to understand the effects of roads on the spatial extent of hunting and how wildlife is used. A geographical analysis can help us answer questions such as: How do roads affect spatial extent of hunting? How does market vicinity relate to local consumption and trade of bushmeat? How does vicinity to markets influence choice of game? A geographical analysis also can help evaluate the consequences of increased accessibility in landscapes that function as source-sink systems. We applied spatial analyses to evaluate the effects of increased landscape and market accessibility by road development on spatial extent of harvested areas and wildlife use by indigenous hunters. Our study was conducted in Yasuní Biosphere Reserve, Ecuador, which is impacted by road development for oil extraction, and inhabited by the Waorani indigenous group. Hunting activities were self-reported for 12-14 months and each kill was georeferenced. Presence of roads was associated with a two-fold increase of the extraction area. Rates of bushmeat extraction and trade were higher closer to markets than further away. Hunters located closer to markets concentrated their effort on large-bodied species. Our results clearly demonstrate that placing roads within protected areas can seriously reduce their capacity to sustain wildlife populations and potentially threaten livelihoods of indigenous groups who depend on these resources for their survival. 
Our results critically inform current policy debates regarding resource extraction and road building near or within protected areas.

  15. Efficient road geometry identification from digital vector data

    NASA Astrophysics Data System (ADS)

    Andrášik, Richard; Bíl, Michal

    2016-07-01

    A new method for the automatic identification of road geometry from digital vector data is presented. The method is capable of efficiently identifying circular curves with their radii and tangents (straight sections). The average error of identification ranged from 0.01 to 1.30 % for precisely drawn data and 4.81 % in the case of actual road data with noise in the location of vertices. The results demonstrate that the proposed method is faster and more precise than commonly used techniques. This approach can be used by road administrators to complete their databases with information concerning the geometry of roads. It can also be utilized by transport engineers or traffic safety analysts to investigate the possible dependence of traffic accidents on road geometries. The method presented is applicable as well to railroads and rivers or other line features.
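One building block of such geometry identification is the circumradius of three consecutive polyline vertices: near-constant finite radii along a run of vertices indicate a circular curve, while very large radii indicate a tangent (straight section). A sketch of this generic geometric test, not the authors' exact algorithm:

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three 2-D polyline vertices.

    Returns infinity for collinear points, i.e. a straight section.
    """
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Cross product gives twice the signed area of the triangle
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if abs(cross) < 1e-12:
        return float("inf")
    return a * b * c / (2 * abs(cross))
```

Three points on the unit circle give radius 1; three collinear points give infinity, flagging a tangent.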

  16. Multi-Feature Based Information Extraction of Urban Green Space Along Road

    NASA Astrophysics Data System (ADS)

    Zhao, H. H.; Guan, H. Y.

    2018-04-01

    Green space along roads in a QuickBird image was studied in this paper based on multi-feature marks in the frequency domain. The magnitude spectrum of green space along roads was analysed, and recognition marks for the tonal feature, the contour feature and the road were built up from the distribution of frequency channels. Gabor filters in the frequency domain were used to detect the features based on the recognition marks. The detected features were combined into multi-feature marks, and watershed-based image segmentation was conducted to complete the extraction of green space along roads. The segmentation results were evaluated by F-measure, with P = 0.7605, R = 0.7639, F = 0.7622.
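The reported F value follows directly from the stated precision and recall; the balanced F-measure is their harmonic mean:

```python
def f_measure(precision, recall, beta=1.0):
    """F-beta score; beta=1 gives the harmonic mean of precision and recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

Evaluating `f_measure(0.7605, 0.7639)` reproduces the reported value of 0.7622 (to four decimals).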

  17. Automatic Keyword Extraction from Individual Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
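The method described is RAKE (Rapid Automatic Keyword Extraction). Its core scoring can be sketched in a few lines as a simplified re-implementation (not the authors' code): candidate phrases are maximal runs of non-stopwords, and each word is scored by its degree-to-frequency ratio:

```python
import re

def rake(text, stopwords):
    """Minimal RAKE-style keyword scoring.

    Splits on stopwords/punctuation into candidate phrases, scores each
    word by degree/frequency, and sums word scores per phrase.
    """
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, cur = [], []
    for w in words:
        if w in stopwords:
            if cur:
                phrases.append(cur)
            cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    freq, degree = {}, {}
    for ph in phrases:
        for w in ph:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(ph) - 1   # co-occurrences
    # degree(w) + freq(w) counts the word itself, as in the original method
    score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}
    return sorted(((sum(score[w] for w in ph), " ".join(ph)) for ph in phrases),
                  reverse=True)
```

Multi-word phrases whose words co-occur often rise to the top, which is what makes the method effective on technical abstracts.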

  18. Highway extraction from high resolution aerial photography using a geometric active contour model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model. This novel model seeks to minimize an objective function that transforms a problem of propagation of regular curves into an optimization problem. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can be used to deal with topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process. An error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint into the original GAC model, thereby, giving this proposed method the ability of integrating both boundary and region-based information during the curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model. It effectively saves processing time and computations. Second, a seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. 
A seed point is usually placed at an end node of highway segments close to the boundary of the image or at a position where possible blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic; in the process, vehicles on the extracted highway were detected with an 83% success rate.

  19. Road detection in SAR images using a tensor voting algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian

    2007-11-01

    In this paper, the problem of detecting road networks in Synthetic Aperture Radar (SAR) images is addressed. Most previous methods extract roads by detecting lines and reconstructing the network. Traditional algorithms used in the reconstruction process, such as MRFs, genetic algorithms and level sets, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, which is related to scale. The algorithm is verified to be effective when applied to road extraction from real Radarsat images.
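Tensor voting encodes each oriented token as a "stick" tensor, the outer product of its unit tangent; summing the votes at a site and taking the eigenvalue gap λ1 − λ2 measures how strongly a single line direction dominates there. A minimal 2-D sketch of that arithmetic (without the spatial decay kernel a full implementation would include):

```python
import math

def stick_saliency(orientations):
    """Line saliency (lambda1 - lambda2) of summed 2-D stick tensors.

    Each orientation theta contributes the tensor t t^T with
    t = (cos theta, sin theta); eigenvalues of the 2x2 symmetric sum
    are computed in closed form.
    """
    txx = tyy = txy = 0.0
    for theta in orientations:
        tx, ty = math.cos(theta), math.sin(theta)
        txx += tx * tx
        tyy += ty * ty
        txy += tx * ty
    tr = txx + tyy
    det = txx * tyy - txy * txy
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return disc   # lambda1 - lambda2
```

Three aligned tokens yield saliency 3 (a strong line), while two orthogonal tokens cancel to saliency 0 (no preferred direction), which is the discriminative signal road detection exploits.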

  20. Automatic Recognition of Road Signs

    NASA Astrophysics Data System (ADS)

    Inoue, Yasuo; Kohashi, Yuuichirou; Ishikawa, Naoto; Nakajima, Masato

    2002-11-01

    The increase in traffic accidents is becoming a serious social problem with the recent rapid growth in traffic. In many cases the driver's carelessness is the primary factor in traffic accidents, and driver assistance systems are in demand to support driver safety. In this research, we propose a new method for automatic detection and recognition of road signs by image processing. The purpose is to prevent accidents caused by the driver's carelessness and to alert the driver when a traffic regulation is violated. High accuracy and efficient sign detection are achieved by removing unnecessary information other than road signs from an image and detecting road signs using shape features. First, colors that are not used in road signs are removed from the image. Next, edges other than circular and triangular ones are removed to select sign shapes. In the recognition process, normalized cross-correlation is computed on the two-dimensional differentiation pattern of a sign, realizing accurate and efficient road-sign detection. Moreover, real-time operation in software was achieved by holding down the computational cost while maintaining highly precise sign detection and recognition; specifically, processing takes about 0.1 s/frame on a general-purpose PC (CPU: Pentium 4, 1.7 GHz). In-vehicle experiments confirmed that the system runs in real time and that detection and recognition of signs are performed correctly.
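The recognition step relies on normalized cross-correlation (NCC), which is invariant to affine brightness and contrast changes between a candidate region and a reference template. A small generic NCC sketch (not the authors' implementation):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches.

    Returns +1 for a perfect match up to brightness/contrast, -1 for a
    perfectly inverted match, 0 for flat (zero-variance) input.
    """
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) *
                    sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0
```

A patch and its brightened, contrast-stretched copy correlate at exactly +1, which is why NCC is preferred over raw template differencing under varying lighting.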

  1. Adaptive road crack detection system by pavement classification.

    PubMed

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out both to smooth the texture and to enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white painting, which usually generate false positives in crack detection. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check; seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach is therefore proposed, by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear in Spanish roads; the optimal feature vector includes different texture-based features, and the parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that introducing this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.
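The seed-linking step, "computing the paths with the lowest cost", is essentially a shortest-path search over pixel costs, where dark crack pixels carry low cost. A sketch using Dijkstra's algorithm on a 4-connected grid (illustrative only; the paper additionally enforces symmetry restrictions, omitted here):

```python
import heapq

def lowest_cost_path(cost, start, goal):
    """Link two crack seeds by the 4-connected path of minimal summed cost."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a cost grid with a low-cost "crack" running along the top row and right column, the returned path hugs the crack rather than cutting across the expensive interior.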

  2. Adaptive Road Crack Detection System by Pavement Classification

    PubMed Central

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out both to smooth the texture and to enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white painting, which usually generate false positives in crack detection. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check; seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach is therefore proposed, by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear in Spanish roads; the optimal feature vector includes different texture-based features, and the parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that introducing this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement. PMID:22163717

  3. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studies of urban expansion. This paper studies the automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes the automatic extraction of a city's main built-up area, which greatly reduces manual effort. First, construction land is extracted with the object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; (3) rule-set-based extraction of construction land. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI), and the mean of the blue band (Mean B); through the combination of these parameters, construction land information can be extracted. Based on the adaptability, distance and area of the object domain, the urban built-up area can then be quickly and accurately delineated from the construction land information, without depending on other data or expert knowledge, achieving fully automatic extraction. Beijing was used as the experimental area for these methods; the results show that the built-up area was extracted automatically, with a boundary accuracy of 2359.65 m that meets the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes in a city's main built-up area.
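Two of the rule-set parameters, NDVI and the red-band mean, can be illustrated with a toy per-pixel rule. The thresholds below are invented for illustration; the paper does not publish its rule-set values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def is_construction_land(pixel, ndvi_max=0.2, mean_r_min=60):
    """Toy rule in the spirit of the paper's rule set: low NDVI
    (non-vegetation) plus a bright red band suggests built-up cover.
    Both thresholds are assumed, not the paper's values."""
    return ndvi(pixel["nir"], pixel["red"]) < ndvi_max and pixel["red"] >= mean_r_min
```

A vegetated pixel (high NIR relative to red) is rejected by the NDVI test, while a bright bare/built pixel passes both conditions.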

  4. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis. Final Technical Report, December… Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.

  5. ANNUAL REPORT-AUTOMATIC INDEXING AND ABSTRACTING.

    ERIC Educational Resources Information Center

    Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.

    THE INVESTIGATION IS CONCERNED WITH THE DEVELOPMENT OF AUTOMATIC INDEXING, ABSTRACTING, AND EXTRACTING SYSTEMS. BASIC INVESTIGATIONS IN ENGLISH MORPHOLOGY, PHONETICS, AND SYNTAX ARE PURSUED AS NECESSARY MEANS TO THIS END. IN THE FIRST SECTION THE THEORY AND DESIGN OF THE "SENTENCE DICTIONARY" EXPERIMENT IN AUTOMATIC EXTRACTION IS OUTLINED. SOME OF…

  6. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, i.e. algorithms that obtain accurate results with the least possible human intervention. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable both to laser scanner and to stereo vision 3D data, since it is independent of the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. 
That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
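The rasterization idea can be sketched directly: project the cloud onto the XY plane and flag cells whose elevation range matches a typical curb-height jump. A simplified sketch with a hypothetical `curb_candidate_cells` helper (the authors additionally apply morphological operations and an unsupervised edge classification, omitted here):

```python
def curb_candidate_cells(points, cell=0.2, jump=(0.05, 0.30)):
    """Flag XY grid cells whose elevation range looks like a curb step.

    `points` are (x, y, z) tuples; `cell` is the raster resolution and
    `jump` the assumed plausible curb-height range, both in metres.
    """
    grid = {}
    for x, y, z in points:
        k = (int(x // cell), int(y // cell))
        zmin, zmax = grid.get(k, (z, z))
        grid[k] = (min(zmin, z), max(zmax, z))
    lo, hi = jump
    return {k for k, (zmin, zmax) in grid.items() if lo <= zmax - zmin <= hi}
```

A flat road cell (millimetre-scale range) is ignored, while a cell straddling a 15 cm step is flagged as a curb candidate.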

  7. Automatic River Network Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is an accurate and updated river network, extracted as automatically as possible. For this purpose, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big-data environment of more than 160,000 LiDAR data files, with the infrastructure to store (up to 40 TB of results and intermediate files) and process them using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management were also important. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction and its advantages over traditional vector extraction systems.
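The hydrological criterion mentioned, flow accumulation, can be illustrated with a minimal D8-style scheme on a toy DEM: every cell drains to its steepest lower neighbour, and cells with high accumulated drainage trace the river network. A textbook sketch, not IGN-ES's production code:

```python
def flow_accumulation(dem):
    """D8-style flow accumulation on a small DEM (nested lists of heights).

    Each cell drains to its steepest lower 8-neighbour; processing cells
    from highest to lowest lets upstream totals propagate downstream.
    """
    h, w = len(dem), len(dem[0])
    cells = sorted(((dem[r][c], r, c) for r in range(h) for c in range(w)),
                   reverse=True)
    acc = [[1] * w for _ in range(h)]          # each cell contributes itself
    for z, r, c in cells:
        best, drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    if z - dem[nr][nc] > drop:
                        best, drop = (nr, nc), z - dem[nr][nc]
        if best:                               # pits/flats drain nowhere
            acc[best[0]][best[1]] += acc[r][c]
    return acc
```

On a simple one-row slope the accumulation grows monotonically downhill, which is the signal thresholded to delineate channels.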

  8. Paver automation for road surfacing

    NASA Astrophysics Data System (ADS)

    Tihonov, A.; Velichkin, V.

    2017-10-01

    The paper discusses factors that bear on the quality of motor road pavement as access roads and highways are built and used. A block diagram is proposed for organizing the elements of an automatic control system for the asphalt paver's mechanisms; the system is based on an onboard microprocessor controller that maintains a preset elevation of the finishing plate, and its operating principle is described. The paper identifies the primary transducers used to control the finishing plate elevation. A new method is described for controlling the machine's straight-line movement with the GLONASS Satellite Positioning System (SPS) during operation.

  9. Oil industry and road traffic fatalities in contemporary Colombia.

    PubMed

    Tasciotti, Luca; Alejo, Didier; Romero, Andrés

    2016-12-01

    This paper studies the effects that oil extraction activities in Colombia have on the number of people killed or injured in road-related accidents. Starting in 2004, the increasing exploitation of oil wells in some Colombian departments has worsened traffic conditions owing to the increased presence of trucks transporting crude oil from the wells to the refineries; this phenomenon has not been accompanied by an improvement in the road system, with dramatic consequences for road safety. The descriptive and empirical analysis presented here focuses on the period 2004-2011; descriptive statistics indicate a positive relationship between the presence of oil extraction activities and the number of people killed or injured. Panel regressions for the period 2004-2011 confirm that, among other factors, the presence of oil-extraction activities played a positive and statistically significant role in increasing the number of people killed or injured.

  10. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the initial points required by the algorithm. The initial points are analyzed to determine whether more points need to be added, a process based on image information. The algorithm obtains not only the road centerline but also the road sides. An initial simple model is deformed using region growing techniques to obtain a rough road approximation, and this model is then refined by region competition. The approach delivers the simplest output vector information, fully recovering the road details as they appear in the image, without performing any kind of symbolization. We thus refine a general road model using a reliable method to detect transitions between regions, in order to obtain information for feeding large-scale Geographic Information Systems.
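    The rough road approximation by region growing described above can be illustrated with a minimal intensity-based grower (the region competition refinement is omitted; the seed position, tolerance and toy image are our assumptions, not the authors' implementation):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-neighbours whose
    intensity stays within tol of the seed intensity."""
    rows, cols = image.shape
    seed_val = image[seed]
    mask = np.zeros_like(image, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]
                    and abs(int(image[nr, nc]) - int(seed_val)) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy aerial patch: a bright road strip (~100) over dark ground (~30).
image = np.array([[ 30,  30,  30,  30],
                  [100, 102,  99, 101],
                  [ 30,  31,  29,  30]])
road = region_grow(image, seed=(1, 0), tol=10)
```

    Region competition would then move the boundary of `road` by trading pixels between competing regions until a statistical criterion stabilizes.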

  11. Method and algorithm of automatic estimation of road surface type for variable damping control

    NASA Astrophysics Data System (ADS)

    Dąbrowski, K.; Ślaski, G.

    2016-09-01

    In this paper the authors present an idea for road surface estimation (recognition) based on statistical analysis of suspension dynamic response signals. For the preliminary analysis the cumulative distribution function (CDF) was used, leading to the observation that, for the same percentage of samples, different road surfaces produce response values within different ranges of limits, or, for the same limits, different percentages of samples fall within the range between the limit values. This observation was the basis for the developed algorithm, which was tested using suspension response signals recorded during road tests over various surfaces. The proposed algorithm can be an essential part of an adaptive damping control algorithm for a vehicle suspension, or of an adaptive control strategy for suspension damping control.
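    The CDF-based reasoning (for the same limits, different surfaces leave different percentages of samples inside the band) can be sketched as follows; the limit, threshold and synthetic signals are illustrative assumptions, not the authors' calibrated values:

```python
import numpy as np

def fraction_within(signal, limit):
    """Fraction of samples whose absolute value stays within +/- limit:
    one point on the empirical CDF of |signal|."""
    return np.mean(np.abs(np.asarray(signal)) <= limit)

def classify_surface(signal, limit=1.0, smooth_threshold=0.9):
    """Hypothetical rule: a smooth road keeps most of the suspension
    response inside the limit band; a rough road does not."""
    return "smooth" if fraction_within(signal, limit) >= smooth_threshold else "rough"

rng = np.random.default_rng(0)
smooth = rng.normal(0.0, 0.3, 1000)   # low-amplitude suspension response
rough = rng.normal(0.0, 2.0, 1000)    # high-amplitude suspension response
```

    In a variable-damping controller, the classified surface would select a damping map rather than a label.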

  12. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.

    PubMed

    Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated by the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the distance from the ground to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
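    The RANSAC circle fit used for DBH can be sketched on a synthetic stem slice: repeatedly fit the circumscribed circle of three random points, keep the fit with most inliers, and report twice the radius. The tolerances, point counts and names below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle of three 2-D points -> (centre, radius)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:          # collinear sample, no circle
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(centre - np.array([ax, ay]))

def ransac_circle(points, n_iter=200, tol=0.02, seed=0):
    """Fit a circle to noisy stem points while ignoring outliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        centre, radius = fit
        inliers = np.sum(np.abs(np.linalg.norm(points - centre, axis=1) - radius) < tol)
        if inliers > best_inliers:
            best, best_inliers = (centre, radius), inliers
    return best

# Synthetic stem slice: circle of radius 0.15 m plus a few outlier hits.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
stem = np.column_stack([0.15 * np.cos(theta) + 5.0, 0.15 * np.sin(theta) + 3.0])
outliers = np.array([[5.4, 3.4], [4.6, 2.6], [5.5, 3.0]])
centre, radius = ransac_circle(np.vstack([stem, outliers]))
dbh = 2 * radius   # diameter at breast height, metres
```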

  13. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context

    PubMed Central

    Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated by the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the distance from the ground to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees. PMID:29689076

  14. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    ERIC Educational Resources Information Center

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  15. Anomaly detection driven active learning for identifying suspicious tracks and events in WAMI video

    NASA Astrophysics Data System (ADS)

    Miller, David J.; Natraj, Aditya; Hockenbury, Ryler; Dunn, Katherine; Sheffler, Michael; Sullivan, Kevin

    2012-06-01

    We describe a comprehensive system for learning to identify suspicious vehicle tracks from wide-area motion imagery (WAMI) video. First, since the road network for the scene of interest is assumed unknown, agglomerative hierarchical clustering is applied to all spatial vehicle measurements, resulting in spatial cells that largely capture individual road segments. Next, for each track, extreme-value feature statistics are computed and aggregated at both the cell level (speed, acceleration, azimuth) and the track level (range, total distance, duration) to form summary (p-value based) anomaly statistics for each track. Here, to fairly evaluate tracks that travel across different numbers of spatial cells, a single (most extreme) statistic is chosen for each cell-level feature type, over all cells traveled. Finally, a novel active learning paradigm, applied to a (logistic regression) track classifier, is invoked to learn to distinguish suspicious from merely anomalous tracks, starting from anomaly-ranked track prioritization, with ground-truth labeling by a human operator. This system has been applied to WAMI video data (ARGUS), with the tracks automatically extracted by a system developed in-house at Toyon Research Corporation. Our system gives promising preliminary results in highly ranking vehicles, dismounts, and traffic violators as suspicious, and in learning which features are most indicative of suspicious tracks.
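    The per-cell extreme-value statistics can be sketched: compute an empirical upper-tail p-value per cell for a feature such as speed, then keep the single most extreme (smallest-p) value over all cells a track crosses. The function names, add-one smoothing and toy data are our illustrative choices, not the paper's exact statistics.

```python
import numpy as np

def empirical_p_value(value, reference):
    """Upper-tail empirical p-value of a feature value against a
    reference sample of normal-traffic values (add-one smoothing)."""
    reference = np.asarray(reference)
    return (np.sum(reference >= value) + 1) / (len(reference) + 1)

def track_anomaly_score(cell_feature_values, reference):
    """Keep the single most extreme (smallest-p) cell-level statistic,
    so tracks crossing different numbers of cells compare fairly."""
    return min(empirical_p_value(v, reference) for v in cell_feature_values)

reference_speeds = np.linspace(10, 30, 100)   # typical speeds in a cell
normal_track = [22.0, 25.0, 19.0]             # speeds in cells crossed
fast_track = [28.0, 45.0, 31.0]               # one extreme burst
```

    Tracks would then be ranked by ascending score and the most anomalous ones presented first to the human labeler.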

  16. Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Yang, Q.; Shao, J.

    2018-05-01

    Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low work efficiency. This work draws on the classification methodology used for road systems and proposes the concept of a pipeline stroke. The length of a pipeline, the angle between two pipelines, the level of the road to which a pipeline belongs, and the pipeline diameter were chosen as similarity criteria for generating pipeline strokes. Finally, an automatic method was designed to generalize drainage systems with multiple features taken into account. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm floods.
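    Stroke building of this kind can be sketched with the angle criterion alone (the paper additionally uses length, road level and diameter; the threshold, greedy chaining and segment data below are our simplifying assumptions):

```python
import math

def deflection(seg_a, seg_b):
    """Deflection angle in degrees between two connected segments,
    each given as ((x1, y1), (x2, y2)) with seg_a ending where seg_b starts."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    ha = math.atan2(ay2 - ay1, ax2 - ax1)
    hb = math.atan2(by2 - by1, bx2 - bx1)
    d = abs(math.degrees(hb - ha)) % 360
    return min(d, 360 - d)

def build_strokes(segments, max_deflection=45.0):
    """Greedy 'pipeline stroke' builder: chain consecutive segments while
    the deflection stays small, start a new stroke otherwise."""
    strokes, current = [], [segments[0]]
    for prev, nxt in zip(segments, segments[1:]):
        if deflection(prev, nxt) <= max_deflection:
            current.append(nxt)
        else:
            strokes.append(current)
            current = [nxt]
    strokes.append(current)
    return strokes

# Three nearly collinear pipe segments, then a sharp right-angle branch.
segments = [((0, 0), (1, 0)), ((1, 0), (2, 0.1)), ((2, 0.1), (3, 0.1)),
            ((3, 0.1), (3, 2))]
strokes = build_strokes(segments)
```

    Generalization then keeps long, important strokes and drops or merges short minor ones.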

  17. A combined road weather forecast system to prevent road ice formation in the Adige Valley (Italy)

    NASA Astrophysics Data System (ADS)

    Di Napoli, Claudia; Piazza, Andrea; Antonacci, Gianluca; Todeschini, Ilaria; Apolloni, Roberto; Pretto, Ilaria

    2016-04-01

    Road ice is a dangerous meteorological hazard to a nation's transportation system and economy. By reducing the pavement friction with vehicle tyres, ice formation on pavements increases accident risk and delays travelling times, thus posing a serious threat to road users' safety and the running of economic activities. Keeping roads clear and open is therefore essential, especially in mountainous areas where ice is likely to form during the winter period. Winter road maintenance helps to restore road efficiency and safety, and its benefits are up to 8 times the costs sustained for anti-icing strategies [1]. However, the optimization of maintenance costs and the reduction of the environmental damage from over-salting demand further improvements. These can be achieved by reliable road weather forecasts, and in particular by the prediction of road surface temperatures (RSTs). RST is one of the most important parameters in determining road surface conditions. It is well known from the literature that ice forms on pavements in high-humidity conditions when RSTs are below 0°C. We have therefore implemented an automatic forecast system to predict critical RSTs on a test route along the Adige Valley complex terrain, in the Italian Alps. The system considers two physical models, each computing heat and energy fluxes between the road and the atmosphere. The first is Reuter's radiative cooling model, which predicts RSTs at sunrise as a function of surface temperatures at sunset and the time elapsed since then [2]. The second is METRo (Model of the Environment and Temperature of Roads), road weather forecast software which also considers heat conduction through the road material [3]. We have applied the forecast system to a network of road weather stations (road weather information system, RWIS) installed on the test route [4]. Road and atmospheric observations from RWIS have been used as initial conditions for both METRo and Reuter's model.
In METRo observations have also been coupled to meteorological forecasts from ECMWF numerical prediction model. Overnight RST minima have then been estimated automatically in nowcast mode. In this presentation we show and discuss results and performances for the 2014-2015 and 2015-2016 winter seasons. Using evaluation indexes we demonstrate that combining METRo and Reuter's models into one single forecast system improves bias and accuracy by about 0.5°C. This study is supported by the LIFE11 ENV/IT/000002 CLEAN-ROADS project. The project aims to assess the environmental impact of salt de-icers in Trentino mountain region by supporting winter road management operations with meteorological information. [1] Thornes J.E. and Stephenson D.B., Meteorological Applications, 8:307 (2001) [2] Reuter H., Tellus, 3:141 (1951) [3] Crevier L.P. and Delage Y., Journal of applied meteorology, 40:2026 (2001) [4] Pretto I. et al., SIRWEC 2014 conference proceedings, ID:0019 (2014)

  18. Research related to roads in USDA experimental forests [Chapter 16

    Treesearch

    W. J. Elliot; P. J. Edwards; R. B. Foltz

    2014-01-01

    Forest roads are essential in experimental forests and rangelands (EFRs) to allow researchers and the public access to research sites and for fire suppression, timber extraction, and fuel management. Sediment from roads can adversely impact watershed health. Since the 1930s, the design and management of forest roads has addressed both access issues and watershed health...

  19. Autonomous navigation method for substation inspection robot based on travelling deviation

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting

    2017-06-01

    A new edge detection method for the substation environment is proposed, which enables autonomous navigation of the substation inspection robot. First, the road image and associated information are obtained using an image acquisition device. Second, noise in a region of interest selected in the road image is removed with a digital image processing algorithm; road edges are extracted with the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travel deviation is obtained. The robot's route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and that the algorithm has high accuracy and stable performance.
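    Once the left and right boundaries have been located (the Canny/Hough stage is omitted here), the deviation-and-threshold control step reduces to a few lines. The image width, threshold value and bang-bang rule below are illustrative assumptions, not the paper's controller.

```python
def travel_deviation(left_x, right_x, robot_x=None, image_width=640):
    """Signed deviation of the robot from the road centre line.
    left_x/right_x: x-coordinates of the detected boundaries at a
    reference image row; the camera centre stands in for the robot."""
    if robot_x is None:
        robot_x = image_width / 2          # camera mounted on robot axis
    centre = (left_x + right_x) / 2
    return robot_x - centre                # >0: drifted right, <0: drifted left

def steering_command(deviation, threshold=10.0):
    """Correct the route only when deviation exceeds the preset threshold."""
    if deviation > threshold:
        return "steer left"
    if deviation < -threshold:
        return "steer right"
    return "straight"
```

    For example, boundaries at x = 200 and x = 400 put the centre at 300; a camera centre of 320 gives a deviation of +20 pixels and triggers a left correction.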

  20. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good-quality forest roads are important for forest management. Airborne laser scanning data can support automated road quality detection, avoiding field visits. Two datasets of different pulse density were used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory focused mainly on surface wear condition, structural condition, flatness, roadside vegetation and drying of the road. Observations were divided into poor, satisfactory and good categories based on the current Finnish quality standards for forest roads. Digital Elevation Models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices assess topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index describes the road surface better when the differences are larger. Both indices require at least a 1-metre resolution. High-density data are necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification was more precise on high-density data (31-92%) than on low-density data (25-40%). However, ditch detection and classification can be carried out with the sparse dataset as well (with a success rate of 69%). Airborne laser scanning data can thus provide quality information on forest roads.
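    The topographic position index mentioned above has a simple form: a cell's elevation minus the mean of its neighbourhood, so ruts and ditches come out negative and bumps positive. A naive sketch (window size and the toy DEM are our choices):

```python
import numpy as np

def tpi(dem, radius=1):
    """Topographic position index: each cell's elevation minus the mean
    of its square neighbourhood (negative in ruts, positive on bumps)."""
    rows, cols = dem.shape
    out = np.zeros_like(dem, dtype=float)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            out[r, c] = dem[r, c] - dem[r0:r1, c0:c1].mean()
    return out

# 1 m grid of a road patch with a rut in the centre cell.
dem = np.array([[10.0, 10.0, 10.0],
                [10.0,  9.4, 10.0],
                [10.0, 10.0, 10.0]])
index = tpi(dem)
```

    On real data the window radius is chosen relative to road width, and thresholds on `index` separate intact surface from wear features.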

  1. Extraction of basic roadway information for non-state roads in Florida.

    DOT National Transportation Integrated Search

    2015-06-01

    The Florida Department of Transportation (FDOT) has continued to maintain a linear-referenced All-Roads map that includes both state and non-state local roads. The state portion of the map could be populated with select data from FDOT's R...

  2. Modelling Urban Noise in Citygml Ade: Case of the Netherlands

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Commandeur, T. J. F.; Stoter, J. E.

    2017-10-01

    Road traffic and industrial noise has become a major source of discomfort and annoyance among residents in urban areas. More than 44% of the EU population is regularly exposed to road traffic noise levels over 55 dB, which is currently the maximum accepted value prescribed by the Environmental Noise Directive for road traffic noise. With a continuously increasing population and growing numbers of motor vehicles and industries, noise levels are very unlikely to diminish in the near future. It is therefore necessary to monitor urban noise, so as to make mitigation plans and to deal with its adverse effects. The 2002/49/EC Environmental Noise Directive aims to determine the exposure of individuals to environmental noise through noise mapping. One of the most important steps in noise mapping is the creation of input data for simulation. At present, this is done semi-automatically (and sometimes even manually) by different companies in different ways; it is very time consuming and can lead to errors in the data. In this paper, we present our approach for automatically creating input data for noise simulations. Secondly, we focus on using 3D city models for presenting the results of simulations of the noise arising from road traffic and industrial activities in urban areas. We implemented a few noise modelling standards for industrial and road traffic noise in CityGML by extending the existing Noise ADE with new objects and attributes. This research is a stepping stone towards standardising the input and output data for noise studies and reconstructing the 3D data accordingly.

  3. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications, and remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction from high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great guiding significance to urban planning, transportation management, travel route choice and so on. First, this paper preprocesses the acquired high-resolution multi-spectral and panchromatic remote sensing images. Then, on the one hand, histogram equalization and linear enhancement are applied to the preprocessing results in order to find the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) are used to suppress water and vegetation in the preprocessing results. The two processing results are then combined, and geometric characteristics are used to complete the road information extraction. The extracted road vectors are used to limit the target vehicle area. Target vehicle extraction is divided into bright-vehicle extraction and dark-vehicle extraction, and the extraction results of the two kinds of vehicles are combined to obtain the final results. The experimental results demonstrate that the proposed algorithm achieves high precision in vehicle information extraction for different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
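    The NDVI/NDWI suppression step follows directly from the band ratios; a minimal sketch (the thresholds and toy reflectance values are illustrative, not the paper's):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form)."""
    return (green - nir) / (green + nir + 1e-9)

def road_candidate_mask(nir, red, green, veg_thresh=0.3, water_thresh=0.3):
    """Suppress vegetation and water pixels before road segmentation."""
    return (ndvi(nir, red) < veg_thresh) & (ndwi(green, nir) < water_thresh)

# Toy 2x2 scene: [road, vegetation; water, road] reflectances per band.
nir   = np.array([[0.30, 0.60], [0.05, 0.28]])
red   = np.array([[0.28, 0.10], [0.04, 0.27]])
green = np.array([[0.29, 0.20], [0.30, 0.26]])
mask = road_candidate_mask(nir, red, green)
```

    The surviving pixels in `mask` would then be thresholded and filtered by geometric characteristics (elongation, width) to keep only road shapes.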

  4. Detection of Road Surface States from Tire Noise Using Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Kongrattanaprasert, Wuttiwat; Nomura, Hideyuki; Kamakura, Tomoo; Ueda, Koji

    This report proposes a new processing method for automatically detecting the state of road surfaces from the tire noise of passing vehicles. In addition to multiple indicators of the signal features in the frequency domain, we propose a few feature indicators in the time domain to successfully classify the road states into four categories: snowy, slushy, wet, and dry. The method is based on artificial neural networks. The proposed classification is carried out in multiple neural networks using learning vector quantization, and the outcomes of the networks are then integrated by a voting decision-making scheme. Experimental results obtained from signals recorded over ten days in the snowy season demonstrate that an accuracy of approximately 90% can be attained for predicting road surface states using only tire noise data.
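    The ensemble decision (several LVQ networks classifying a segment, integrated by voting) can be sketched as follows; the prototype vectors, two-dimensional feature space and all names are our illustrative assumptions, not the authors' trained networks:

```python
import numpy as np

def lvq_predict(x, prototypes, labels):
    """Learning-vector-quantization inference: assign the label of the
    nearest prototype vector."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(distances))]

def ensemble_predict(x, networks):
    """Voting decision over several LVQ networks (majority wins)."""
    votes = [lvq_predict(x, protos, labels) for protos, labels in networks]
    return max(set(votes), key=votes.count)

labels = ["snowy", "slushy", "wet", "dry"]
# One 2-D prototype per class (e.g. two normalized noise features).
protos = np.array([[0.2, 0.8], [0.4, 0.6], [0.6, 0.4], [0.9, 0.1]])
# Three networks with slightly perturbed prototype sets.
networks = [(protos + shift, labels) for shift in (0.0, 0.02, -0.02)]
segment = np.array([0.62, 0.38])   # feature vector from one noise segment
```

    Training LVQ would additionally pull the winning prototype toward correctly classified samples and push it away otherwise; only inference is shown here.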

  5. Communication Systems for Dual Mode Transportation

    DOT National Transportation Integrated Search

    1974-02-01

    A program is underway to develop and demonstrate transportation systems based on vehicles which are capable of automatic operation on special guideways and manual operation on conventional roads. Adequate and reliable communications to and from vehic...

  6. ARAN/GIS video integration.

    DOT National Transportation Integrated Search

    2001-08-01

    The Maine Department of Transportation (MDOT) operates an Automatic Road ANalyzer (ARAN) to collect roadway information to make pavement quality assessments. Nearly 9000 miles of roadway data are collected by the ARAN in a two-year cycle. The A...

  7. 40 CFR 51.362 - Motorist compliance enforcement program oversight.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... collection through the use of automatic data capture systems such as bar-code scanners or optical character... determination of compliance through parking lot surveys, road-side pull-overs, or other in-use vehicle...

  8. 40 CFR 51.362 - Motorist compliance enforcement program oversight.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... collection through the use of automatic data capture systems such as bar-code scanners or optical character... determination of compliance through parking lot surveys, road-side pull-overs, or other in-use vehicle...

  9. Extraction of basic roadway information for non-state roads in Florida : [summary].

    DOT National Transportation Integrated Search

    2015-07-01

    The Florida Department of Transportation (FDOT) maintains a map of all the roads in Florida, containing over one and a half million road links. For planning purposes, a wide variety of information, such as stop lights, signage, lane number, and s...

  10. Road dust and its effect on human health: a literature review

    PubMed Central

    2018-01-01

    The purpose of this study was to determine the effects of road dust on human health. A PubMed search was used to extract references that included the words “road dust” and “health” or “fugitive dust” and “health” in the title or abstract. A total of 46 references were extracted and selected for review after the primary screening of 949 articles. The respiratory system was found to be the most affected system in the human body. Lead, platinum-group elements (platinum, rhodium, and palladium), aluminum, zinc, vanadium, and polycyclic aromatic hydrocarbons were the components of road dust most frequently referenced in the articles reviewed. Road dust was found to have harmful effects on the human body, especially on the respiratory system. To determine the complex mechanism of action of the various components of road dust on the human body and the results thereof, the authors recommend a further meta-analysis and extensive risk-assessment research into the health impacts of dust exposure. PMID:29642653

  11. Distributed Scene Analysis For Autonomous Road Vehicle Guidance

    NASA Astrophysics Data System (ADS)

    Mysliwetz, Birger D.; Dickmanns, E. D.

    1987-01-01

    An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision. Ongoing work described here is directed at improving the robustness of the road boundary detection process in the presence of shadows, ill-defined edges and other disturbing real-world effects. The system structure and the techniques applied for real-time scene analysis are presented along with experimental results. All subfunctions of road boundary detection for vehicle guidance, such as edge extraction, feature aggregation and camera pointing control, are executed in parallel by an onboard multiprocessor system. On the image processing level, local oriented edge extraction is performed in multiple 'windows', tightly controlled from a hierarchically higher, model-based level. The interpretation process, involving a geometric road model and the observer's relative position to the road boundaries, is capable of coping with ambiguity in measurement data. By using only selected measurements to update the model parameters, even high noise levels can be dealt with and misleading edges rejected.

  12. High-Fidelity Roadway Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

    Roads are an essential feature of our daily lives. With advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max, an approach that requires both highly specialized skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation emphasize the visual effects of the generated roads, not their geometrical and architectural fidelity, a limitation that seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals. As a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, as well as the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements for high-precision 3D models, such as driving simulation and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.

  13. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  14. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images.

  15. Historical maintenance relevant information road-map for a self-learning maintenance prediction procedural approach

    NASA Astrophysics Data System (ADS)

    Morales, Francisco J.; Reyes, Antonio; Cáceres, Noelia; Romero, Luis M.; Benitez, Francisco G.; Morgado, Joao; Duarte, Emanuel; Martins, Teresa

    2017-09-01

    A large percentage of transport infrastructures are composed of linear assets, such as roads and rail tracks. The large social and economic relevance of these constructions forces stakeholders to ensure prolonged health/durability. Even so, malfunctions, breakdowns, and out-of-service periods inevitably arise at random during the life cycle of the infrastructure. Predictive maintenance techniques tend to diminish the appearance of unpredicted failures and the execution of corrective interventions, by envisaging the adequate interventions to be conducted before failures show up. This communication presents: i) a procedural approach for collecting the relevant information regarding the evolving condition of the assets involved in all maintenance interventions; this reported and stored information constitutes a rich historical database for training machine learning algorithms to generate reliable predictions of the interventions to be carried out in future time scenarios; ii) a schematic flow chart of the automatic learning procedure; iii) self-learning rules derived automatically from false positives/negatives. The description, testing, automatic learning approach and the outcomes of a pilot case are presented; finally, some conclusions are outlined regarding the methodology proposed for improving the self-learning predictive capability.

  16. Automatic Extraction of Drug Adverse Effects from Product Characteristics (SPCs): A Text Versus Table Comparison.

    PubMed

    Lamy, Jean-Baptiste; Ugon, Adrien; Berthelot, Hélène

    2016-01-01

    Potential adverse effects (AEs) of drugs are described in their summary of product characteristics (SPCs), a textual document. Automatic extraction of AEs from SPCs is useful for detecting AEs and for building drug databases. However, this task is difficult because each AE is associated with a frequency that must be extracted and the presentation of AEs in SPCs is heterogeneous, consisting of plain text and tables in many different formats. We propose a taxonomy for the presentation of AEs in SPCs. We set up natural language processing (NLP) and table parsing methods for extracting AEs from texts and tables of any format, and evaluate them on 10 SPCs. Automatic extraction performed better on tables than on texts. Tables should be recommended for the presentation of the AEs section of the SPCs.
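
    The text-extraction side of such a system can be illustrated with a toy pass over an SPC-style sentence. This is a minimal sketch: the frequency scale and phrasing below are hypothetical, and real SPC texts are far more heterogeneous, which is precisely the difficulty the abstract describes.

```python
import re

# Hypothetical frequency scale; real SPCs use a regulated vocabulary
FREQ_TERMS = r"(very common|common|uncommon|rare|very rare)"

def extract_aes(text):
    """Pair each adverse effect with the frequency word that governs it.
    Assumes a 'frequency: effect, effect; ...' layout for illustration."""
    pattern = re.compile(FREQ_TERMS + r":\s*([^.;]+)", re.IGNORECASE)
    result = {}
    for freq, effects in pattern.findall(text):
        for effect in effects.split(","):
            result[effect.strip().lower()] = freq.lower()
    return result

spc = "Common: headache, nausea; rare: angioedema."
aes = extract_aes(spc)
```

    A table parser would instead map row/column positions to the same (effect, frequency) pairs, which is why the paper can compare the two presentation styles on equal terms.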

  17. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    3D terrain relief acquisition is important for a large part of geosciences. Several methods have been developed to digitize terrains, such as total station, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the on-motion SfM technique, with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The on-motion SfM technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3 Mpx non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the on-motion SfM technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track, as well as with mobile laser scanning data on the same road section. First results indicate that slope structures are well observable up to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy; there is a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera, which makes it necessary to give greater freedom to altimetric coordinates in the processing software. Benefits of this low-cost on-motion SfM method are: 1) a simple setup for field use (easy to switch between vehicle types such as car, train or bike), 2) a low cost, and 3) automatic georeferencing of the 3D point clouds.
    Main disadvantages are: 1) results that are less accurate than those from a LiDAR system, 2) heavy image processing, and 3) a short acquisition distance.

  18. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all that the user has to do is click three points on the boundary between the maxilla and the mandible.

  19. Information retrieval and terminology extraction in online resources for patients with diabetes.

    PubMed

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e. patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercises, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, which aims to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and is divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on the English text.
    Evaluation of the automatic and semi-automatic terminology extraction methods is performed using recall, precision and F-measure.
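
    The evaluation metrics named above can be computed as follows; the candidate and reference term lists here are invented for illustration.

```python
def evaluate_extraction(extracted, reference):
    """Score extracted terminology candidates against a reference list
    using precision, recall and F-measure, as in the abstract above."""
    extracted, reference = set(extracted), set(reference)
    tp = len(extracted & reference)          # correctly extracted terms
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Hypothetical candidate list vs. an expert-built reference list
p, r, f = evaluate_extraction(
    ["insulin pump", "blood glucose", "diet plan"],
    ["insulin pump", "blood glucose", "hypoglycemia", "insulin dose"])
```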

  20. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    NASA Astrophysics Data System (ADS)

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 70s. Many approaches dealing with various sensor resolutions, the nature of the scene, or the desired accuracy of the extracted objects have been presented. This topic remains challenging today as the need for accurate and up-to-date data is becoming more and more important. In this context, we study in this paper the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, they are very useful clues, both as evidence of a road and for tasks at a higher level. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term, measuring the consistency of the image with respect to the objects, and a regularising term, managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum.
    Results from aerial images at various resolutions are presented, showing that our approach is relevant and accurate as it can handle the most frequent layouts of dashed lines. Some issues, such as the relative weighting of the two terms of the energy, are also discussed in the conclusion.
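
    The energy-minimisation strategy described above (a data attachment term plus a regularising term, sampled with simulated annealing) can be sketched on a toy one-dimensional configuration. The specific energy terms, proposal kernel and cooling schedule below are illustrative assumptions, not the paper's actual marked point process model.

```python
import math
import random

def anneal(initial, propose, energy, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimal simulated-annealing loop: accept downhill moves always,
    uphill moves with Boltzmann probability at the current temperature."""
    rng = random.Random(seed)
    config, e = initial, energy(initial)
    t = t0
    for _ in range(steps):
        cand = propose(config, rng)
        e_cand = energy(cand)
        if e_cand < e or rng.random() < math.exp((e - e_cand) / t):
            config, e = cand, e_cand
        t *= cooling
    return config, e

# Toy example: fit a dash position x to noisy "image evidence" (data term)
# while a prior (regulariser) pulls it toward the specified value 5.0.
data_term = lambda x: (x - 4.2) ** 2
prior_term = lambda x: 0.5 * (x - 5.0) ** 2
best, e = anneal(0.0,
                 lambda x, rng: x + rng.uniform(-0.5, 0.5),
                 lambda x: data_term(x) + prior_term(x))
```

    The analytic minimum of this toy energy is x = 13.4/3 ≈ 4.47, and the annealer should land close to it; the real method explores configurations of many interacting dash objects rather than a single scalar.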

  1. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    NASA Astrophysics Data System (ADS)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. 
    Our investigation led us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We describe our experience and the implementation of our system and share lessons learned from our development. We also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org

  2. 78 FR 50051 - Notice of Availability of the Final Environmental Impact Statement for the Tarmac King Road...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ... DEPARTMENT OF DEFENSE Department of the Army, Corps of Engineers Notice of Availability of the Final Environmental Impact Statement for the Tarmac King Road Limestone Mine Proposed in Levy County... from limestone extraction, material stockpiling, roads, and other infrastructure over a period of...

  3. An estimate of the effectiveness of an in-vehicle automatic collision notification system in reducing road crash fatalities in South Australia.

    PubMed

    Ponte, G; Ryan, G A; Anderson, R W G

    2016-01-01

    The aim of this study was to estimate the potential effectiveness of an in-vehicle automatic collision notification (ACN) system in reducing all road crash fatalities in South Australia (SA). For the years 2008 to 2009, traffic accident reporting system (TARS) data, emergency medical services (EMS) road crash dispatch data, and coroner's reports were matched and examined. This was done to initially determine the extent to which there were differences between the reported time of a fatal road crash in the mass crash data and the time EMS were notified and dispatched. In the subset of fatal crashes where there was a delay, injuries detailed by a forensic pathologist in individual coroner's reports were examined to determine the likelihood of survival had there not been a delay in emergency medical assistance. In 25% (N = 53) of fatalities in SA in the period 2008 to 2009, there was a delay in the notification of the crash event, and hence dispatch of EMS, that exceeded 10 min. In the 2-year crash period, 5 people were likely to have survived through more prompt crash notification enabling quicker emergency medical assistance. Additionally, 3 people potentially would have survived if surgical intervention (or emergency medical assistance to sustain life until surgery) occurred more promptly. The minimum effectiveness rate of an ACN system in SA with full deployment is likely to be in the range of 2.4 to 3.8% of all road crash fatalities involving all vehicle types and all vulnerable road users (pedestrians, cyclists, and motorcyclists) from 2008 to 2009. Considering only passenger vehicle occupants, the benefit is likely to be 2.6 to 4.6%. These fatality reductions could only have been achieved through earlier notification of each crash and their location to enable a quicker medical response. This might be achievable through a fully deployed in-vehicle ACN system.

  4. Advances in detecting localized road damage due to sinkholes induced by engineering works using high resolution RADARSAT-2 data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Zebker, H. A.; Lakshmi, V.

    2016-12-01

    Sinkholes often occur in karst terrains such as those found in central and eastern Pennsylvania. Voids produced by the dissolution of carbonate rocks can result in soil transport leading to localized, gradual or rapid, sinking of the land surface. A cluster of sinkholes developed in 2000 around a small rural community beside Bushkill creek near a limestone quarry and severely damaged road bridges and railway tracks. At a cost of $6 million, the Pennsylvania DoT replaced the bridge, which was damaged again in 2004 by newly developed sinkholes likely associated with the quarry's pumping activity. Here we present high-resolution spaceborne interferometric radar images of sinkhole development in this community. We show that this technique may be used to monitor regions with high sinkhole damage risk and to assist future infrastructure route planning, especially in rural areas where hydrogeologic information is limited. Specifically, we processed 66 RADARSAT-2 interferograms to extract deformation that occurred over Bushkill creek between Jun. 2015 and Mar. 2016 with a temporal resolution of 24 days. We advanced recent persistent scatterer techniques to preserve meter-level spatial resolution in the interferograms while minimizing temporal decorrelation and phase unwrapping error. We observe periodic deformation due to pumping activity at the quarry and localized subsidence along Bushkill creek that is co-located with recently reported sinkholes. We plan to use the automatic processing techniques developed here to examine road damage in another region of Pennsylvania, along Lewiston Narrows, and to monitor urban infrastructure improvements in Seattle, both again with RADARSAT-2 data. Our results demonstrate that recent advances in satellite geodesy can be transferred to benefit society beyond the science community.

  5. Automatic road sign detection and classification based on support vector machines and HOG descriptors

    NASA Astrophysics Data System (ADS)

    Adam, A.; Ioannidis, C.

    2014-05-01

    This paper examines the detection and classification of road signs in color images acquired by a low-cost camera mounted on a moving vehicle. A new method for the detection and classification of road signs is proposed, starting with color-based detection in order to locate regions of interest. A circular Hough transform is then applied to complete detection, taking advantage of the shape properties of the road signs. The regions of interest are finally represented using HOG descriptors and fed into trained Support Vector Machines (SVMs) in order to be recognized. For the training procedure, a database with several training examples depicting Greek road signs has been developed. Many experiments have been conducted and are presented to measure the efficiency of the proposed methodology, especially under adverse weather conditions and poor illumination. For the experiments, training datasets consisting of different numbers of examples were used, and the results are presented along with some possible extensions of this work.
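
    The core of the HOG representation fed to the SVMs is a magnitude-weighted histogram of gradient orientations per cell. A minimal sketch of that core (without the block normalisation and cell tiling of the full descriptor, and with an invented 8x8 patch) might look like:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Gradient-orientation histogram for one cell, the building block
    of the HOG descriptor (sketch only; no block normalisation)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(n_bins)
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                               # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2 normalisation

# A vertical edge puts almost all gradient energy in one orientation bin
patch = np.zeros((8, 8))
patch[:, 4:] = 255.0
h = hog_cell(patch)
```

    The concatenated, block-normalised histograms of all cells form the feature vector that a linear or kernel SVM then classifies.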

  6. Multiresolution texture analysis applied to road surface inspection

    NASA Astrophysics Data System (ADS)

    Paquis, Stephane; Legeay, Vincent; Konik, Hubert; Charrier, Jean

    1999-03-01

    Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach for achieving an automatic vision system for road surface classification. Road surfaces are composed of aggregates, which have a particular grain size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are generated. We present a tool using a pyramidal process, with the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute another statistical parameter but to include the usual criteria in our method. In fact, the road surface classification uses a multiresolution co-occurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray-level value of its directly linked children pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
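
    The intensity pyramid described above, where a father pixel takes the minimum grey level of its children, can be sketched as follows (assuming 2x2 children and even image dimensions; the 4x4 toy image is invented):

```python
def min_pyramid(image, levels=3):
    """Build an intensity pyramid in which each father pixel takes the
    minimum grey level of its 2x2 children, as in the abstract above."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = len(img) // 2, len(img[0]) // 2
        father = [[min(img[2 * i][2 * j], img[2 * i][2 * j + 1],
                       img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1])
                   for j in range(w)] for i in range(h)]
        pyramid.append(father)
    return pyramid

base = [[9, 7, 3, 1],
        [5, 6, 2, 8],
        [4, 4, 9, 9],
        [4, 3, 9, 9]]
pyr = min_pyramid(base, levels=3)
```

    A co-occurrence matrix computed at each level then feeds the classification; only its diagonal is analysed along the pyramid.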

  7. Behavioral aspects of automatic vehicle guidance : relationship between headway and driver comfort

    DOT National Transportation Integrated Search

    1997-01-01

    Automation of road traffic has the potential to greatly improve the performance of traffic systems. The acceptance of automated driving may play an important role in the feasibility of automated vehicle guidance (AVG), comparable to automated highway...

  8. 78 FR 20714 - Union Pacific Railroad Company-Abandonment Exemption-in Dunn County, WI.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ... as roads or highways or other forms of mass transportation, UP states that the right-of-way is... will automatically expire. Board decisions and notices are available on our Web site at `` www.stb.dot...

  9. Automatic vehicle identification technology applications to toll collection services

    DOT National Transportation Integrated Search

    1997-01-01

    Intelligent transportation systems technologies are being developed and applied through transportation systems in the United States. An example of this type of innovation can be seen on toll roads where a driver is required to deposit a toll in order...

  10. A Risk Assessment System with Automatic Extraction of Event Types

    NASA Astrophysics Data System (ADS)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  11. Detecting subsurface features and distresses of roadways and bridge decks with ground penetrating radar at traffic speed

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Birken, Ralf; Wang, Ming L.

    2017-04-01

    This paper presents the detection of subsurface features and distresses in roadways and bridge decks from ground penetrating radar (GPR) data collected at traffic speed. The GPR system operates at 2 GHz with a penetration depth of 60 cm in common road materials; it can collect 1000 traces a second, has a large dynamic range, and is compactly packaged. Using a four-channel GPR array, dense spatial coverage can be achieved in both the longitudinal and transversal directions. The GPR data contain significant information about subsurface features and distresses resulting from dielectric differences, such as distinguishing new and old asphalt, identifying the asphalt-reinforced concrete (RC) interface, and detecting rebar in bridge decks. For roadways, the new and old asphalt layers are distinguished from the dielectric and thickness discontinuities; the results are complemented by surface images of the roads taken by a video camera. For bridge decks, the asphalt-RC interface is automatically detected by cross-correlation and Hilbert transform algorithms, and the layer properties (e.g., dielectric constant and thickness) can be identified. Moreover, the rebar hyperbolas can be visualized in the GPR B-scan images, and the reflection amplitude from the steel rebar can be extracted. It is possible to estimate the rebar corrosion level in concrete from the distribution of the rebar reflection amplitudes.
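
    The Hilbert-transform step used for interface detection amounts to computing the envelope of each GPR trace via the analytic signal; its peak localises the strongest reflection. A sketch using the standard FFT construction of the analytic signal (the synthetic wavelet below is illustrative, not real GPR data):

```python
import numpy as np

def envelope(trace):
    """Analytic-signal envelope of a 1-D trace (Hilbert transform via
    FFT): zero the negative frequencies, double the positive ones."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# Synthetic trace: a reflection wavelet centred at sample 120
t = np.arange(256)
trace = np.exp(-((t - 120) / 8.0) ** 2) * np.cos(0.8 * (t - 120))
peak = int(envelope(trace).argmax())
```

    On real data the envelope peak (combined with cross-correlation against a reference wavelet) gives the two-way travel time to the asphalt-RC interface, from which thickness follows once the dielectric constant is known.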

  12. 2D Automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  14. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel image is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed into binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated using the publicly available DRIVE database.
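
    Otsu's method is a standard choice for the automatic thresholding step described above (the paper may use a different scheme; this is a generic sketch on an invented bimodal histogram). It picks the grey level that maximises the between-class variance of background and foreground:

```python
def otsu_threshold(pixels, n_levels=256):
    """Otsu's automatic threshold: maximise the between-class variance
    w_bg * w_fg * (mu_bg - mu_fg)^2 over all candidate thresholds."""
    hist = [0] * n_levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(n_levels):
        w_bg += hist[t]
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += t * hist[t]
        w_fg = total - w_bg
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background (~20) and bright vessels (~200)
pixels = [20] * 90 + [25] * 10 + [200] * 30 + [210] * 5
t = otsu_threshold(pixels)
```

    Applied to both the vessel-enhanced green channel and the Gabor feature image, such a threshold yields the two binary maps that the method then combines.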

  15. Automatic sentence extraction for the detection of scientific paper relations

    NASA Astrophysics Data System (ADS)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers to see the interconnections between scientific papers quickly. By observing the inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem still arises in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, with the paper relations identified based on the citation sentences. The performance of the built system is then compared with that of the manual extraction system. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  16. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
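
    The virtual-loop counting logic can be illustrated with a one-pixel toy: a running-average background is updated adaptively (in the spirit of the abstract's automatic background update, though the paper's algorithm is more elaborate), and a vehicle is counted on each background-to-foreground transition of the loop. Real systems operate on 2-D frames and regions, not single pixels; the frame values below are invented.

```python
def count_vehicles(frames, loop, alpha=0.05, diff_thresh=40):
    """Count rising edges of foreground occupancy at a virtual loop,
    updating the background model only while the road is empty."""
    background = float(frames[0][loop])
    occupied, count = False, 0
    for frame in frames:
        pixel = frame[loop]
        foreground = abs(pixel - background) > diff_thresh
        if foreground and not occupied:
            count += 1                       # rising edge: a vehicle enters
        occupied = foreground
        if not foreground:                   # adapt background on empty road
            background = (1 - alpha) * background + alpha * pixel
    return count

# Loop pixel value over time: two bright vehicles pass over the loop
frames = [[50]] * 5 + [[200]] * 3 + [[50]] * 5 + [[210]] * 4 + [[52]] * 5
n = count_vehicles(frames, loop=0)
```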

  17. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984

  18. Case management and adherence to an online disease management system.

    PubMed

    Robertson, Lucy; Smith, Michael; Tannenbaum, Dennis

    2005-01-01

    Non-adherence to treatment presents a significant obstacle to achieving favourable health outcomes. We have studied consumers' adherence to an online disease management system for depression, called Recovery Road. Recovery Road was implemented on a pilot basis for mental health care in Western Australia. Recovery Road was available for use by consumers and clinicians to augment usual treatment. One hundred and thirty consumers who had been diagnosed with major depression were enrolled. Consumers who used Recovery Road (n = 98) were provided with education, progress monitoring, e-consultation, e-diary and online evidence-based therapy. Consumers received either standard, automated adherence reminders by email (n = 69), or case management, which included personalized email and telephone follow-up in response to non-adherence (n = 29). After the first eight sessions, the adherence was 84% in the case management group and 55% in the automatic reminders group. The results suggest that case management increases adherence to online disease management systems.

  19. Collaborative human-machine analysis to disambiguate entities in unstructured text and structured datasets

    NASA Astrophysics Data System (ADS)

    Davenport, Jack H.

    2016-05-01

    Intelligence analysts demand rapid information fusion capabilities to develop and maintain accurate situational awareness and understanding of dynamic enemy threats in asymmetric military operations. The ability to extract relationships between people, groups, and locations from a variety of text datasets is critical to proactive decision making. The derived network of entities must be automatically created and presented to analysts to assist in decision making. DECISIVE ANALYTICS Corporation (DAC) provides capabilities to automatically extract entities, relationships between entities, semantic concepts about entities, and network models of entities from text and multi-source datasets. DAC's Natural Language Processing (NLP) Entity Analytics model entities as complex systems of attributes and interrelationships which are extracted from unstructured text via NLP algorithms. The extracted entities are automatically disambiguated via machine learning algorithms, and resolution recommendations are presented to the analyst for validation; the analyst's expertise is leveraged in this hybrid human/computer collaborative model. Military capability is enhanced by these NLP Entity Analytics because analysts can now create/update an entity profile with intelligence automatically extracted from unstructured text, thereby fusing entity knowledge from structured and unstructured data sources. Operational and sustainment costs are reduced since analysts do not have to manually tag and resolve entities.

  20. Roads Data Conflation Using Update High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Abdollahi, A.; Riyahi Bakhtiari, H. R.

    2017-11-01

    Urbanization, industrialization and modernization are growing rapidly in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support that growth, which has led to the expansion and development of road networks. A great deal of road network data was produced by traditional methods in past years. Over time, a large amount of descriptive information has been assigned to these map data, but their geometric accuracy and precision no longer meet today's needs. It is therefore necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them, and to update the existing geodatabases. Given the size and extent of the country, updating road network maps with traditional methods is time consuming and costly. Conversely, remote sensing technology and geographic information systems can reduce costs, save time, and increase accuracy and speed. With the increasing availability of high resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation that integrates several image-based and vector-based algorithms is presented. SVM classification and the Level Set method were used to extract the roads, and the different types of road intersections were extracted from the imagery using morphological operators. To find corresponding points among the extracted intersections, a matching function based on the nearest-neighbour method was applied. Finally, after identifying the matching points, a rubber-sheeting method was used to align the two datasets. Residual and RMSE criteria were used to evaluate accuracy. The results demonstrated excellent performance: the average root-mean-square error decreased from 11.8 m to 4.1 m.
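    The nearest-neighbour matching and RMSE evaluation steps described above can be sketched as follows. The coordinates are hypothetical and the distance threshold is an assumption; the paper's actual matching function is not reproduced here.

```python
import math

def match_points(extracted, reference, max_dist=15.0):
    """Match each extracted intersection to its nearest reference point
    within max_dist metres (simple nearest-neighbour matching).
    Returns a list of (extracted, reference) pairs."""
    pairs = []
    for p in extracted:
        best, best_d = None, max_dist
        for q in reference:
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = q, d
        if best is not None:
            pairs.append((p, best))
    return pairs

def rmse(pairs):
    """Root-mean-square error of the residual distances between matched pairs."""
    return math.sqrt(sum(math.dist(p, q) ** 2 for p, q in pairs) / len(pairs))

# Hypothetical intersection coordinates (metres): imagery-derived vs. old vector data
img_pts = [(100.0, 200.0), (350.0, 420.0), (900.0, 120.0)]
vec_pts = [(104.0, 197.0), (360.0, 415.0), (500.0, 500.0)]
matches = match_points(img_pts, vec_pts)
error = rmse(matches)
```

    A rubber-sheeting transform would then be fitted to the matched pairs to align the vector data to the imagery.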

  1. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    PubMed

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.

  2. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    PubMed Central

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471

  3. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem: a deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the recognition of traffic signs as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments were implemented using the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and scored 92% in the preliminary contest, placing fourth.
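    The alternation of convolutional and subsampling layers described above determines the spatial size of each feature map. The sketch below traces those sizes through a three-conv, three-pool stack; the 32x32 input and the kernel sizes are assumptions for illustration, since the abstract does not give the exact configuration.

```python
def feature_map_sizes(input_size, conv_kernels, pool=2):
    """Trace the spatial size of square feature maps through alternating
    'valid' stride-1 convolutions and non-overlapping pooling layers."""
    sizes = [input_size]
    s = input_size
    for k in conv_kernels:
        s = s - k + 1        # convolutional layer shrinks by k-1
        sizes.append(s)
        s = s // pool        # subsampling layer halves the size
        sizes.append(s)
    return sizes

# Hypothetical configuration: 32x32 input, three conv layers (5x5, 3x3, 3x3)
sizes = feature_map_sizes(32, [5, 3, 3])
```

    The final 2x2 maps would be flattened into the fully-connected layer before the output layer.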

  4. Extracting decision rules from police accident reports through decision trees.

    PubMed

    de Oña, Juan; López, Griselda; Abellán, Joaquín

    2013-01-01

    Given the current number of road accidents, the aim of many road safety analysts is to identify the main factors that contribute to crash severity. To pinpoint those factors, this paper shows an application that applies some of the methods most commonly used to build decision trees (DTs), which have not been applied to the road safety field before. An analysis of accidents on rural highways in the province of Granada (Spain) between 2003 and 2009 (both inclusive) showed that the methods used to build DTs serve our purpose and may even be complementary. Applying these methods has enabled potentially useful decision rules to be extracted that could be used by road safety analysts. For instance, some of the rules may indicate that women, contrary to men, increase their risk of severity under bad lighting conditions. The rules could be used in road safety campaigns to mitigate specific problems. This would enable managers to implement priority actions based on a classification of accidents by types (depending on their severity). However, the primary importance of this proposal is that other databases not used here (i.e. other infrastructure, roads and countries) could be used to identify unconventional problems in a manner easy for road safety managers to understand, as decision rules. Copyright © 2012 Elsevier Ltd. All rights reserved.
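    Extracting decision rules from a fitted tree amounts to walking every root-to-leaf path and emitting the accumulated conditions. The toy tree below echoes the lighting-conditions example from the abstract; the structure and labels are hypothetical, not the paper's actual DTs.

```python
def extract_rules(node, path=()):
    """Walk a toy decision tree and emit one IF-THEN rule per leaf.
    Nodes are ('leaf', label) or (feature, value, if_true, if_false)."""
    if node[0] == "leaf":
        cond = " AND ".join(path) if path else "TRUE"
        return [f"IF {cond} THEN severity={node[1]}"]
    feat, val, if_true, if_false = node
    rules = []
    rules += extract_rules(if_true, path + (f"{feat}={val}",))
    rules += extract_rules(if_false, path + (f"{feat}!={val}",))
    return rules

# Hypothetical tree echoing the example rule in the abstract
tree = ("sex", "female",
        ("lighting", "bad", ("leaf", "severe"), ("leaf", "slight")),
        ("leaf", "slight"))
rules = extract_rules(tree)
```

    Rules in this form are what a road safety manager would read directly, without needing to interpret the tree itself.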

  5. A prototype system to support evidence-based practice.

    PubMed

    Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E; Clayton, Jennifer; Thoma, George R

    2008-11-06

    Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients' problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans.

  6. A Prototype System to Support Evidence-based Practice

    PubMed Central

    Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E.; Clayton, Jennifer; Thoma, George R.

    2008-01-01

    Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients’ problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans. PMID:18998835

  7. Application of Magnetic Nanoparticles in Pretreatment Device for POPs Analysis in Water

    NASA Astrophysics Data System (ADS)

    Chu, Dongzhi; Kong, Xiangfeng; Wu, Bingwei; Fan, Pingping; Cao, Xuan; Zhang, Ting

    2018-01-01

    In order to reduce the processing time and labour of POPs pretreatment, and to solve the problem of extraction columns becoming clogged, this paper proposes a new extraction and enrichment technology based on magnetic nanoparticles. The automatic pretreatment system consists of an automatic sampling unit, an extraction/enrichment unit and an elution/enrichment unit. The paper briefly introduces the preparation of the magnetic nanoparticles and describes in detail the structure and control system of the automatic pretreatment system. Mass-recovery experiments showed that the system is capable of POPs analysis preprocessing and that the recovery rate of the magnetic nanoparticles was over 70%. In conclusion, three optimization recommendations are proposed.

  8. Self-adaptive road tracking in hyperspectral data for C-IED

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Gross, Wolfgang; Middelmann, Wolfgang

    2012-09-01

    For Counter-Improvised Explosive Device purposes, main routes including their vicinity are surveyed. In future military operations, small hyperspectral sensors will be used for ground-covering reconnaissance, complementing images from infrared and high resolution sensors. They will be mounted on unmanned airborne vehicles and used for on-line monitoring of convoy routes. Depending on the proximity to the road, different regions can be defined for threat assessment. Automatic road tracking can help choose the correct areas of interest. In conventional methods, the exact discrimination between road and surroundings often fails due to low contrast in panchromatic images at the road boundaries, or due to occlusions. In this contribution, a novel real-time lock-on road tracking algorithm is introduced. It uses hyperspectral data and is specifically designed to address the aforementioned deficiencies of conventional methods. Local features are calculated from the high-resolution spectral signatures. They describe the similarity to the actual road cover and to either roadside. Classification is performed to discriminate the signatures. To improve robustness against variations in road cover, the classification results are used to progressively adapt the road and roadside classes. Occlusions are treated by predicting the course of the road and comparing the signatures in the target area to previously determined road cover signatures. The algorithm can easily be extended to show regions of varying threat, depending on the distance to the road. Thus, complex anomaly detectors and classification algorithms can be applied to a reduced data set. First experiments were performed on AISA Eagle II (400 nm - 970 nm) and AISA Hawk (970 nm - 2450 nm) data.
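    A per-pixel similarity between a spectral signature and the current road class is the core comparison in such a tracker. The abstract does not specify the measure, so the sketch below uses the spectral angle, a common brightness-invariant choice; the reflectance values are hypothetical.

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra. Small angles mean
    similar spectral shape regardless of overall brightness - useful when
    comparing a pixel with the adapted road-cover signature."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp to guard against floating-point overshoot before acos
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Hypothetical 4-band signatures: same material under different illumination
road = [0.20, 0.25, 0.30, 0.35]
pixel_same = [0.40, 0.50, 0.60, 0.70]   # same shape, twice as bright
angle = spectral_angle(road, pixel_same)
```

    Because the angle ignores brightness, a shadowed road pixel still scores close to the road class, which is exactly the robustness the tracker needs at low-contrast boundaries.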

  9. Impact of translation on named-entity recognition in radiology texts

    PubMed Central

    Pedro, Vasco

    2017-01-01

    Abstract Radiology reports describe the results of radiography procedures and have the potential of being a useful source of information which can bring benefits to health care systems around the world. One way to automatically extract information from the reports is by using Text Mining tools. The problem is that these tools are mostly developed for English and reports are usually written in the native language of the radiologist, which is not necessarily English. This creates an obstacle to the sharing of Radiology information between different communities. This work explores the solution of translating the reports to English before applying the Text Mining tools, probing the question of what translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to Radiology and a number of alternative translations (human, automatic and semi-automatic) to English. This is a novel corpus which can be used to move forward the research on this topic. Using MRRAD we studied which kind of automatic or semi-automatic translation approach is more effective on the Named-entity recognition task of finding RadLex terms in the English version of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar to this standard were the terms extracted using other translations. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to the ones obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results we also performed a qualitative analysis of the type of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD PMID:29220455
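    Treating the terms from the human translation as the gold standard, the comparison described above reduces to a set-based F-score between term sets. The sketch below uses hypothetical terms; the paper's actual RadLex extraction approaches are not reproduced.

```python
def f_score(gold_terms, extracted_terms):
    """F1 between the gold-standard term set (from the human translation)
    and the term set extracted from another translation."""
    gold, ext = set(gold_terms), set(extracted_terms)
    tp = len(gold & ext)          # terms found in both
    if tp == 0:
        return 0.0
    precision = tp / len(ext)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical term sets from two translations of the same article
gold = {"fracture", "radiograph", "femur", "effusion"}
auto = {"fracture", "radiograph", "femur", "lesion"}
score = f_score(gold, auto)
```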

  10. Research on Automatic Classification, Indexing and Extracting. Annual Progress Report.

    ERIC Educational Resources Information Center

    Baker, F.T.; And Others

    In order to contribute to the success of several studies for automatic classification, indexing and extracting currently in progress, as well as to further the theoretical and practical understanding of textual item distributions, the development of a frequency program capable of supplying these types of information was undertaken. The program…

  11. Intelligence Surveillance And Reconnaissance Full Motion Video Automatic Anomaly Detection Of Crowd Movements: System Requirements For Airborne Application

    DTIC Science & Technology

    The collection of Intelligence, Surveillance, and Reconnaissance (ISR) Full Motion Video (FMV) is growing at an exponential rate, and the manual... intelligence for the warfighter. This paper will address the question of how automatic pattern extraction, based on computer vision, can extract anomalies in

  12. Investigations of Section Speed on Rural Roads in Podlaskie Voivodeship

    NASA Astrophysics Data System (ADS)

    Ziolkowski, Robert

    2017-10-01

    Excessive speed is one of the most important factors in road safety: it not only affects the severity of a crash but is also related to the risk of being involved in one. In Poland the problem of speeding drivers is widespread. Properly recognizing and defining driver behaviour is the basis for any effective road safety improvement. Effective enforcement of speed limits, especially on rural roads, plays an important role, but speed investigations have mostly focused on spot speed, omitting travel speed over longer road sections, which better reflects driver behaviour. Possible solutions for rural roads are limited to administrative speed limits, installations of speed cameras and police enforcement. However, due to their limited proven effectiveness, new solutions are still being sought. High expectations are associated with the section speed control system that has recently been introduced in Poland and covers a number of national road sections. The aim of this paper is to investigate section speed on chosen regional and district roads located in Podlaskie Voivodeship. The test sections included 19 road segments that varied in functional and geometric characteristics. Speed measurements on regional and district roads were performed with a set of two ANPR (Automatic Number Plate Recognition) cameras. The research allowed drivers' behaviour, in terms of travel speed, to be compared across road functional classes, and the influence of chosen geometric parameters on average section speed to be evaluated.

  13. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. The two components of each individual tree, a trunk and a crown, can be extracted by the dual growing method. This method consists of coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initialization is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that extracts a singular trunk from the other objects interlaced with it. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.
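    The crown-growing step can be illustrated with a generic single-link region-growing pass over a point set: starting from a seed, the region absorbs any point within a growing radius of a point already in the region. This is a simplified stand-in with hypothetical coordinates, not the paper's constrained dual growing.

```python
from collections import deque

def grow_region(points, seed_idx, radius):
    """Single-link region growing: start from a seed point and iteratively
    absorb points within `radius` of any point already in the region."""
    region = {seed_idx}
    frontier = deque([seed_idx])
    r2 = radius ** 2
    while frontier:
        i = frontier.popleft()
        xi, yi, zi = points[i]
        for j, (x, y, z) in enumerate(points):
            if j not in region and (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2 <= r2:
                region.add(j)
                frontier.append(j)
    return region

# Hypothetical points: a small tree cluster plus one distant point
pts = [(0, 0, 0), (0.5, 0, 0), (1.0, 0.2, 0), (10, 10, 0)]
region = grow_region(pts, 0, 0.8)
```

    Note the chaining effect: the third point is farther than the radius from the seed but is reached through the second, while the distant point is never absorbed.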

  14. Fugitive dust from vehicles traveling on unpaved roads

    Treesearch

    Thomas A. Cuscino; Robert Jennings Heinsohn; Clotworthy, Jr. Birnie

    1977-01-01

    A model has been developed for estimating concentrations of fugitive dust downwind of an unpaved road within a factor of 2 for most cases. The model allows for winds oblique to the road and also for extraction of fugitive dust from the plume as it diffuses to the ground. Experiments were performed to determine the accuracy of the model in estimating downwind...

  15. Smart Cruise Control: UAV sensor operator intent estimation and its application

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren; Kumar, Rakesh

    2006-05-01

    Due to their long endurance, superior mobility and the low risk posed to the pilot and sensor operator, UAVs have become the preferred platform for persistent ISR missions. However, most UAV-based ISR missions are currently conducted through manual operation. Even the simplest tasks, such as vehicle tracking, route reconnaissance and site monitoring, need the sensor operator's undivided attention and constant adjustment of the sensor controls. The lack of autonomous behaviour greatly limits the effectiveness and capability of UAV-based ISR, especially the simultaneous use of a large number of UAVs. Although a fully autonomous UAV-based ISR system is desirable, it remains a distant goal due to the complexity and diversity of combat and ISR missions. In this paper, we propose a Smart Cruise Control system that can learn the UAV sensor operator's intent and use it to complete tasks automatically, such as route reconnaissance and site monitoring. Using an operator attention model, the proposed system can estimate the operator's intent from how they control the sensor (e.g. camera) and from the content of the imagery that is acquired. For example, from initial manual control of the UAV sensor to follow a road, the system can learn in real time not only the preferred operation, "tracking", but also the road appearance, "what to track". The learnt models of both the road and the desired operation can then be used to complete the task automatically. We have demonstrated the Smart Cruise Control system on real UAV videos in which roads need to be tracked and buildings need to be monitored.

  16. Extraction of small boat harmonic signatures from passive sonar.

    PubMed

    Ogden, George L; Zurk, Lisa M; Jones, Mark E; Peterson, Mary E

    2011-06-01

    This paper investigates the extraction of acoustic signatures from small boats using a passive sonar system. Noise radiated from a small boat consists of broadband noise and harmonically related tones that correspond to engine and propeller specifications. A signal processing method to automatically extract the harmonic structure of noise radiated from small boats is developed. The Harmonic Extraction and Analysis Tool (HEAT) estimates the instantaneous fundamental frequency of the harmonic tones, refines the fundamental frequency estimate using a Kalman filter, and automatically extracts the amplitudes of the harmonic tonals to generate a harmonic signature for the boat. Results are presented that show the HEAT algorithm's ability to extract these signatures. © 2011 Acoustical Society of America
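    The fundamental-frequency estimation step can be sketched with a harmonic-sum search: among candidate fundamentals, pick the one whose first few harmonics collect the most spectral energy. The synthetic line spectrum and candidate grid below are illustrative assumptions; HEAT's actual estimator and Kalman refinement are not reproduced.

```python
def estimate_f0(spectrum, freqs, f0_candidates, n_harmonics=4):
    """Harmonic-sum fundamental estimate: choose the candidate f0 whose
    first n harmonics accumulate the most energy in the spectrum."""
    def energy_at(f):
        # energy of the spectral bin nearest to frequency f
        i = min(range(len(freqs)), key=lambda k: abs(freqs[k] - f))
        return spectrum[i]
    return max(f0_candidates,
               key=lambda f0: sum(energy_at(h * f0)
                                  for h in range(1, n_harmonics + 1)))

# Synthetic line spectrum (1 Hz bins) with harmonics of a 60 Hz fundamental
freqs = list(range(0, 401))
spectrum = [0.0] * len(freqs)
for h in (1, 2, 3, 4):
    spectrum[60 * h] = 1.0
f0 = estimate_f0(spectrum, freqs, [50, 55, 60, 65, 70])
```

    In a full pipeline, this raw per-frame estimate would feed the Kalman filter, and the amplitudes at each harmonic of the smoothed track would form the boat's signature.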

  17. Analysis of the ancient river system in Loulan period in Lop Nur region

    NASA Astrophysics Data System (ADS)

    Zhu, Jianfeng; Jia, Peng; Nie, Yueping

    2010-09-01

    The Lop Nur region is located in the east of the Tarim Basin. It has served as a strategic passage and communication hub of the Silk Road since the Han Dynasty. During the Wei-Jin period, the river system there was well developed and nurtured the ancient city of Loulan. In this study, GIS (ArcGIS) is first used to automatically extract the river courses in the Lop Nur region. Then the RCI index, constructed from bands 3 and 4 of a Landsat ETM image, is used to extract the ancient river courses. From the distribution of the entire river course of the Lop Nur region, it is concluded that the north course of the Peacock River formed before the end of the 4th century AD. Later, the Peacock River changed its way south to the Tarim River and flowed into Lop Nur from west to east, parallel to the Altun Mountains. It was this change of the river system that mainly caused the decrease in water supply around the ancient city of Loulan before the end of the 4th century. The ancient city of Loulan has gradually been lost to the sand because of the absence of a water supply since then.

  18. VIEW OF FOSSIL CREEK DIVERSION DAM FROM DOWNSTREAM (INCLUDES 1950s ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF FOSSIL CREEK DIVERSION DAM FROM DOWNSTREAM (INCLUDES 1950s AUTOMATIC/REMOTE CONTROL SLUICE GATE IN UPPER CENTER OF DAM, NORTH SIDE). LOOKING NORTH-NORTHWEST - Childs-Irving Hydroelectric Project, Fossil Creek Diversion Dam, Forest Service Road 708/502, Camp Verde, Yavapai County, AZ

  19. Smart FRP Composite Sandwich Bridge Decks in Cold Regions

    DOT National Transportation Integrated Search

    2011-07-01

    What if every time a bridge on a lonely road got icy, it automatically notified the local DOT to begin ice-control safety measures? What if a bridge could tell someone : every time an overloaded truck hit the decking, or when the trusses under it beg...

  20. Methods for automatically analyzing humpback song units.

    PubMed

    Rickwood, Peter; Taylor, Andrew

    2008-03-01

    This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis of not only humpbacks, but possibly also in other bioacoustic settings where automated analysis is desirable.

  1. Complex solution of problem of all-season construction of roads and pipelines on universal composite pontoon units

    NASA Astrophysics Data System (ADS)

    Ryabkov, A. V.; Stafeeva, N. A.; Ivanov, V. A.; Zakuraev, A. F.

    2018-05-01

    A complex construction has been designed, consisting of a universal floating pontoon road on which pipelines can be laid automatically, all year round and in any weather, for Siberia and the Far North. A new method is proposed for the construction of pipelines on pontoon modules made of composite materials. Composite pontoons for laying pipelines, with track-forming guides for automated wheeled transport and the pipelayer, are designed. The proposed system eliminates the need to build a road along the route and ensures the buoyancy and smooth movement of the self-propelled automated stacker, shaped like a "centipede", which offers a number of significant advantages in the construction and operation of the entire complex in swampy and waterlogged areas without overburden.

  2. An Information Retrieval Approach for Robust Prediction of Road Surface States.

    PubMed

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-28

    Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar previously observed instances, using a given similarity function. The estimated state is then calibrated using the recently estimated states to yield both effective and robust predictions. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and compared it with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
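    The retrieve-then-calibrate idea described above can be sketched with a k-nearest-neighbour vote over past instances followed by a majority vote over recent estimates. The 1-D signatures, state codes (0=dry, 1=wet, 2=icy) and window size are hypothetical; the paper's similarity function and radar features are not specified here.

```python
def predict_state(query, history, k=3):
    """Estimate the road-surface state for a radar signature `query` by
    majority vote among the k most similar past (signature, state) pairs."""
    neighbours = sorted(history, key=lambda sv: abs(sv[0] - query))[:k]
    return max(set(s for _, s in neighbours),
               key=lambda s: sum(1 for _, t in neighbours if t == s))

def calibrate(raw_states, window=3):
    """Calibrate the current estimate by majority vote over the most
    recent raw estimates (the moving-average smoothing step)."""
    recent = raw_states[-window:]
    return max(set(recent), key=recent.count)

# Hypothetical history of (signature, state) observations
history = [(0.1, 0), (0.15, 0), (0.9, 2), (0.85, 2), (0.5, 1)]
raw = [predict_state(q, history) for q in (0.12, 0.14, 0.88)]
final = calibrate(raw)
```

    The calibration step suppresses isolated outliers: a single anomalous reading cannot flip the reported state on its own.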

  3. An Information Retrieval Approach for Robust Prediction of Road Surface States

    PubMed Central

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-01

    Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar previously observed instances, using a given similarity function. The estimated state is then calibrated using the recently estimated states to yield both effective and robust predictions. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and compared it with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods. PMID:28134859

  4. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
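    Adapting parameters to maximize a cost function can be sketched, in its simplest form, as an exhaustive search over a small parameter grid. The two-parameter toy cost below is a hypothetical stand-in for BSE's four parameters and the surface-area-to-volume objective; the paper's actual optimizer is not described in the abstract.

```python
import itertools

def tune(cost, grids):
    """Exhaustively search a small parameter grid and return the setting
    that maximises `cost`, with the achieved value."""
    best_p, best_c = None, float("-inf")
    for params in itertools.product(*grids):
        c = cost(params)
        if c > best_c:
            best_p, best_c = params, c
    return best_p, best_c

# Hypothetical 2-parameter cost with a known optimum at (0.5, 3)
cost = lambda p: -((p[0] - 0.5) ** 2 + (p[1] - 3) ** 2)
best, value = tune(cost, [[0.1, 0.3, 0.5, 0.7], [1, 2, 3, 4]])
```

    With four parameters the grid grows combinatorially, which is why practical implementations typically use a smarter search than full enumeration.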

  5. THE APPLICATION OF ENGLISH-WORD MORPHOLOGY TO AUTOMATIC INDEXING AND EXTRACTING. ANNUAL SUMMARY REPORT.

    ERIC Educational Resources Information Center

    DOLBY, J.L.; AND OTHERS

    THE STUDY IS CONCERNED WITH THE LINGUISTIC PROBLEM INVOLVED IN TEXT COMPRESSION--EXTRACTING, INDEXING, AND THE AUTOMATIC CREATION OF SPECIAL-PURPOSE CITATION DICTIONARIES. IN SPITE OF EARLY SUCCESS IN USING LARGE-SCALE COMPUTERS TO AUTOMATE CERTAIN HUMAN TASKS, THESE PROBLEMS REMAIN AMONG THE MOST DIFFICULT TO SOLVE. ESSENTIALLY, THE PROBLEM IS TO…

  6. A new artefacts resistant method for automatic lineament extraction using Multi-Hillshade Hierarchic Clustering (MHHC)

    NASA Astrophysics Data System (ADS)

    Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján

    2016-07-01

    This paper presents a new method of automatic lineament extraction that includes the removal of the 'artefacts effect' associated with the process of raster-based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of derived 'protolineaments'. The algorithm also includes classification into positive and negative lineaments. MHHC was tested in two different territories, in the Bohemian Forest and the Central Western Carpathians. The original vector-based algorithm was developed to compare the proximity of individual lineaments. Its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.

  7. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (e.g., branches or tributaries) and intrusion-like structures (e.g., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, can be extracted at each level in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  8. Three-Dimensional Road Network by Fusion of Polarimetric and Interferometric SAR Data

    NASA Technical Reports Server (NTRS)

    Gamba, P.; Houshmand, B.

    1998-01-01

    In this paper a fuzzy classification procedure is applied to polarimetric radar measurements, and street pixels are detected. These data are successively grouped into consistent roads by means of a dynamic programming approach based on the fuzzy membership function values. Further fusion of the 2D road network extracted and 3D TOPSAR measurements provides a powerful way to analyze urban infrastructures.

  9. A decision algorithm for determining safe clearing limits for the construction of skid roads

    Treesearch

    Chris LeDoux

    2006-01-01

    The majority of the timber harvested in the United States is extracted by ground-based skidders and crawler/dozer systems. Ground-based systems generally require a primary transportation network (a network of skid trails/roads) throughout the area being harvested. Logs are skidded or dragged along these skid roads/trails as they are transported from where they were cut...

  10. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
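Steps 2 and 3 of the pipeline reduce to elementary plane geometry: a block corner is the unique intersection point of three non-parallel discontinuity planes. A minimal sketch (the (normal, offset) plane representation n·x = d is an assumption about how the fitted planes are stored):

```python
import numpy as np

def plane_corner(planes):
    """Intersection point of three discontinuity planes, each given as
    (normal, d) with plane equation n . x = d. Returns None when the
    normals are (near-)parallel, so no unique corner exists."""
    N = np.array([n for n, _ in planes], dtype=float)   # 3x3 normal matrix
    d = np.array([d for _, d in planes], dtype=float)   # offsets
    if abs(np.linalg.det(N)) < 1e-9:
        return None
    return np.linalg.solve(N, d)
```

Each valid corner found this way would then seed the block-candidate extraction in step 3.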

  11. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians' Patient Care Decisions.

    PubMed

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.

  12. Proceedings of the European ISTVS Conference (6th), OVK Symposium (4th), on "Off Road Vehicles in Theory and Practice", Held at Vienna, Austria on 28-30 September 1994. Appendix.

    DTIC Science & Technology

    1994-09-30

    Experimentalfahrzeug 8x, Dipl.-Ing. W. Slinkell, Mercedes-Benz AG, Stuttgart, Germany. [Translated from German:] To demonstrate high mobility, an experimental vehicle Mx was built in 1986... Propelled by a 6-cylinder turbocharged Diesel engine from the Mercedes-Benz passenger car series, and also equipped with an automatic transmission by Mercedes-Benz, it reaches 52 kilometers per hour when used on the road and 4 km/h when swimming through water. DESIGN FEATURES

  13. Albemarle County road orders, 1783-1816.

    DOT National Transportation Integrated Search

    1975-01-01

    During the early stages of the pilot study of Albemarle County it was necessary to examine and extract all the road orders for the counties from which Albemarle was formed, as well as the orders for Albemarle when it still contained the counties of A...

  14. Automatic detection of adverse events to predict drug label changes using text and data mining techniques.

    PubMed

    Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki

    2013-11-01

    The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card, and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for the extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and the subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted. Of these, 6% of drug label changes were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for updating civil aviation and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating, and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we develop a prototype system, and the results show that our method achieves good effects.

  16. Literature mining of protein-residue associations with graph rules learned through distant supervision.

    PubMed

    Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin

    2012-10-05

    We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying the amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.

  17. Road boundary detection

    NASA Technical Reports Server (NTRS)

    Sowers, J.; Mehrotra, R.; Sethi, I. K.

    1989-01-01

    A method for extracting road boundaries using a monochrome image of a visual road scene is presented. Statistical information regarding the intensity levels present in the image, along with some geometrical constraints concerning the road, forms the basis of this approach. The major advantages of this technique, when compared to others, are its ability to process the image in only one pass, to limit the area searched in the image using only knowledge of the road geometry and previous boundary information, and to dynamically adjust for inconsistencies in the located boundary information, all of which helps to increase the efficacy of this technique.

  18. 18 CFR 415.33 - Uses by special permit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... transient enterprises. (3) Drive-in theaters, signs and billboards. (4) Extraction of sand, gravel and other...) Utilities, railroad tracks, streets and bridges. Public utility facilities, roads, railroad tracks and... of protection may be provided for minor or auxiliary roads, railroads or utilities. (5) Water supply...

  19. 18 CFR 415.33 - Uses by special permit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... transient enterprises. (3) Drive-in theaters, signs and billboards. (4) Extraction of sand, gravel and other...) Utilities, railroad tracks, streets and bridges. Public utility facilities, roads, railroad tracks and... of protection may be provided for minor or auxiliary roads, railroads or utilities. (5) Water supply...

  20. 18 CFR 415.33 - Uses by special permit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... transient enterprises. (3) Drive-in theaters, signs and billboards. (4) Extraction of sand, gravel and other...) Utilities, railroad tracks, streets and bridges. Public utility facilities, roads, railroad tracks and... of protection may be provided for minor or auxiliary roads, railroads or utilities. (5) Water supply...

  1. 18 CFR 415.33 - Uses by special permit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... transient enterprises. (3) Drive-in theaters, signs and billboards. (4) Extraction of sand, gravel and other...) Utilities, railroad tracks, streets and bridges. Public utility facilities, roads, railroad tracks and... of protection may be provided for minor or auxiliary roads, railroads or utilities. (5) Water supply...

  2. 18 CFR 415.33 - Uses by special permit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... transient enterprises. (3) Drive-in theaters, signs and billboards. (4) Extraction of sand, gravel and other...) Utilities, railroad tracks, streets and bridges. Public utility facilities, roads, railroad tracks and... of protection may be provided for minor or auxiliary roads, railroads or utilities. (5) Water supply...

  3. Manhole Cover Detection Using Vehicle-Based Multi-Sensor Data

    NASA Astrophysics Data System (ADS)

    Ji, S.; Shi, Y.; Shi, Z.

    2012-07-01

    A new method combining multi-view matching and feature extraction techniques is developed to detect manhole covers on streets using close-range images together with GPS/IMU and LiDAR data. Manhole covers are important road-traffic targets, like traffic signs, traffic lights, and zebra crossings, but with more uniform shapes. However, differing shooting angles and distances, ground material, complex street scenes (especially their shadows), and cars on the road have a great impact on the cover detection rate. This paper introduces a new method for edge detection and feature extraction in order to overcome these difficulties and greatly improve the detection rate. The LiDAR data are used for scene segmentation, and the street scene and cars are excluded from the roads. An edge detection method based on Canny, which is sensitive to arcs and ellipses, is applied to the segmented road scene; areas of interest containing arcs are extracted and fitted to ellipses. The ellipses are then resampled for invariance to shooting angle and distance, and matched against adjacent images to further check whether they are covers. More than 1000 images with different scenes are used in our tests, and the detection rate is analyzed. The results verify that our method has advantages in correctly detecting covers in complex street scenes.

  4. 46 CFR 161.002-2 - Types of fire-protective systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...

  5. 46 CFR 161.002-2 - Types of fire-protective systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...

  6. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun

    2017-04-01

    Updating the inventory of road infrastructures based on field work is labor-intensive, time-consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions, and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds. 
Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
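The per-object classification stage described above can be sketched with scikit-learn. The feature dimensionality, the class set, and the randomly generated data below are hypothetical placeholders, not the authors' actual features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-object feature vectors: point-, segment- and
# object-level descriptors concatenated with contextual features
# (relative position/direction to neighbouring objects).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 200 candidate objects, 12 features
y = rng.integers(0, 3, size=200)          # e.g. lamp / sign / guardrail labels

# Scale the heterogeneous features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
labels = clf.predict(X)
```

In the paper's framework, each candidate object segmented by region growing would contribute one row of X, and the predicted label is its facility type.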

  7. High Resolution Trichromatic Road Surface Scanning with a Line Scan Camera and Light Emitting Diode Lighting for Road-Kill Detection.

    PubMed

    Lopes, Gil; Ribeiro, A Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo

    2016-04-19

    This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting, achieving road surface resolution under a millimeter. It was part of a project named Roadkills - Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed to accommodate the complete system, with standalone power generation, computer image capture and recording, controlled lighting to operate day or night without disturbance, an incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels, sub-meter Global Positioning System (GPS) localization, and ease of use with any vehicle with a trailer towing system, all focused on a complete low-cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation, and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels of image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians.
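The quoted speed/resolution trade-off follows from simple line-rate arithmetic: the camera must capture one line each time the trailer advances by one pixel along the road. Notably, the two operating points in the abstract imply the same line rate:

```python
def required_line_rate(speed_kmh, pixel_size_m):
    """Line-scan rate (lines/s) needed so that each captured line
    advances exactly one pixel along the road."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms / pixel_size_m

# 30 km/h at 250 um/pixel and 120 km/h at 1 mm/pixel need the same
# line rate (~33,333 lines/s), which is why coarsening the resolution
# by 4x buys a 4x higher survey speed.
r1 = required_line_rate(30, 250e-6)
r2 = required_line_rate(120, 1e-3)
```

This is a back-of-the-envelope check on the abstract's figures, not a description of the camera's actual configuration.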

  8. Innovation as Road Safety Felicitator

    NASA Astrophysics Data System (ADS)

    Sahoo, S.; Mitra, A.; Kumar, J.; Sahoo, B.

    2018-03-01

    Transportation via roads should only be used for safely commuting from one place to another. In 2015, when 1.5 million people across the globe started out on a journey, it was meant to be their last. The Global Status Report on Road Safety, 2015, reflected this data from 180 countries as road traffic deaths worldwide. In India, more than 1.37 lakh[4] people were victims of road accidents in 2013 alone. That number is more than the number of Indians killed in all the wars put together. With these disturbing facts in mind, we identified some key ambiguities in Indian road traffic management systems, such as their non-adaptive response to fluctuating traffic and the failure of pedestrians and motor vehicles to adhere strictly to traffic norms, to name a few. The introduction of simple systems would greatly reduce the effects of this silent epidemic, and our project aims to achieve the same. It would introduce a pair of barricade systems to cautiously separate pedestrians and motor vehicles to minimise road mishaps to the extent possible. Exceptional situations, like that of an ambulance or any emergency vehicle, will be taken care of by the use of RFID tags to monitor the movement of the barricades. The varied traffic scenario can be guided properly by using ADS-B (Automatic Dependent Surveillance-Broadcast) for monitoring traffic density according to time and place.

  9. High Resolution Trichromatic Road Surface Scanning with a Line Scan Camera and Light Emitting Diode Lighting for Road-Kill Detection

    PubMed Central

    Lopes, Gil; Ribeiro, A. Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo

    2016-01-01

    This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting achieving road surface resolution under a millimeter. It was part of a project named Roadkills—Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed in order to accommodate the complete system with standalone power generation, computer image capture and recording, controlled lighting to operate day or night without disturbance, incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels, under a meter Global Positioning System (GPS) localization, easy to utilize with any vehicle with a trailer towing system and focused on a complete low cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels’ image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians. PMID:27104535

  10. Evaluation of Driver Visibility from Mobile LIDAR Data and Weather Conditions

    NASA Astrophysics Data System (ADS)

    González-Jorge, H.; Díaz-Vilariño, L.; Lorenzo, H.; Arias, P.

    2016-06-01

    Visibility of drivers is crucial to ensure road safety. Visibility is influenced by two main factors: the geometry of the road and the weather present therein. The present work depicts an approach for automatic visibility evaluation using mobile LiDAR data and climate information provided by weather stations located in the neighbourhood of the road. The methodology is based on a ray-tracing algorithm to detect occlusions from point clouds, with the purpose of identifying the visibility area from each driver position. The resulting data are normalized with the climate information to provide a polyline with an accurate area of visibility. Visibility ranges from 25 m (heavy fog) to more than 10,000 m (clean atmosphere). Values over 250 m are not taken into account for road safety purposes, since this value corresponds to the maximum braking distance of a vehicle. Two case studies are evaluated: an urban road in the city of Vigo (Spain) and an inter-urban road between the city of Ourense and the village of Castro Caldelas (Spain). In both cases, data from the Galician Weather Agency (Meteogalicia) are used. The algorithm shows promising results, allowing the detection of particularly dangerous areas from the viewpoint of driver visibility. The mountain road between Ourense and Castro Caldelas, with its many slopes and sharp curves, is of special interest for this type of application. In this case, poor visibility can especially contribute to the running over of pedestrians or cyclists traveling on the road shoulders.

  11. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  12. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    PubMed Central

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, as well as considering the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI), and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve, which results from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, Otsu's algorithm, and the Fuzzy C-means (FCM) algorithm are made to show that the proposed approach can extract the outline curve of a fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
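The NDVI and NBR indices feeding the difference image follow standard band-ratio definitions. A minimal sketch; the band naming and the dNBR-style pre/post differencing are assumptions about the exact combination used, not the paper's full CVA pipeline:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-9)

def burn_difference(pre, post):
    """dNBR-style difference image between pre-fire and post-fire scenes;
    strongly positive values indicate likely burn scars."""
    return nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
```

A thresholded (or K-means-clustered) version of such a difference image is the kind of binary map the paper uses to seed the level-set initial curve.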

  13. Some advances/results in monitoring road cracks from 2D pavement images within the scope of the collaborative FP7 TRIMM project

    NASA Astrophysics Data System (ADS)

    Baltazart, Vincent; Moliard, Jean-Marc; Amhaz, Rabih; Wright, Dean; Jethwa, Manish

    2015-04-01

    Monitoring road surface conditions is an important issue in many countries. Several projects have looked into this issue in recent years, including TRIMM 2011-2014. The objective of such projects has been to detect surface distresses, like cracking, raveling and water ponding, in order to plan effective road maintenance and to afford a better sustainability of the pavement. The monitoring of cracking conventionally focuses on open cracks on the surface of the pavement, as opposed to reflexive cracks embedded in the pavement materials. For monitoring surface condition, in situ human visual inspection has been gradually replaced by automatic image data collection at traffic speed. Off-line image processing techniques have been developed for monitoring surface condition in support of human visual control. Full automation of crack monitoring has been approached with caution, and depends on a proper manual assessment of the performance. This work firstly presents some aspects of the current state of monitoring that have been reported so far in the literature and in previous projects: imaging technology and image processing techniques. Then, the work presents the two image processing techniques that have been developed within the scope of the TRIMM project to automatically detect pavement cracking from images. The first technique is a heuristic approach (HA) based on the search for gradient within the image. It was originally developed to process pavement images from the French imaging device, Aigle-RN. The second technique, the Minimal Path Selection (MPS) method, has been developed within an ongoing PhD work at IFSTTAR. The proposed new technique provides a fine and accurate segmentation of the crack pattern along with the estimation of the crack width. HA has been assessed against the field data collection provided by Yotta and TRL with the imaging device Tempest 2. 
The performance assessment has been threefold: first, it was performed against the reference data set including 130 km of pavement images over UK roads; second, over a few selected short sections of contiguous pavement images; and finally, over a few sample images as a case study. The performance of MPS has been assessed against an older image database. Pixel-based ground truth (PGT) was available to provide the most sensitive performance assessment. MPS has shown its ability to provide a very accurate cracking pattern without reducing the image resolution on the segmented images. Thus, it allows measurement of the crack width; it is found to behave more robustly against image texture and to be better suited to dealing with low-contrast pavement images. The benchmarking of seven automatic segmentation techniques has been provided at both the pixel and the grid levels. The performance assessment includes three minimal path selection algorithms, namely MPS, Free Form Anisotropy (FFA), and one geodesic contour with automatic selection of points of interest (GC-POI), plus HA and two Markov-based methods. Among others, the MPS approach reached the best performance at the pixel level, while it was matched by the FFA approach at the grid level. Finally, the project has emphasized the need for reliable ground truth data collection. Owing to its accuracy, MPS may serve as a reference benchmark for other methods providing automatic segmentation of pavement images at the pixel level and beyond. As a counterpart, MPS requires a reduction in computing time. Keywords: cracking, automatic segmentation, image processing, pavement, surface distress, monitoring, DICE, performance

  14. Using high resolution DEMs to assess the effects of roads and trails on hydrological pathways that contribute to gully development

    NASA Astrophysics Data System (ADS)

    Sidle, R. C.; Jarihani, B.

    2017-12-01

Dry savannas of northern Queensland, Australia experience severe gully erosion, particularly in areas that have been heavily grazed. Field surveys have also noted the influence of unpaved roads and cattle trails on concentrating storm runoff into gully systems. To better quantify the effect of these roads and trails we use high resolution digital elevation models (DEMs) to develop indices of hydrological connectivity (IC) throughout drainage areas above and downstream of gully systems. High resolution (0.5m) DEMs from LiDAR were used to extract road and trail networks and drone-based very high resolution (0.1m) DEMs were used to extract cattle trails. IC is a function of the ratio of upslope to downslope sediment routing functions, which are based on upslope area, mean slope gradient, a weighting factor related to impedance to overland flow, and flow path distance (for the downstream function). Maps of IC within the heavily grazed Weany Creek catchment (13 km2) of northeast Queensland show that existing roads can increase hydrologic connectivity to gully systems. Furthermore, by adding roads and cattle trails into existing DEMs, we show how the extent and location of these curvilinear features affect overland flow concentration. Our findings can inform important hydrogeomorphic issues such as which gullies will likely headcut or expand and where new gullies may arise. Our analysis can also contribute to better management practices for grazing and road location.
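The IC formulation described above (an upslope component Dup = W·S·√A over a downslope component Ddn = Σ dᵢ/(WᵢSᵢ)) can be sketched in a few lines. This is a minimal illustration of a Borselli-style connectivity index; all weights, slopes, areas and path lengths are hypothetical numbers, not values from the study:

```python
import math

def upslope_component(mean_weight, mean_slope, area_m2):
    # Dup = W * S * sqrt(A): mean impedance weight, mean slope, upslope area
    return mean_weight * mean_slope * math.sqrt(area_m2)

def downslope_component(path_cells):
    # Ddn = sum(d_i / (W_i * S_i)) along the downstream flow path,
    # where each cell contributes its length d, weight W and slope S
    return sum(d / (w * s) for d, w, s in path_cells)

def connectivity_index(mean_weight, mean_slope, area_m2, path_cells):
    """IC = log10(Dup / Ddn), higher values meaning better connectivity."""
    dup = upslope_component(mean_weight, mean_slope, area_m2)
    ddn = downslope_component(path_cells)
    return math.log10(dup / ddn)

# Hypothetical cell: 1 ha upslope area, ten 2 m downstream cells
ic = connectivity_index(0.5, 0.1, 10000.0, [(2.0, 0.5, 0.1)] * 10)
```

In this formulation a smoother surface (such as a compacted road) takes a larger weight W, which shrinks Ddn and raises IC; this is the mechanism by which adding roads and trails to the DEM increases mapped connectivity to the gullies.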

  15. Computed Tomography-Based Biomarker for Longitudinal Assessment of Disease Burden in Pulmonary Tuberculosis.

    PubMed

    Gordaliza, P M; Muñoz-Barrutia, A; Via, L E; Sharpe, S; Desco, M; Vaquero, J J

    2018-05-29

Computed tomography (CT) images enable capturing specific manifestations of tuberculosis (TB) that are undetectable using common diagnostic tests, which suffer from limited specificity. In this study, we aimed to automatically quantify the burden of Mycobacterium tuberculosis (Mtb) using biomarkers extracted from X-ray CT images. Nine macaques were aerosol-infected with Mtb and treated with various antibiotic cocktails. Chest CT scans were acquired in all animals at specific times independently of disease progression. First, a fully automatic segmentation of the healthy lungs from the acquired chest CT volumes was performed and air-like structures were extracted. Next, unsegmented pulmonary regions corresponding to damaged parenchymal tissue and TB lesions were included. CT biomarkers were extracted by classification of the probability distribution of the intensity of the segmented images into three tissue types: (1) healthy tissue, parenchyma free from infection; (2) soft diseased tissue, and (3) hard diseased tissue. The probability distribution of tissue intensities was assumed to follow a Gaussian mixture model. The thresholds identifying each region were automatically computed using an expectation-maximization algorithm. The estimated longitudinal course of TB infection shows that subjects that have followed the same antibiotic treatment present a similar response (relative change in the diseased volume) with respect to baseline. More interestingly, the correlation between the diseased volume (soft tissue + hard tissue), which was manually delineated by an expert, and the automatically extracted volume with the proposed method was very strong (R² ≈ 0.8). We present a methodology that is suitable for automatic extraction of a radiological biomarker from CT images for TB disease burden. The method could be used to describe the longitudinal evolution of Mtb infection in a clinical trial devoted to the design of new drugs.
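The tissue-classification step described above, a three-component Gaussian mixture fitted by expectation-maximization, can be sketched on synthetic 1D intensity data. The modes and sample counts below are hypothetical stand-ins, not the study's calibration:

```python
import numpy as np

def fit_gmm3(x, iters=100):
    """EM for a 3-component 1D Gaussian mixture (healthy / soft / hard tissue)."""
    mu = np.quantile(x, [0.1, 0.7, 0.95])   # crude, spread-out initial means
    var = np.full(3, x.var() / 16.0)
    w = np.full(3, 1.0 / 3.0)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    order = np.argsort(mu)
    return mu[order], var[order], w[order]

# Synthetic "CT intensity" samples with three modes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-850.0, 40.0, 3000),   # healthy, air-like lung
                    rng.normal(-300.0, 50.0, 1000),   # soft diseased tissue
                    rng.normal(40.0, 30.0, 500)])     # hard diseased tissue
mu, var, w = fit_gmm3(x)
```

In the paper's setting the fitted component parameters would then yield the intensity thresholds separating the three tissue classes (e.g. where adjacent component densities intersect); only the mixture fit is sketched here.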

  16. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie point pairs according to geographic characteristics from such heterogeneous images. Since there are big differences between such heterogeneous images with respect to texture and corner features, a more detailed analysis is performed to find similarities and differences between high resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Crossings and corners extracted from these linear features are chosen as GCPs. On the other hand, a similar method was used to find the same features in DRGs. Finally, the Hausdorff distance was adopted to pick matching GCPs from the above two GCP groups. Experiments showed that the method can extract GCPs from such images with a reasonable RMS error.
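The final matching step relies on the Hausdorff distance between the two candidate GCP sets. A minimal sketch of the directed Hausdorff distance on hypothetical corner coordinates:

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points in a of the distance to the nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Hypothetical candidate GCPs from the remote sensing image and the DRG
img_pts = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 40.0]])
drg_pts = np.array([[11.0, 9.0], [49.0, 13.0], [31.0, 41.0], [90.0, 90.0]])
h = directed_hausdorff(img_pts, drg_pts)
```

A small directed distance from the image set to the DRG set indicates that every image candidate has a close DRG counterpart, even when the DRG contains extra, unmatched candidates such as the outlier above.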

  17. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and much more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
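Of the techniques combined in the method, the watershed stage is the most self-contained to illustrate. A minimal marker-based watershed sketch with SciPy on a synthetic image (two dark basins standing in for surface depressions; the markers and intensities are hypothetical):

```python
import numpy as np
from scipy import ndimage

# Synthetic scene: two dark "crater-like" basins in a bright surround (uint8)
img = np.full((7, 7), 200, dtype=np.uint8)
img[1:4, 1:3] = 10    # basin 1
img[3:6, 4:6] = 10    # basin 2

# Seed markers: one per basin, plus a negative background marker
markers = np.zeros_like(img, dtype=np.int16)
markers[2, 1] = 1     # seed inside basin 1
markers[4, 5] = 2     # seed inside basin 2
markers[0, 0] = -1    # background seed in the bright region

# Flood from the markers: each basin pixel takes its seed's label
labels = ndimage.watershed_ift(img, markers)
```

In the paper's pipeline the markers would come from the data (e.g. regional intensity minima) rather than being placed by hand, and the watershed regions would then be screened with the generalized Hough Transform.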

  18. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians’ Patient Care Decisions

    PubMed Central

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867

  19. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

This study deals with a method to realize automatic contour extraction of facial features such as eyebrows, eyes and mouth for time-wise frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape and then determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing frontal facial images captured at 1/30 s intervals, changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and find that it enables highly accurate automatic contour extraction of facial features.
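The dynamic programming step, choosing one candidate position per control point so that image energy plus elastic (smoothness) energy is minimal, can be sketched as a Viterbi-style recursion. The energy table below is a toy stand-in, not data from the study:

```python
import numpy as np

def best_contour(image_energy, alpha=1.0):
    """Pick one offset per control point minimising image energy plus an
    elastic penalty on offset changes, by dynamic programming."""
    n, m = image_energy.shape                 # n control points, m candidates
    offs = np.arange(m, dtype=float)
    trans = alpha * (offs[:, None] - offs[None, :]) ** 2  # trans[j_prev, j]
    cost = image_energy[0].astype(float).copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + trans         # total[j_prev, j]
        back[i] = total.argmin(axis=0)        # best predecessor for each j
        cost = total.min(axis=0) + image_energy[i]
    path = np.empty(n, dtype=int)
    path[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):             # backtrack the optimal path
        path[i - 1] = back[i][path[i]]
    return path

# Toy energies: 4 control points, 5 candidate offsets; low image energy near
# offset 2, with one tempting outlier at offset 4 for the third point
E = np.array([[9, 9, 0, 9, 9],
              [9, 9, 0, 9, 9],
              [9, 9, 1, 9, 0],
              [9, 9, 0, 9, 9]], dtype=float)
path = best_contour(E, alpha=1.0)
```

With the elastic penalty active, the outlier at offset 4 is rejected because the squared jump in and out of it costs more than the slightly higher image energy at offset 2, which is exactly the smoothing behaviour the elastic contour model is meant to provide.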

  20. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  1. Comparative Evaluation of Pavement Crack Detection Using Kernel-Based Techniques in Asphalt Road Surfaces

    NASA Astrophysics Data System (ADS)

    Miraliakbari, A.; Sok, S.; Ouma, Y. O.; Hahn, M.

    2016-06-01

With the increasing demand for the digital survey and acquisition of road pavement conditions, there is also a parallel growing need for the development of automated techniques for the analysis and evaluation of the actual road conditions. This is due in part to the resulting large volumes of road pavement data captured through digital surveys, and also to the requirements for rapid data processing and evaluation. In this study, the Canon 5D Mark II RGB camera with a resolution of 21 megapixels is used for road pavement condition mapping. Even though many imaging and mapping sensors are available, the development of automated pavement distress detection, recognition and extraction systems for pavement condition is still a challenge. In order to detect and extract pavement cracks, a comparative evaluation of kernel-based segmentation methods comprising line filtering (LF), local binary pattern (LBP) and high-pass filtering (HPF) is carried out. While the LF and LBP methods are based on the principle of rotation invariance for pattern matching, HPF applies the same principle for filtering, but with a rotationally invariant matrix. With respect to processing speed, HPF is fastest due to the fact that it is based on a single kernel, as compared to LF and LBP, which are based on several kernels. Experiments with 20 sample images which contain linear, block and alligator cracks are carried out. On average, distress extraction completeness values of 81.2%, 76.2% and 81.1% were found for LF, HPF and LBP, respectively.
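The single-kernel HPF idea can be illustrated with a rotationally invariant Laplacian-style high-pass kernel (a common choice; the paper's exact matrix is not given here), applied to a synthetic patch with a dark crack:

```python
import numpy as np
from scipy import ndimage

# One rotationally invariant high-pass (Laplacian-style) kernel
HPF_KERNEL = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

# Synthetic pavement patch: uniform surface with a dark 1-pixel-wide crack
img = np.full((9, 9), 100.0)
img[:, 4] = 60.0  # vertical crack

# A single convolution gives the crack response regardless of orientation
response = ndimage.convolve(img, HPF_KERNEL, mode="reflect")
crack_mask = response < -40  # strong negative response marks dark cracks
```

Because the kernel is rotationally invariant, the same single convolution responds to linear, block and alligator cracks alike, which is why HPF is the fastest of the three compared methods.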

  2. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recalled images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
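The Dice similarity coefficient used for the bone-segmentation figures is straightforward to compute from two binary masks; the masks below are toy examples, not knee data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), 1.0 for identical masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy automatic vs. manual segmentations, offset by one row
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 2:8] = True    # 36 pixels, shifted down by one row
d = dice(auto, manual)
```

A score of 0.96, as reported for the femur and tibia, corresponds to a near-complete overlap; the toy one-row offset above already drops the score well below that.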

  3. Exact extraction method for road rutting laser lines

    NASA Astrophysics Data System (ADS)

    Hong, Zhiming

    2018-02-01

This paper analyzes the importance of asphalt pavement rutting detection in pavement maintenance and administration, presents the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on peak intensity characteristics and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, based on peak intensity calculation for the whole road image, is obtained to determine the seed point of the rutting laser line. Taking the seed point as the starting point, the light points of the rutting line-laser are extracted based on the feature of peak continuity, providing accurate base data for the subsequent calculation of pavement rutting depths.
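The pipeline above, transverse mean filtering, seed-point selection at the strongest peak, then growth under a peak-continuity constraint, can be sketched as follows. The window size and continuity tolerance are hypothetical choices, not the paper's parameters:

```python
import numpy as np

def transverse_mean_filter(img, width=5):
    """Average along the transverse (row) direction to reinforce the
    horizontally continuous laser-line peak."""
    kernel = np.ones(width) / width
    return np.array([np.convolve(row, kernel, mode="same") for row in img])

def extract_laser_line(img):
    """Per-column peak rows, grown outward from the strongest (seed) column."""
    filt = transverse_mean_filter(img)
    seed_col = int(filt.max(axis=0).argmax())       # column of the seed point
    line = np.full(img.shape[1], -1, dtype=int)
    line[seed_col] = int(filt[:, seed_col].argmax())
    # grow left and right, keeping each peak near its neighbour (continuity)
    for cols in (range(seed_col + 1, img.shape[1]), range(seed_col - 1, -1, -1)):
        for c in cols:
            prev = line[c + (-1 if c > seed_col else 1)]
            lo, hi = max(0, prev - 2), min(img.shape[0], prev + 3)
            line[c] = lo + int(filt[lo:hi, c].argmax())
    return line

# Synthetic laser stripe drifting slowly down a dark image
img = np.zeros((20, 12))
rows = np.clip(5 + np.arange(12) // 3, 0, 19)
img[rows, np.arange(12)] = 1.0
line = extract_laser_line(img)
```

The continuity window is what keeps the extraction on the stripe when isolated bright noise pixels appear elsewhere in a column, which is the robustness property the seed-and-grow design targets.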

  4. Automatic emotional expression analysis from eye area

    NASA Astrophysics Data System (ADS)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.
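The wavelet-parameter step can be illustrated with a one-level Haar transform, the simplest discrete wavelet; the "eye-area" signal and the derived statistics below are toy stand-ins for the paper's features:

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level Haar wavelet transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

# Toy intensity profile across the eye area; statistics of the detail band
# (energy, mean absolute value) are the kind of parameters fed to a classifier
signal = np.array([4.0, 4.0, 8.0, 8.0, 2.0, 6.0, 5.0, 3.0])
a, d = haar_dwt_1d(signal)
features = [float(np.sum(d ** 2)), float(np.mean(np.abs(d)))]
```

In practice a 2D separable wavelet over the eye-region image and several decomposition levels would be used; the principle of summarising each band into a few scalar features is the same.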

  5. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.

  6. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Höynck, Michael

    2004-12-01

Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.

  7. Comparison of Landsat-8, ASTER and Sentinel 1 satellite remote sensing data in automatic lineaments extraction: A case study of Sidi Flah-Bouskour inlier, Moroccan Anti Atlas

    NASA Astrophysics Data System (ADS)

    Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa

    2017-12-01

Lineament mapping occupies an important place in several fields, including geology, hydrogeology and topography. With the help of remote sensing techniques, lineaments can be better identified due to strong advances in the data and methods used, which has made it possible to go beyond the usual classical procedures and achieve more precise results. The aim of this work is the comparison of ASTER, Landsat-8 and Sentinel 1 sensor data in automatic lineament extraction. In addition to image data, the approach followed includes the use of a pre-existing geological map and a Digital Elevation Model (DEM), as well as ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we have found the optimal parameters for automatic lineament extraction in the study area. Thereafter, the comparison and validation of the obtained results showed that the Sentinel 1 data are more effective at restituting lineaments, indicating that radar data outperform optical data in this kind of study.
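Of the edge-detection-plus-line-linking combination, the edge stage is easy to sketch. Here a Sobel gradient magnitude (one common edge detector; the paper's exact operator and parameters are not specified here) highlights a synthetic linear feature:

```python
import numpy as np
from scipy import ndimage

def edge_magnitude(img):
    """Sobel gradient magnitude: the edge-detection stage that precedes
    line linking in automatic lineament extraction."""
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    return np.hypot(gx, gy)

# Synthetic scene: a linear "lineament" as a vertical intensity step
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = edge_magnitude(img)
edges = mag > mag.max() * 0.5        # threshold into a binary edge map
```

The binary edge map would then be passed to a line-linking step that joins collinear edge segments into lineament candidates; that stage is algorithm-specific and is not sketched here.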

  8. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

An automatic framework is proposed to segment right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform (SMT), a training model, and a localized region based level set. First, the sparse matrix transform extracts main motion regions of myocardium as eigenimages by analyzing statistical information of these images. Second, a training model of right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% ± 2.3% and 83.6% ± 7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  9. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data that often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  10. A Robust Concurrent Approach for Road Extraction and Urbanization Monitoring Based on Superpixels Acquired from Spectral Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Seppke, Benjamin; Dreschler-Fischer, Leonie; Wilms, Christian

    2016-08-01

The extraction of road signatures from remote sensing images as a promising indicator of urbanization is a classical segmentation problem. However, standard segmentation algorithms often lead to insufficient results. One way to overcome this problem is the use of superpixels, which represent locally coherent clusters of connected pixels. Superpixels allow flexible, highly adaptive segmentation approaches due to the possibility of merging as well as splitting them to form new basic image entities. On the other hand, superpixels require an appropriate representation containing all relevant information about topology and geometry to maximize their advantages. In this work, we present a combined geometric and topological representation based on a special graph representation, the so-called RS-graph. Moreover, we present the use of the RS-graph by means of a case study: the extraction of partially occluded road networks in rural areas from open source (spectral) remote sensing images by tracking. In addition, multiprocessing and GPU-based parallelization are used to speed up the construction of the representation and the application.
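A superpixel segmentation in the spirit described above can be sketched as grid-seeded k-means over (x, y, intensity) features, a heavily simplified SLIC-style procedure and not the authors' RS-graph pipeline:

```python
import numpy as np

def slic_like(img, n_seg_per_axis=2, m=10.0, iters=5):
    """Very simplified SLIC-style superpixels: k-means on (x, y, intensity)
    with grid-initialised cluster centres; m weights colour vs. space."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs.ravel(), ys.ravel(), m * img.ravel()], axis=1).astype(float)
    # seed cluster centres on a regular grid
    step_y, step_x = h // n_seg_per_axis, w // n_seg_per_axis
    centres = []
    for i in range(n_seg_per_axis):
        for j in range(n_seg_per_axis):
            cy, cx = i * step_y + step_y // 2, j * step_x + step_x // 2
            centres.append([cx, cy, m * img[cy, cx]])
    centres = np.array(centres, dtype=float)
    for _ in range(iters):
        # assign each pixel to the nearest centre, then update the centres
        d = np.linalg.norm(feats[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centres)):
            if (labels == k).any():
                centres[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)

# A dark "road" stripe crossing a bright field
img = np.ones((8, 8))
img[:, 3:5] = 0.0
labels = slic_like(img)
```

Real SLIC restricts the search to a local window around each centre and enforces connectivity; the resulting locally coherent clusters are then the nodes on which a topology-preserving representation such as the RS-graph can be built.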

  11. Count Me In! on the Automaticity of Numerosity Processing

    ERIC Educational Resources Information Center

    Naparstek, Sharon; Henik, Avishai

    2010-01-01

    Extraction of numerosity (i.e., enumeration) is an essential component of mathematical abilities. The current study asked how automatic is the processing of numerosity and whether automatic activation is task dependent. Participants were presented with displays containing a variable number of digits and were asked to pay attention to the number of…

  12. Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs

    PubMed Central

    Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander

    2016-01-01

We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows integrating different levels of language description into a single structure. We extract textual patterns based on features obtained from shortest path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms the state-of-the-art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740
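The shortest-path-walk features rest on ordinary graph shortest paths. A minimal breadth-first-search sketch over a toy syntactic graph (the node names are hypothetical, not the paper's representation):

```python
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first search for a shortest path in an unweighted graph
    given as an adjacency dict; returns the node sequence or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:          # walk predecessors back to src
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

# Toy syntactic graph: root -> verb -> arguments, plus a modifier link
graph = {"ROOT": ["likes"], "likes": ["John", "apples"],
         "John": [], "apples": ["green"]}
path = shortest_path(graph, "ROOT", "green")
```

In the paper's setting, the labels collected along such paths (POS tags, dependency relations, lemmas) form the textual patterns that feed the authorship classifier; only the path-finding core is sketched here.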

  13. Using expansive grasses for monitoring heavy metal pollution in the vicinity of roads.

    PubMed

    Vachová, Pavla; Vach, Marek; Najnarová, Eva

    2017-10-01

We propose a method for monitoring heavy metal deposition in the vicinity of roads using the leaf surfaces of two highly abundant expansive grass species. A guiding principle of the proposed procedure is to minimize the number of operations in collecting and preparing samples for analysis. The monitored elements are extracted from the leaf surfaces using dilute nitric acid directly in the sample-collection bottle. The ensuing steps, then, are only to filter the extraction solution and the elemental analysis itself. The verification results indicate that the selected grasses Calamagrostis epigejos and Arrhenatherum elatius are well suited to the proposed procedure. Selected heavy metals (Zn, Cu, Pb, Ni, Cr, and Cd) in concentrations appropriate for direct determination using methods of elemental analysis can be extracted from the surface of leaves of these species collected in the vicinity of roads with medium traffic loads. Comparing the two species showed that each had a different relationship between the amounts of deposited heavy metals and distance from the road. This disparity can be explained by specific morphological properties of the two species' leaf surfaces. Due to the abundant occurrence of the two species and the method's general simplicity and ready availability, we regard the proposed approach as a broadly usable and repeatable one that produces reproducible results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. 76 FR 28505 - Okanogan Public Utility District No. 1 of Okanogan County, WA; Notice of Availability of Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ....5 miles of new and upgraded access roads. The Enloe Project would operate automatically in a run-of... run-of-river and implementing agency-recommended ramping rates downstream of the project during... effects on geology and soils and water quality. Run-of-river operation would minimize effects on aquatic...

  15. Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection

    PubMed Central

    Giakoumis, Dimitris; Drosou, Anastasios; Cipresso, Pietro; Tzovaras, Dimitrios; Hassapis, George; Gaggioli, Andrea; Riva, Giuseppe

    2012-01-01

This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim to increase the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing. PMID:23028461
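The Motion History Image descriptor mentioned above has a compact update rule: pixels where motion is detected are stamped with a timestamp value τ, and all other pixels decay toward zero. A toy sketch (the frame sequence and τ are hypothetical):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=5):
    """Motion History Image update: moving pixels set to tau, others decay."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

# A small blob moving one pixel to the right over three frames
frames = np.zeros((3, 5, 5))
for t in range(3):
    frames[t, 2, t + 1] = 1.0

mhi = np.zeros((5, 5))
for t in range(1, 3):
    motion = np.abs(frames[t] - frames[t - 1]) > 0.5  # crude frame differencing
    mhi = update_mhi(mhi, motion, tau=5)
```

The resulting image encodes both where and how recently motion occurred (brighter = more recent), so scalar statistics over the MHI summarise a subject's recent body movement for the stress classifier.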

  16. Segmentation and classification of road markings using MLS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2017-01-01

Traffic signs are one of the most important safety elements in a road network. Particularly, road markings provide information about the limits and direction of each road lane, or warn the drivers about potential danger. The optimal condition of road markings contributes to better road safety. Mobile Laser Scanning technology can be used for infrastructure inspection and specifically for traffic sign detection and inventory. This paper presents a methodology for the detection and semantic characterization of the most common road markings, namely pedestrian crossings and arrows. The 3D point cloud data acquired by a LYNX Mobile Mapper system is filtered in order to isolate reflective points in the road, and each single element is hierarchically classified using Neural Networks. State of the art results are obtained for the extraction and classification of the markings, with F-scores of 94% and 96%, respectively. Finally, data from classified markings are exported to a GIS layer and maintenance criteria based on the aforementioned data are proposed.

  17. Rapid, Potentially Automatable, Method Extract Biomarkers for HPLC/ESI/MS/MS to Detect and Identify BW Agents

    DTIC Science & Technology

    1997-11-01

status can sometimes be reflected in the infectious potential or drug resistance of those pathogens. For example, in Mycobacterium tuberculosis ... Mycobacterium tuberculosis, its antibiotic resistance and prediction of pathogenicity amongst Mycobacterium spp. based on signature lipid biomarkers

  18. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Zhao, Lina

    2015-01-01

In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course documents, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
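The TF-IDF scoring at the heart of the knowledge-point selection can be sketched in plain Python; the course "documents" below are toy token lists, and the paper's VSM-based weighting optimization is omitted:

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Per-document TF-IDF scores; the top-scoring terms of a document
    are its knowledge-point candidates."""
    n = len(docs)
    df = Counter()                     # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)              # term frequency within the document
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return scores

# Toy tokenised course documents (stand-ins for segmented Chinese text)
docs = [["pointer", "array", "pointer", "loop"],
        ["loop", "function", "loop"],
        ["array", "function", "pointer"]]
scores = tfidf_scores(docs)
top = max(scores[0], key=scores[0].get)   # best knowledge-point candidate
```

Terms that are frequent in one document but spread across few documents score highest, which matches the intuition that a knowledge point is characteristic of a particular course unit.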

  19. NGEE Arctic Plant Traits: Soil Nutrient Availability, Kougarok Road Mile Marker 64, Seward Peninsula, Alaska, beginning 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verity Salmon; Colleen Iversen; Amy Breen

    Soil nutrient availability at all vegetation plots was measured using anion and cation binding resins deployed to vegetation plots at the Kougarok hillslope site located at Kougarok Road Marker 64. Concentrations of ammonia, nitrate, and phosphate in resin extract solutions were determined in the lab.

  20. 38. DETAIL OF RUINS OF CYANIDE MIXING AND EXTRACTION SHED, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    38. DETAIL OF RUINS OF CYANIDE MIXING AND EXTRACTION SHED, LOOKING SOUTHEAST. CYANIDE SOLUTION WAS PREPARED HERE AND PUMPED UP INTO THE PROCESSING TANKS, AND THE PREGNANT SOLUTION WAS ALSO EXTRACTED HERE AFTER THE LEACHING PROCESS WAS COMPLETE - Skidoo Mine, Park Route 38 (Skidoo Road), Death Valley Junction, Inyo County, CA

  1. Finding topological center of a geographic space via road network

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You

    2015-02-01

    Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including studies of population distribution, urban growth modeling, and the scaling properties of urban systems. But how to properly define, and how to efficiently extract, the center of a geographic space remains largely unknown. Recently, Jiang et al. presented a definition of the topological center via their block detection (BD) algorithm. Although they first introduced the definition and discovered the 'true center' in human minds, their algorithm leaves several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road-cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: an ArcPy implementation (arcRCD) and a pure-Python implementation (pyRCD). After experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all ten typical road networks.

  2. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological conditions of the heart valves. Applying the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five kinds of abnormal HS signals using the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
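    A common formulation of the Shannon energy envelope is sketched below. This illustrates only the general technique, not the paper's exact pipeline (which also involves the DWT); the window length is an illustrative choice.

```python
import numpy as np

def shannon_envelope(x, win=31):
    """Shannon energy envelope: -x^2 * log(x^2) computed on the normalized
    signal, then smoothed with a moving average. Low-amplitude components
    such as murmurs are emphasized relative to loud S1/S2 peaks."""
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)   # normalize to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)      # Shannon energy per sample
    kernel = np.ones(win) / win
    return np.convolve(e, kernel, mode="same")
```

    The key property is that `-x^2 log(x^2)` attenuates both very small and full-scale samples, flattening the dynamic range so that envelope morphology is easier to analyze.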

  3. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    PubMed

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high-inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need atlas information, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures and makes segmentation difficult. The proposed method overcomes all these difficulties, reaching good brain extraction results using only T2*FLASH@7T volumes. This work will lead to improved automatic segmentation of brain lesions in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
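    The fuzzy c-means updates underlying the second method can be sketched as follows. This is the standard algorithm only; the bias-field correction term the paper adds is omitted, and the parameters are illustrative.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n_samples,
    n_features). Alternates centroid and membership updates; returns
    cluster centers and the (n_samples, c) membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)),
                                               axis=1, keepdims=True))
    return centers, U
```

    For brain segmentation, X would be voxel intensities (possibly with spatial features); the soft memberships are what make it robust to partial-volume voxels.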

  4. A study of the landslide potential along the mountain road using environmental indices

    NASA Astrophysics Data System (ADS)

    Lin, C. Y.

    2014-12-01

    Slope land in Taiwan has been developed rapidly in recent years as a result of dense population and limited land resources, so mountain roads play an essential role in daily life. However, landslide disasters causing road failures occur frequently on Taiwanese slope land due to earthquakes and typhoons. Previous studies found that extreme rainfall coupled with fragile geology can cause landslides. Nevertheless, landslide occurrence may also be affected by drainage from roadside ditches. Taiwan Highway No. 21 in the Chi-Shan watershed and the forest roads located in Xiao-Lin Village, which failed during Typhoon Morakot in 2009, were selected for exploring landslide vulnerability. The Topographic Wetness Index (TWI) and Road Curvature (RC) were extracted along the roads to indicate sites vulnerable to slope failure. Surface runoff diverted by roadside ditches tends to continue in a straight line and can therefore undermine sites with high RC, causing downslope collapse. Sites with a higher mean and lower standard deviation of the Normalized Difference Vegetation Index (NDVI), derived from SPOT imagery taken in dry and/or rainy seasons, indicate vegetation stands that strongly buffer environmental stress because of their deeper soil layers and resistance to drought; once such sites collapse, they often produce huge volumes of debris. The Drainage Density (DD) index can be applied as a measure of geologic fragility on slope land. A road crossing sites with a higher mean and lower standard deviation of NDVI and/or higher DD deserves particular attention because of its high vulnerability to deep-seated landslides. This study focuses on extracting and analyzing environmental indices such as TWI, RC, NDVI, and DD to explore slope stability along mountain roads. The results can serve as references for the relevant authorities in understanding potential landslides along a road.
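    The TWI used above has a standard closed form, TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A minimal sketch (function and parameter names are ours; the flow-accumulation step that yields the contributing area is assumed done upstream):

```python
import math

def topographic_wetness_index(contributing_area_m2, cell_width_m, slope_deg):
    """TWI = ln(a / tan(beta)): a is the upslope contributing area per
    unit contour width, beta the local slope angle. Higher TWI means
    wetter, flatter, more convergent terrain."""
    a = contributing_area_m2 / cell_width_m
    tan_b = math.tan(math.radians(slope_deg))
    return math.log(a / max(tan_b, 1e-6))   # guard against flat cells
```

    Flatter cells with large upslope areas score highest, which is why high-TWI road segments flag likely drainage-related failure sites.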

  5. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects, which must in turn be matched by increases in data reduction capability. The Medusa Automatic Reduction System reduces data generated by multi-object spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. Simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.

  6. Automatic Extraction of JPF Options and Documentation

    NASA Technical Reports Server (NTRS)

    Luks, Wojciech; Tkachuk, Oksana; Buschnell, David

    2011-01-01

    Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF project options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.

  7. Automatic differential analysis of NMR experiments in complex samples.

    PubMed

    Margueritte, Laure; Markov, Petar; Chiron, Lionel; Starck, Jean-Philippe; Vonthron-Sénécheau, Catherine; Bourjot, Mélanie; Delsuc, Marc-André

    2018-06-01

    Liquid-state nuclear magnetic resonance (NMR) is a powerful tool for the analysis of complex mixtures of unknown molecules. This capacity has been used in many analytical approaches: metabolomics, identification of active compounds in natural extracts, and characterization of species. Such studies require the acquisition of many diverse NMR measurements on series of samples. Although acquisition can easily be performed automatically, the number of NMR experiments involved in these studies increases very rapidly, and this data avalanche requires resorting to automatic processing and analysis. We present here a program that allows the autonomous, unsupervised processing of a large corpus of 1D, 2D, and diffusion-ordered spectroscopy experiments from a series of samples acquired in different conditions. The program provides all the signal-processing steps, as well as peak-picking and bucketing of 1D and 2D spectra; the program and its components are fully available. In an experiment mimicking the search for a bioactive species in a natural extract, we use it for the automatic detection of small amounts of artemisinin added to a series of plant extracts and for the generation of the spectral fingerprint of this molecule. This program, called Plasmodesma, is a novel tool that should be useful for deciphering complex mixtures, particularly in the discovery of biologically active natural products from plant extracts, but also in drug discovery or metabolomics studies. Copyright © 2017 John Wiley & Sons, Ltd.

  8. New auto-segment method of cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Wang, Weijiang; Shen, Tingzhi; Dang, Hua

    2007-12-01

    A novel method for automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented in this paper, using an expert system that models human knowledge about the CH segmentation problem. The algorithm adopts a series of special steps and extracts easily overlooked CH features identified from statistics over a large number of real CH images, such as region area, region CT number, region smoothness, and statistical relationships between CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree modeling human knowledge about the CH segmentation problem is built, which ensures the rationality and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; the good accuracy and fast speed make it suitable for wide practical application.

  9. Automatic detection of typical dust devils from Mars landscape images

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji

    2018-02-01

    This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. These dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection of dust devils has not focused on these common types of dust devils, but on dust devils that appear on images to be irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection to be ground truth, the precision, recall and F-measure values are 0.77, 0.86, and 0.81, respectively.
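    The dual-parameter binarization idea can be sketched as below. The thresholds and the minimum-pixel noise check are illustrative assumptions, not the paper's tuned values: a strict threshold catches small bright dust devils, and a lenient one is kept only when enough pixels respond, for large obscure ones.

```python
import numpy as np

def binarize_two_ways(diff, t_bright=0.5, t_faint=0.1, min_faint_pixels=50):
    """Binarize a subtracted (frame-difference) image with two parameter
    sets and merge the results into one detection mask."""
    bright = diff > t_bright               # small but bright dust devils
    faint = diff > t_faint                 # large, low-contrast ones
    if faint.sum() < min_faint_pixels:     # too few pixels: likely noise
        faint = np.zeros_like(faint)
    return bright | faint
```

    In a full pipeline the mask would then go through connected-component filtering before precision/recall scoring against visual inspection.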

  10. Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network.

    PubMed

    Yang, Zhongliang; Huang, Yongfeng; Jiang, Yiran; Sun, Yuxi; Zhang, Yu-Jin; Luo, Pengcheng

    2018-04-20

    Automatically extracting useful information from electronic medical records and conducting disease diagnosis is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most existing systems are based on artificially constructed knowledge bases, with auxiliary diagnosis done by rule matching. In this study, we present a clinical intelligent decision approach based on Convolutional Neural Networks (CNN), which can automatically extract high-level semantic information from electronic medical records and then perform automatic diagnosis without artificial construction of rules or knowledge bases. We collected 18,590 real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model achieves 98.67% accuracy and 96.02% recall, which strongly supports that using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis is feasible and effective.

  11. Rapid automatic keyword extraction for information retrieval and analysis

    DOEpatents

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
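    The steps in the claim above (stop-word parsing, degree/frequency word scores, summed keyword scores) describe the RAKE algorithm and can be sketched as follows; the stop-word list is a small illustrative subset.

```python
import re
from collections import defaultdict

STOP = {"a", "an", "the", "of", "for", "and", "is", "are", "in", "on"}

def rake(text, top_n=3):
    """Minimal RAKE: split on stop words to form candidate phrases, score
    each word by degree/frequency, and score a phrase as the sum of its
    word scores; return the top_n phrases."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, cur = [], []
    for w in words:                       # delimit candidates at stop words
        if w in STOP:
            if cur:
                phrases.append(cur)
            cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    freq, degree = defaultdict(int), defaultdict(int)
    for p in phrases:
        for w in p:
            freq[w] += 1
            degree[w] += len(p) - 1       # co-occurrences within the phrase
    score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}
    ranked = sorted(((sum(score[w] for w in p), " ".join(p)) for p in phrases),
                    reverse=True)
    return [p for _, p in ranked[:top_n]]
```

    Because degree rewards words that appear in long candidate phrases, multi-word technical terms outrank isolated frequent words.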

  12. Integrating Information Extraction Agents into a Tourism Recommender System

    NASA Astrophysics Data System (ADS)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face several problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be worthwhile to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  13. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with classifications from a human-amended version of the OCR reports. The OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.

  14. Study on Stationarity of Random Load Spectrum Based on the Special Road

    NASA Astrophysics Data System (ADS)

    Yan, Huawen; Zhang, Weigong; Wang, Dong

    2017-09-01

    One method of assessing the quality of special roads uses a wheel force sensor; its essence is collecting the load spectrum of the car to reflect road quality. From the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and application ranges of different random processes differ greatly, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for follow-up modeling and feature extraction for the special road.
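    A crude numerical screen for (weak) stationarity checks whether windowed means and standard deviations stay near their global values. This is an illustration of the idea, not the paper's digital-characteristics analysis; the window count and tolerance are assumed values.

```python
import numpy as np

def looks_stationary(x, n_windows=4, tol=0.2):
    """Split the series into windows and require every per-window mean and
    standard deviation to lie within tol * global_std of the global values.
    Returns True for roughly stationary series, False for trending ones."""
    x = np.asarray(x, dtype=float)
    parts = np.array_split(x, n_windows)
    g_mean, g_std = x.mean(), x.std() + 1e-12
    means = np.array([p.mean() for p in parts])
    stds = np.array([p.std() for p in parts])
    return (np.all(np.abs(means - g_mean) < tol * g_std)
            and np.all(np.abs(stds - g_std) < tol * g_std))
```

    White noise passes this screen; a load record with a drifting mean (e.g. a long grade) fails it, signaling that stationary-process tools would be inappropriate.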

  15. Pole-Like Road Furniture Detection in Sparse and Unevenly Distributed Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Lehtomäki, M.; Oude Elberink, S.; Vosselman, G.; Puttonen, E.; Kukko, A.; Hyyppä, J.

    2018-05-01

    Pole-like road furniture detection has received much attention in recent years due to its traffic functionality. In this paper, we develop a framework to detect pole-like road furniture from sparse mobile laser scanning data. The framework is carried out in four steps. The unorganised point cloud is first partitioned. Then, after removing ground points, above-ground points are clustered and roughly classified. A slicing check in combination with cylinder masking is proposed to extract pole-like road furniture candidates. Pole-like road furniture objects are obtained after occlusion analysis in the last stage. The average completeness and correctness of pole-like road furniture detection in sparse and unevenly distributed mobile laser scanning data were above 0.83. This is comparable to the state of the art in pole-like road furniture detection in mobile laser scanning data of good quality, and is potentially of practical use in processing point clouds collected by autonomous driving platforms.

  16. Computer-aided screening system for cervical precancerous cells based on field emission scanning electron microscopy and energy dispersive x-ray images and spectra

    NASA Astrophysics Data System (ADS)

    Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi

    2016-10-01

    The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the micro level and to characterize a material by its elemental properties inspired this research, which developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consists of two parts: automatic feature extraction and classification. For the feature extraction part, an algorithm for extracting discriminant features from images and spectra of cervical cells in FE-SEM/EDX data was introduced. The system automatically extracts two types of features, based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features are extracted from the FE-SEM/EDX image using a gray-level co-occurrence matrix technique, while the FE-SEM/EDX spectral features are calculated from peak heights and the corrected area under the peaks. A discriminant analysis technique is employed to predict the cervical precancerous stage in three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
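    The gray-level co-occurrence matrix (GLCM) step can be sketched with a minimal hand-rolled computation for one pixel offset, plus two of the common Haralick features; the paper's full feature set and offsets are larger, and the quantization level here is an assumption.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one (dx, dy) pixel offset,
    normalized to probabilities, plus contrast and energy features.
    img must hold integer gray levels in [0, levels)."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count level pairs
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)   # local intensity variation
    energy = np.sum(P ** 2)               # textural uniformity
    return P, contrast, energy
```

    A uniform image has zero contrast and maximal energy; a fine checkerboard texture maximizes contrast, which is the discriminative behavior the classifier relies on.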

  17. PAH determination based on a rapid and novel gas purge-microsyringe extraction (GP-MSE) technique in road dust of Shanghai, China: Characterization, source apportionment, and health risk assessment.

    PubMed

    Zheng, Xin; Yang, Yi; Liu, Min; Yu, Yingpeng; Zhou, John L; Li, Donghao

    2016-07-01

    A novel cleanup technique termed as gas purge-microsyringe extraction (GP-MSE) was evaluated and applied for polycyclic aromatic hydrocarbon (PAH) determination in road dust samples. A total of 68 road dust samples covering almost the entire Shanghai area were analyzed for 16 priority PAHs using gas chromatography-mass spectrometry. The results indicate that the total PAH concentrations over the investigated sites ranged from 1.04μg/g to 134.02μg/g dw with an average of 13.84μg/g. High-molecular-weight compounds (4-6 rings PAHs) were significantly dominant in the total mass of PAHs, and accounted for 77.85% to 93.62%. Diagnostic ratio analysis showed that the road dust PAHs were mainly from the mixture of petroleum and biomass/coal combustions. Principal component analysis in conjunction with multiple linear regression indicated that the two major origins of road dust PAHs were vehicular emissions and biomass/fossil fuel combustions, which contributed 66.7% and 18.8% to the total road dust PAH burden, respectively. The concentration of benzo[a]pyrene equivalent (BaPeq) varied from 0.16μg/g to 24.47μg/g. The six highly carcinogenic PAH species (benz(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, dibenz(a,h)anthracene, and indeno(1,2,3-cd)pyrene) accounted for 98.57% of the total BaPeq concentration. Thus, the toxicity of PAHs in road dust was highly associated with high-molecular-weight compounds. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Space Archaeology for military-agricultural colonies (tuntian) on the ancient Silk Road, NW China

    NASA Astrophysics Data System (ADS)

    Luo, Lei; Wang, Xinyuan; Guo, Huadong; Liu, Chuansheng

    2017-04-01

    The ancient Silk Road, a pioneering achievement in the history of human civilization, contributed greatly to cultural exchange between China and the West. It is a precious cultural heritage that should be shared by all of humanity. Although there are countless archaeological sites along the ancient Silk Road, most existing research has focused on individual sites, lacking an overall understanding of the relationships between sites and their supporting environment. Space archaeology provides a new viewpoint for investigating, discovering, reconstructing, and documenting archaeological sites at different scales. The tuntian system was a state-promoted system of military-agricultural colonies, which originated in the Western Han dynasty (206 BC-9 AD). All the imperial dynasties in Chinese history adopted the practice of tuntian to cultivate and guard frontier areas, as an important state policy for developing border areas and consolidating frontier defence. This study describes the use of Chinese GF-1 imagery, LS-7 ETM+ data, and ASTER GDEM V2 products to uncover an ancient irrigated, canal-based tuntian system located in the Milan oasis adjacent to the ancient Kingdom of Loulan at the southern margin of the Tarim Basin. The GF-1 and LS-7 data were first processed with atmospheric and geometric correction and enhanced by Gram-Schmidt pansharpening. The linear archaeological traces of tuntian irrigation canals were extracted from the morphologically enhanced GF-1 PAN imagery using our proposed automatic method, which combines mathematical morphological processing with the Canny edge operator. Compared with manual extraction, the overall detection accuracy was better than 90%. In addition, the functions of the trunk, primary, secondary, and tertiary canals were analyzed, and the spatial extent of Milan's tuntian landscape was delineated with the help of the NDVI derived from the GF-1 multispectral imagery. The effective irrigated tuntian area was estimated to be 2,800 ha, and the maximum irrigated area during the region's most prosperous period was found to exceed 8,000 ha. The overall spatial pattern of Milan's tuntian landscape was explored using the patch-corridor-matrix model, and the features and functions of tuntian landscape elements in the Mountain-Oasis-Desert Ecosystem (MODES) were discussed in detail. Through detailed analysis of satellite remote sensing data, this study reconstructed a 3D view of Milan's tuntian agricultural landscape in a GIS. Milan's tuntian system reveals the basic organization pattern of the ancient tuntian system in Xinjiang and provides a solid foundation for understanding the military, cultural, economic, and geopolitical values of the ancient tuntian system for China's frontiers.
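    The morphological half of the canal-trace extraction can be illustrated with a binary opening by a linear structuring element, which keeps elongated structures and removes small blobs. This is a hand-rolled sketch for one orientation only; a real pipeline would also apply the Canny operator and test several orientations.

```python
import numpy as np

def erode(b, k):
    """Binary erosion by a flat horizontal line of length k (left-anchored)."""
    out = b.copy()
    for s in range(1, k):
        shifted = np.zeros_like(b)
        shifted[:, :-s] = b[:, s:]
        out &= shifted
    return out

def dilate(b, k):
    """Binary dilation by the same horizontal line element."""
    out = b.copy()
    for s in range(1, k):
        shifted = np.zeros_like(b)
        shifted[:, s:] = b[:, :-s]
        out |= shifted
    return out

def open_linear(b, k=5):
    """Opening (erosion then dilation): horizontal runs shorter than k
    pixels vanish; longer runs (candidate canal traces) survive intact."""
    return dilate(erode(b, k), k)
```

    Running the opening at multiple orientations and merging the results is the usual way such a filter is made rotation-tolerant.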

  19. On the Deployment and Noise Filtering of Vehicular Radar Application for Detection Enhancement in Roads and Tunnels.

    PubMed

    Kim, Young-Duk; Son, Guk-Jin; Song, Chan-Ho; Kim, Hee-Kang

    2018-03-11

    Recently, radar technology has attracted attention for the realization of an intelligent transportation system (ITS) to monitor, track, and manage vehicle traffic on the roads as well as adaptive cruise control (ACC) and automatic emergency braking (AEB) for driving assistance of vehicles. However, when radar is installed on roads or in tunnels, the detection performance is significantly dependent on the deployment conditions and environment around the radar. In particular, in the case of tunnels, the detection accuracy for a moving vehicle drops sharply owing to the diffuse reflection of radio frequency (RF) signals. In this paper, we propose an optimal deployment condition based on height and tilt angle as well as a noise-filtering scheme for RF signals so that the performance of vehicle detection can be robust against external conditions on roads and in tunnels. To this end, first, we gather and analyze the misrecognition patterns of the radar by tracking a number of randomly selected vehicles on real roads. In order to overcome the limitations, we implement a novel road watch module (RWM) that is easily integrated into a conventional radar system such as Delphi ESR. The proposed system is able to perform real-time distributed data processing of the target vehicles by providing independent queues for each object of information that is incoming from the radar RF. Based on experiments with real roads and tunnels, the proposed scheme shows better performance than the conventional method with respect to the detection accuracy and delay time. The implemented system also provides a user-friendly interface to monitor and manage all traffic on roads and in tunnels. This will accelerate the popularization of future ITS services.

  20. On the Deployment and Noise Filtering of Vehicular Radar Application for Detection Enhancement in Roads and Tunnels

    PubMed Central

    Kim, Young-Duk; Son, Guk-Jin; Song, Chan-Ho

    2018-01-01

    Recently, radar technology has attracted attention for the realization of an intelligent transportation system (ITS) to monitor, track, and manage vehicle traffic on the roads as well as adaptive cruise control (ACC) and automatic emergency braking (AEB) for driving assistance of vehicles. However, when radar is installed on roads or in tunnels, the detection performance is significantly dependent on the deployment conditions and environment around the radar. In particular, in the case of tunnels, the detection accuracy for a moving vehicle drops sharply owing to the diffuse reflection of radio frequency (RF) signals. In this paper, we propose an optimal deployment condition based on height and tilt angle as well as a noise-filtering scheme for RF signals so that the performance of vehicle detection can be robust against external conditions on roads and in tunnels. To this end, first, we gather and analyze the misrecognition patterns of the radar by tracking a number of randomly selected vehicles on real roads. In order to overcome the limitations, we implement a novel road watch module (RWM) that is easily integrated into a conventional radar system such as Delphi ESR. The proposed system is able to perform real-time distributed data processing of the target vehicles by providing independent queues for each object of information that is incoming from the radar RF. Based on experiments with real roads and tunnels, the proposed scheme shows better performance than the conventional method with respect to the detection accuracy and delay time. The implemented system also provides a user-friendly interface to monitor and manage all traffic on roads and in tunnels. This will accelerate the popularization of future ITS services. PMID:29534483

  1. Morphological feature extraction for the classification of digital images of cancerous tissues.

    PubMed

    Thiran, J P; Macq, B

    1996-10-01

    This paper presents a new method for automatic recognition of cancerous tissues from an image of a microscopic section. Based on shape and size analysis of the observed cells, the method provides the physician with non-subjective numerical values for four criteria of malignancy. The automatic approach is based on mathematical morphology, and more specifically on the use of geodesic transformations. This technique is used first to remove background noise from the image, then to segment the nuclei of the cells and analyze their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.

  2. Automatic extraction of building boundaries using aerial LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of above-ground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from non-building ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, an enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
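    The Hough-space search underlying the boundary reconstruction can be sketched with a minimal accumulator over (theta, rho). This shows the standard Hough transform for lines only, not the paper's constrained or enhanced variants; parameters are illustrative.

```python
import numpy as np

def hough_lines(points, img_size, n_theta=180):
    """Vote edge points into a (theta, rho) accumulator; the strongest
    cell gives the dominant line in normal form rho = x cos(t) + y sin(t)."""
    diag = int(np.ceil(np.hypot(*img_size)))      # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1  # one vote per theta
    t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t_idx], r_idx - diag
```

    For rectangular buildings, constraining the search to pairs of peaks 90 degrees apart (as the paper's constrained search does) greatly reduces false detections.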

  3. RUPOK - a web-map application for assessment of impacts of natural hazards on the transportation infrastructure

    NASA Astrophysics Data System (ADS)

    Bíl, Michal; Kubeček, Jan; Andrášik, Richard; Bílová, Martina; Sedoník, Jiří

    2016-04-01

    We present a web-map application (www.rupok.cz) designed for visualization of losses caused by natural hazards to the transportation infrastructure. The application is an output of a project in which we analyzed the direct, indirect, and network-wide impacts of the major natural disasters that have hit the Czech Republic since 1997. When a natural disaster hits a road network, the result is often a number of closed road sections. Some roads may be destroyed, but the majority are usually only closed and can be reopened after a short period of time. While the computation of direct losses (the cost of remedial works) is fairly simple, the evaluation of indirect and network-wide costs is much more difficult. We created a database of road and highway sections interrupted by natural processes, which covers the period since 1997 and is automatically updated. The database includes 6,828 records of interrupted roads on 2,879 road sections for the 1997-2014 period. Flooding caused 37% of the traffic interruptions, followed by fallen trees (22%), landslides (5%), and rockfalls (2%). The RUPOK webpage contains information on the probabilities of transportation section interruptions due to natural processes, as well as the impacts of possible interruptions. Direct losses are depicted as monetary values per road-section unit, calculated on the basis of official price tables for construction works. Indirect losses were calculated from the expense of the best alternative route and the traffic intensities affected by a road-section interruption.
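
    The best-alternative-route costing of indirect losses can be illustrated with a shortest-path detour computation; the network, edge costs, and traffic volume below are hypothetical:

```python
import heapq

def shortest_cost(graph, src, dst, closed=frozenset()):
    """Dijkstra over an adjacency dict; edges in `closed` are skipped."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if (u, v) in closed or (v, u) in closed:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# toy road network: edge weights are travel costs per vehicle
G = {
    "A": [("B", 4.0), ("C", 10.0)],
    "B": [("A", 4.0), ("D", 5.0)],
    "C": [("A", 10.0), ("D", 6.0)],
    "D": [("B", 5.0), ("C", 6.0)],
}
normal = shortest_cost(G, "A", "D")                       # via B: 9.0
detour = shortest_cost(G, "A", "D", closed={("B", "D")})  # via C: 16.0
daily_traffic = 1200                                       # hypothetical volume
indirect_loss_per_day = daily_traffic * (detour - normal)  # 8400.0
```

    The indirect loss of closing a section is then the detour cost difference multiplied by the traffic intensity that must reroute.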

  4. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG features is unsuitable for direct hardware implementation because it includes complicated operations. In this paper, an optimized design method and theoretical framework for real-time HOG feature extraction on an FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit based on a parallel pipeline structure was designed; second, the arctangent and square-root operations were simplified; finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that these computing units can perform HOG extraction within one pixel period.
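
    The computation being pipelined here, per-cell orientation histograms weighted by gradient magnitude, can be sketched in software (a NumPy reference sketch, not the FPGA design; the arctangent and square root appear here in full, which is exactly what the hardware version simplifies):

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Unsigned-gradient HOG: per-cell histograms of gradient
    orientation, weighted by gradient magnitude."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # centered differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)                         # square-root step
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # arctangent step
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(h // cell):
        for j in range(w // cell):
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            hists[i, j] = np.bincount(bin_idx[sl].ravel(),
                                      weights=mag[sl].ravel(),
                                      minlength=bins)
    return hists

# vertical edge -> purely horizontal gradient -> all energy in the 0-degree bin
img = np.zeros((8, 8))
img[:, 4:] = 255.0
H = hog_cell_histograms(img, cell=8)
```

    In the hardware version, the arctangent is typically replaced by comparing gx and gy against precomputed bin-boundary slopes, avoiding both transcendental operations.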

  5. Automatic Short Essay Scoring Using Natural Language Processing to Extract Semantic Information in the Form of Propositions. CRESST Report 831

    ERIC Educational Resources Information Center

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…

  6. North End Runway Material Extraction and Transport Environmental Assessment

    DTIC Science & Technology

    2006-05-01

    commercial providers, non-commercial providers, or a combination thereof. Material would be transported by public road, commercial rail, and/or barge... Terminal Redevelopment; NEPA: National Environmental Policy Act; NFA: No further action; NFS: Non-frost susceptible; NHPA: National Historic Preservation

  7. Bioavailability and biotransformation of the mutagenic component of particulate emissions present in motor exhaust samples.

    PubMed Central

    Vostal, J J

    1983-01-01

    The pharmacokinetic concepts of bioavailability and biotransformation are introduced into the assessment of public health risk from experimental data on the emission of potentially mutagenic and carcinogenic substances from motor vehicles. The inappropriateness of automatically applying, in the risk assessment process, analytical or experimental results obtained with extracts and procedures incompatible with the biological environment is illustrated by the discrepancy between short-term laboratory test predictions that wider use of diesel engines on our roads will increase the risk of respiratory cancer and the largely negative epidemiological evidence. The mutagenic activity of diesel particulates was minimal or negative when tested in extracts obtained with biological fluids, was substantially dependent on the presence of nitroreductase in the microbial tester strain, and disappeared completely 48 hr after the diesel particles had been phagocytized by alveolar macrophages. Similarly, long-term animal inhalation exposures to high concentrations of diesel particles did not induce the activity of hydrocarbon-metabolizing enzymes or a specific adverse immune response unless organic solvent extracts of diesel particles were administered intratracheally or parenterally in doses that greatly exceed the predicted levels of public exposure even by the year 2000. Furthermore, the suspected cancer-producing effects of inhaled diesel particles have thus far not been verified by experimental animal models or available long-term epidemiological observations. It is concluded that unless the biological accessibility of the active component of the pollutant, as well as its biotransformation and clearance by natural defense mechanisms, are considered, lung cancer risk assessment based solely on laboratory microbial tests will remain an arbitrary and unrealistic process and will not provide meaningful information on the potential health hazard of a pollutant. PMID:6186478

  8. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C-means (FCM) clustering and the wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the positions of the stent struts were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
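
    The fuzzy C-means step can be illustrated on 1-D intensities (a generic FCM sketch with toy values, not the paper's wavelet-augmented variant):

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-means on 1-D samples: alternate soft-membership and
    centroid updates until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    u = np.full((len(x), c), 1.0 / c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
        new = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
        if np.allclose(new, centers, atol=1e-9):
            centers = new
            break
        centers = new
    order = np.argsort(centers)
    return centers[order], u[:, order]

# dark lumen pixels vs bright wall pixels (toy intensities)
x = np.array([5.0, 7.0, 6.0, 8.0, 90.0, 95.0, 92.0, 88.0])
centers, u = fcm_1d(x)
# centers converge near the two intensity modes
```

    In the paper, the membership function is additionally shaped by wavelet-derived metrics so that strut echoes are pulled into their own cluster.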

  9. Realtime automatic metal extraction of medical x-ray images for contrast improvement

    NASA Astrophysics Data System (ADS)

    Prangl, Martin; Hellwagner, Hermann; Spielvogel, Christian; Bischof, Horst; Szkaliczki, Tibor

    2006-03-01

    This paper focuses on an approach for real-time metal extraction from x-ray images taken by modern x-ray machines such as C-arms. Such machines are used for vessel diagnostics, surgical interventions, as well as cardiology, neurology, and orthopedic examinations. They are very fast at taking images from different angles. For this reason, manual adjustment of contrast is infeasible, and automatic adjustment algorithms try to select the optimal radiation dose for contrast adjustment. Problems occur when metallic objects, e.g., a prosthesis or a screw, are in the absorption area of interest. In this case, the automatic adjustment mostly fails because the dark metallic objects lead the algorithm to overdose the x-ray tube. This outshining effect results in overexposed images and bad contrast. To overcome this limitation, metallic objects have to be detected and extracted from the images that are taken as input for the adjustment algorithm. In this paper, we present a real-time solution for extracting metallic objects from x-ray images. We explore the characteristic features of metallic objects in x-ray images and their distinction from bone fragments, which form the basis for object segmentation and classification. Subsequently, we present our edge-based real-time approach for successful and fast automatic segmentation and classification of metallic objects. Finally, experimental results on the effectiveness and performance of our approach, based on a large set of input images, are presented.

  10. Text feature extraction based on deep learning: a review.

    PubMed

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning instead acquires effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which depend mainly on the prior knowledge of designers and cannot take full advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This review first outlines the common methods used in text feature extraction, then surveys the deep learning methods frequently used in text feature extraction and their applications, and finally forecasts the application of deep learning to feature extraction.

  11. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, a 3D visualization is generated for each class in the design, utilizing intuitively meaningful 3D glyphs that represent the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.

  12. Automatic Identification & Classification of Surgical Margin Status from Pathology Reports Following Prostate Cancer Surgery

    PubMed Central

    D’Avolio, Leonard W.; Litwin, Mark S.; Rogers, Selwyn O.; Bui, Alex A. T.

    2007-01-01

    Prostate cancer removal surgeries that result in tumor found at the surgical margin, otherwise known as a positive surgical margin, have a significantly higher chance of biochemical recurrence and clinical progression. To support clinical outcomes assessment, a system was designed to automatically identify, extract, and classify key phrases from pathology reports describing this outcome. Heuristics and boundary detection were used to extract phrases. Phrases were then classified using support vector machines into one of three classes: 'positive (involved) margins,' 'negative (uninvolved) margins,' and 'not applicable or definitive.' A total of 851 key phrases were extracted from a sample of 782 reports produced between 1996 and 2006 at two major hospitals. Despite differences in reporting style, at least one sentence containing a diagnosis was extracted from 780 of the 782 reports (99.74%). Of the 851 sentences extracted, 97.3% contained diagnoses. Overall accuracy of automated classification of the extracted sentences into the three categories was 97.18%. PMID:18693818
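
    The phrase-classification step can be sketched with a linear bag-of-words classifier. The record above uses support vector machines; the sketch below swaps in a simple perceptron (the same linear decision rule, trained differently) to stay dependency-free, and the phrases are invented examples, not data from the study:

```python
from collections import defaultdict

def featurize(phrase):
    """Bag-of-words features for a margin-status phrase."""
    return set(phrase.lower().replace(",", " ").split())

def train_perceptron(data, epochs=20):
    """Mistake-driven perceptron over sparse binary features."""
    w = defaultdict(float)
    for _ in range(epochs):
        for phrase, y in data:        # y: +1 positive margin, -1 negative
            score = sum(w[f] for f in featurize(phrase))
            if y * score <= 0:        # update only on mistakes
                for f in featurize(phrase):
                    w[f] += y
    return w

def predict(w, phrase):
    return 1 if sum(w[f] for f in featurize(phrase)) > 0 else -1

# invented training phrases, not from the study
train = [
    ("tumor present at the surgical margin", 1),
    ("carcinoma involves the inked margin", 1),
    ("margins free of tumor", -1),
    ("surgical margins are negative for carcinoma", -1),
]
w = train_perceptron(train)
label = predict(w, "tumor present at the inked margin")
```

    An SVM would choose the maximum-margin weight vector over the same bag-of-words features; the decision rule at prediction time is identical in form.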

  13. The development of an automatically produced cholangiography procedure using the reconstruction of portal-phase multidetector-row computed tomography images: preliminary experience.

    PubMed

    Hirose, Tomoaki; Igami, Tsuyoshi; Koga, Kusuto; Hayashi, Yuichiro; Ebata, Tomoki; Yokoyama, Yukihiro; Sugawara, Gen; Mizuno, Takashi; Yamaguchi, Junpei; Mori, Kensaku; Nagino, Masato

    2017-03-01

    Fusion angiography using reconstructed multidetector-row computed tomography (MDCT) images and cholangiography using reconstructed MDCT images with a cholangiographic agent include an anatomical gap owing to the different periods of MDCT scanning. To overcome such gaps, we attempted to develop a cholangiography procedure that automatically reconstructs a cholangiogram from portal-phase MDCT images. The automatically produced cholangiography procedure utilized an original software program developed by the Graduate School of Information Science, Nagoya University. This program constructed 5 candidate biliary tracts and automatically selected one as the candidate for cholangiography. The clinical value of the procedure was estimated by comparison with manually produced cholangiography. Automatically produced cholangiograms were reconstructed for 20 patients who underwent MDCT scanning before biliary drainage for distal biliary obstruction. The procedure was able to extract the 5 main biliary branches and the 21 subsegmental biliary branches in 55% and 25% of the cases, respectively. The extent of aberrant connections and aberrant extractions outside the biliary tract was acceptable. Among all of the cholangiograms, 5 were clinically applicable with no correction, 8 were applicable with modest improvements, and 3 produced a correct cholangiogram before automatic selection. Although our procedure requires further improvement based on the analysis of additional patient data, it may represent an alternative to direct cholangiography in the future.

  14. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network.

    PubMed

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R

    2016-08-15

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important information sources in these systems is sensors. Sensors can be within vehicles or part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors.

  15. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network

    PubMed Central

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R.

    2016-01-01

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important information sources in these systems is sensors. Sensors can be within vehicles or part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. PMID:27537878

  16. Pothole Detection System Using a Black-box Camera.

    PubMed

    Jo, Youngtae; Ryu, Seungki

    2015-11-19

    Aging roads and poor road-maintenance systems result in a large number of potholes, and their number increases over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection. Recent automatic detection systems, such as those based on vibration or laser scanning, are insufficient to detect potholes correctly and inexpensively, owing to the unstable detection of vibration-based methods and the high cost of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environment of a black-box camera. Experimental results with the proposed system show that potholes can be detected accurately in real time.

  17. Automated analysis of long-term bridge behavior and health using a cyber-enabled wireless monitoring system

    NASA Astrophysics Data System (ADS)

    O'Connor, Sean M.; Zhang, Yilan; Lynch, Jerome; Ettouney, Mohammed; van der Linden, Gwen

    2014-04-01

    A worthy goal for the structural health monitoring (SHM) field is the creation of a scalable monitoring system architecture that abstracts many of the system details (e.g., sensors, data) from the structure owner, with the aim of providing "actionable" information that aids the owner's decision-making process. While a broad array of sensor technologies has emerged, the ability of sensing systems to generate large amounts of data has far outpaced advances in data management and processing. To reverse this trend, this study explores the creation of a cyber-enabled wireless SHM system for highway bridges. The system is designed from the top down by considering the damage mechanisms of concern to bridge owners and then tailoring the sensing and decision-support system around those concerns. The enabling element of the proposed system is a powerful data repository termed SenStore. SenStore is designed to combine sensor data with bridge metadata (e.g., geometric configuration, material properties, maintenance history, sensor locations, sensor types, inspection history). A wireless sensor network deployed to a bridge autonomously streams its measurement data to SenStore via a 3G cellular connection for storage. SenStore securely exposes the bridge metadata and sensor data to software clients that process the data to extract information relevant to the bridge owner's decision making. To validate the proposed cyber-enabled SHM system, it is implemented on the Telegraph Road Bridge (Monroe, MI), a traditional steel girder-concrete deck composite bridge located along a heavily travelled corridor in the Detroit metropolitan area. A permanent wireless sensor network has been installed to measure bridge accelerations, strains, and temperatures. System identification and damage-detection algorithms automatically mine the bridge response data stored in SenStore over an 18-month period. Tools like Gaussian Process (GP) regression are used to predict changes in bridge behavior as a function of environmental parameters. Based on these analyses, behavioral information pertinent to bridge management is autonomously extracted.
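
    The GP regression step, predicting a bridge response as a function of an environmental parameter, can be sketched in closed form. The temperature-to-frequency relationship below is synthetic, not data from the Telegraph Road Bridge, and the kernel parameters (ell, sf, noise) are illustrative:

```python
import numpy as np

def rbf(a, b, ell=5.0, sf=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.01):
    """Closed-form GP posterior mean and variance."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# hypothetical data: a modal frequency (Hz) drifting with deck temperature (C)
temp = np.linspace(-10.0, 30.0, 21)
freq = 2.5 - 0.004 * temp + 0.01 * np.sin(temp / 3.0)
mean, var = gp_predict(temp, freq, np.array([0.0, 20.0]))
# the posterior mean tracks the temperature trend; large residuals
# against it would flag behavior not explained by the environment
```

    In a monitoring context, the environmental model's residuals (observed minus predicted) are what carry the damage-relevant signal.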

  18. Displacement-dispersive liquid-liquid microextraction based on solidification of floating organic drop of trace amounts of palladium in water and road dust samples prior to graphite furnace atomic absorption spectrometry determination.

    PubMed

    Ghanbarian, Maryam; Afzali, Daryoush; Mostafavi, Ali; Fathirad, Fariba

    2013-01-01

    A new displacement-dispersive liquid-liquid microextraction method based on the solidification of a floating organic drop was developed for separation and preconcentration of Pd(II) in road dust and aqueous samples. This method involves two steps of dispersive liquid-liquid microextraction based on solidification. In Step 1, Cu ions react with diethyldithiocarbamate (DDTC) to form a Cu-DDTC complex, which is extracted by dispersive liquid-liquid microextraction based on a solidification procedure using 1-undecanol (extraction solvent) and ethanol (dispersive solvent). In Step 2, the extracted complex is first dispersed using ethanol in a sample solution containing Pd ions, then a dispersive liquid-liquid microextraction based on a solidification procedure is performed, creating an organic drop. In this step, Pd(II) replaces Cu(II) in the pre-extracted Cu-DDTC complex and goes into the extraction solvent phase. Finally, the Pd(II)-containing drop is introduced into a graphite furnace using a microsyringe, and Pd(II) is determined using atomic absorption spectrometry. Several factors that influence the extraction efficiency of Pd and its subsequent determination, such as extraction and dispersive solvent type and volume, pH of the sample solution, centrifugation time, and concentration of DDTC, are optimized.

  19. Development of an automatic cow body condition scoring using body shape signature and Fourier descriptors.

    PubMed

    Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I

    2013-01-01

    Body condition evaluation is a common tool to assess the energy reserves of dairy cows and to estimate their fatness or thinness. This study presents a computer-vision tool that automatically estimates a cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, a 1-dimensional vector of the Euclidean distances from each point on the normalized tailhead contour to the shape center. Two methods were applied to describe the cow signature and to reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with the manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual interference; and (2) Fourier descriptors of the cow signature result in improved performance (R² = 0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
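
    The signature and Fourier-descriptor features can be sketched directly (a generic sketch on synthetic contours, not the cow tailhead data):

```python
import numpy as np

def signature(contour):
    """Distances from each contour point to the shape centroid."""
    c = np.asarray(contour, dtype=float)
    center = c.mean(axis=0)
    return np.linalg.norm(c - center, axis=1)

def fourier_descriptors(sig, k=8):
    """First k FFT magnitudes of the signature; dropping phase makes
    the descriptor start-point invariant, and dividing by the DC term
    makes it scale invariant."""
    F = np.abs(np.fft.fft(sig))
    return F[1:k + 1] / F[0]

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)

# unit circle: constant signature, so all descriptors are ~0
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fd_circle = fourier_descriptors(signature(circle))

# 2:1 ellipse: the signature oscillates with period pi, so the
# energy shows up in the second harmonic
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
fd_ellipse = fourier_descriptors(signature(ellipse))
```

    A low-dimensional vector of these magnitudes is then a natural input to a regression model such as the partial least squares predictor used above.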

  20. Automatically extracting the significant aspects evaluated in game reviews

    NASA Astrophysics Data System (ADS)

    Fong, Chiok Hoong; Ng, Yen Kaow

    2017-04-01

    Understanding the criteria (or "aspects") that reviewers use to evaluate games is important to game developers and publishers, since it gives them important input on how to improve their products. Techniques for the extraction of such aspects have been studied by others, albeit not specifically for the gaming industry. In this paper we demonstrate an aspect extraction and analysis system specific to computer games. The system extracts game review texts from a list of known websites and automatically extracts candidate aspects from the review text using techniques from natural language processing and sentiment analysis. It then ranks the candidate aspects using the HITS algorithm. To evaluate the correctness of the extracted aspects, we used the system to calculate an overall score for each game by aggregating its highly rated aspects, weighted by the importance of the respective aspects. The aggregated scores resulted in a ranking of games, which we compared to a known ranking from a popular website; the rankings showed overall consistency, which suggests that the system has extracted valuable aspects from the reviews. Using the extracted aspects, our system also facilitates the analysis of a game by evaluating how review articles have rated its performance in these aspects.
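
    The HITS ranking step can be sketched as a small power iteration over a review-to-aspect mention matrix; the aspect names and matrix below are invented for illustration:

```python
import numpy as np

def hits(adj, iters=50):
    """Hubs-and-authorities power iteration on an adjacency matrix
    (rows = hubs/reviews, columns = authorities/aspects)."""
    n, m = adj.shape
    hub = np.ones(n)
    auth = np.ones(m)
    for _ in range(iters):
        auth = adj.T @ hub           # good aspects are cited by good reviews
        auth /= np.linalg.norm(auth)
        hub = adj @ auth             # good reviews cite good aspects
        hub /= np.linalg.norm(hub)
    return hub, auth

# hypothetical review -> aspect mention matrix
aspects = ["graphics", "story", "controls"]
A = np.array([
    [1, 1, 0],   # review 1 mentions graphics, story
    [1, 0, 0],   # review 2 mentions graphics
    [1, 1, 1],   # review 3 mentions all three
    [1, 0, 1],   # review 4 mentions graphics, controls
], dtype=float)
hub, auth = hits(A)
top = aspects[int(np.argmax(auth))]   # "graphics" is the top authority
```

    The authority scores give the aspect ranking; the hub scores simultaneously weight the reviews that mention the most authoritative aspects.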

  1. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    PubMed

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis, and this may in part be caused by the use of different DNA extraction methods. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics. DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive but requires less time. Almost all the false negatives were due to the difficulty of extracting M. tuberculosis DNA, since, in general, all the methods studied are capable of eliminating the inhibitory substances that block the amplification reaction. The DNA extraction method used affects the results of the diagnosis of pleural tuberculosis by molecular methods. DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  2. A knowledge engineering approach to recognizing and extracting sequences of nucleic acids from scientific literature.

    PubMed

    García-Remesal, Miguel; Maojo, Victor; Crespo, José

    2010-01-01

    In this paper we present a knowledge engineering approach to automatically recognizing and extracting genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. These are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated on a test set of 211 full-text articles in PDF format containing 3134 genetic sequences, achieving 87.76% precision and 97.70% recall. This method can facilitate different research tasks, including text mining, information extraction, and information retrieval over large collections of documents containing genetic sequences.
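
    The two-stage recognize-then-filter design can be sketched with a regular expression standing in for the finite state machine, plus one illustrative knowledge-based filter; the length and homopolymer thresholds below are assumptions, not the paper's rules:

```python
import re

# candidate recognizer: runs of at least 20 nucleotide codes,
# optionally broken by whitespace (e.g. across line wraps)
CANDIDATE = re.compile(r"[ACGTU](?:\s*[ACGTU]){19,}")

def extract_sequences(text):
    """Extract candidate sequences, then apply a simple knowledge-based
    refinement: collapse whitespace and discard near-homopolymer runs,
    which are likely false positives."""
    results = []
    for m in CANDIDATE.finditer(text):
        seq = re.sub(r"\s+", "", m.group())
        if max(seq.count(b) for b in "ACGTU") / len(seq) < 0.9:
            results.append(seq)
    return results

text = ("The primer ATGGCGTACG TTAGCCGATA CGATCGATCG was used; "
        "the control AAAAAAAAAAAAAAAAAAAAAA is a homopolymer.")
seqs = extract_sequences(text)
# one 30-base sequence survives; the poly-A run is filtered out
```

    The paper's knowledge base also merges sequences wrongly split by page layout and rejects capitalized prose that happens to match the alphabet; both are further rules in the same filter stage.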

  3. A new harvest operation cost model to evaluate forest harvest layout alternatives

    Treesearch

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald; Chao Chi Ting

    1997-01-01

    The authors develop a new model for harvest operation costs that can be used to evaluate stands for potential harvest. The model is based on felling, extraction, and access costs, and is unique in its consideration of the interaction between harvest area shapes and access roads. The scientists illustrate the model and evaluate the impact of stand size, volume, and road...

  4. Automated Assessment of Child Vocalization Development Using LENA

    ERIC Educational Resources Information Center

    Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-01-01

    Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…

  5. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    PubMed

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of overlap between chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy, and chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage. Copyright © 2014 Elsevier B.V. All rights reserved.
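
    The noise-estimation and peak-detection idea can be sketched as follows. This is a simplified stand-in (a MAD noise estimate from the first derivative, a rolling-median baseline as a crude drift correction, and apex picking on above-threshold regions), not the authors' algorithm:

```python
import numpy as np

def detect_peaks(y, k=5.0, win=25):
    """Estimate the noise level robustly from the first derivative
    (median absolute deviation), subtract a rolling-median baseline,
    and report the apex of each region rising more than k times the
    noise level above the baseline."""
    dy = np.diff(y)
    noise = 1.4826 * np.median(np.abs(dy - np.median(dy)))
    base = np.array([np.median(y[max(0, i - win):i + win + 1])
                     for i in range(len(y))])
    above = (y - base) > k * noise
    peaks, i = [], 0
    while i < len(y):
        if above[i]:
            j = i
            while j < len(y) and above[j]:
                j += 1
            if j - i >= 3:                    # ignore 1-2 sample noise islands
                peaks.append(i + int(np.argmax(y[i:j])))
            i = j
        else:
            i += 1
    return peaks, noise

# synthetic chromatogram: two Gaussian peaks on a slow drift plus noise
t = np.arange(300, dtype=float)
rng = np.random.default_rng(1)
signal = (10.0 * np.exp(-0.5 * ((t - 80) / 4) ** 2)
          + 6.0 * np.exp(-0.5 * ((t - 200) / 5) ** 2)
          + 0.002 * t + 0.02 * rng.standard_normal(t.size))
peaks, noise = detect_peaks(signal)
# apexes land near t = 80 and t = 200
```

    The local curve-fitting BDC of the paper replaces the rolling median with a fitted baseline between detected peak regions, which is what allows precise integration afterwards.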

  6. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, taken once per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrence of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited to cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.

  7. Automatic Author Profiling of Online Chat Logs

    DTIC Science & Technology

    2007-03-01

    Table-of-contents excerpt from the thesis: binary age classification experiments with a prior, run on all test data and on extracted test data pairing teens against the 20s, 30s, 40s, and 50s age groups.

  8. Automatic indexing of scanned documents: a layout-based approach

    NASA Astrophysics Data System (ADS)

    Esser, Daniel; Schuster, Daniel; Muthmann, Klemens; Berger, Michael; Schill, Alexander

    2012-01-01

    Archiving official written documents such as invoices, reminders, and account statements is becoming more and more important in both business and private contexts. Creating appropriate index entries for document archives, such as the sender's name, creation date, or document number, is tedious manual work. We present a novel approach to automatic document indexing based on generic positional extraction of index terms. For this purpose, we apply knowledge of document templates, stored in a common full-text search index, to find index positions that were successfully extracted in the past.

  9. The Extraction of Terrace in the Loess Plateau Based on radial method

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, F.

    2016-12-01

    Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating and automatically extracting them would simplify land use investigation. Existing extraction methods fall into visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have proposed several automatic methods. The Fourier transform method can recognize terraces and find their precise position in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer approach, but when applied to terrace extraction it produces fragmented polygons whose geological meaning is difficult to interpret. To locate terraces, we use high-resolution remote sensing imagery and extract and analyze the gray values of the pixels that radials pass through. The recognition process has three steps: first, roughly determine the positions of peak points by DEM analysis or manual selection; second, draw radials in all directions from each peak point; finally, extract the gray values of the pixels along each radial and analyze how they change to decide whether a terrace is present. To obtain accurate terrace positions, the algorithms were designed with full consideration of terrace discontinuity, extension direction, ridge width, the image processing algorithm, remote sensing image illumination, and other influencing factors.
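
    The core radial-sampling step can be sketched as follows (a minimal illustration with hypothetical function names; real terrace detection would add the ridge-periodicity analysis the abstract describes):

```python
import math

def sample_radial(image, cx, cy, angle_deg, max_r):
    """Collect pixel values along one radial from centre (cx, cy).

    `image` is a row-major 2D list; sampling stops at the image border.
    """
    theta = math.radians(angle_deg)
    values = []
    for r in range(max_r):
        x = int(round(cx + r * math.cos(theta)))
        y = int(round(cy + r * math.sin(theta)))
        if not (0 <= y < len(image) and 0 <= x < len(image[0])):
            break
        values.append(image[y][x])
    return values

def radial_profiles(image, cx, cy, n_dirs=8, max_r=100):
    # One gray-value profile per direction; terrace ridges show up as
    # quasi-periodic oscillations along the downslope radials.
    return [sample_radial(image, cx, cy, a * 360 / n_dirs, max_r)
            for a in range(n_dirs)]

img = [[10 * (x % 2) for x in range(6)] for _ in range(6)]  # striped "terraces"
print(sample_radial(img, 0, 0, 0, 6))  # -> [0, 10, 0, 10, 0, 10]
```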

  10. The effect of resurfacing on friction, speeds and safety on main roads in Finland.

    PubMed

    Leden, L; Hämäläinen, O; Manninen, E

    1998-01-01

    This study examined how resurfacing, and the first winter after resurfacing, affect the safety of main roads in Finland. It consisted of three substudies. In the first, changes in side friction and locked-wheel braking friction were measured on newly paved roads after resurfacing and again after the first winter; the effects of different resurfacing methods were also compared. All 50 road sections in the study were resurfaced in summer 1991 and measured with the friction truck of the Technical Research Centre of Finland (VTT). Friction was found to depend strongly on the type of resurfacing treatment. In general, surfaces with high friction coefficients after resurfacing lose friction over time while the lowest-friction surfaces gain it, and locked-wheel braking friction immediately after resurfacing can be undesirably low. The second substudy dealt with the effect of resurfacing on vehicle speeds. The analysis was based on automatic speed and weather measurements in 1991 and 1992 on roads resurfaced in summer 1991 and on a sample of comparison roads that had not been resurfaced. Speeds changed little on the non-resurfaced roads during the study period, but there is some indication that resurfacing increases average speeds, at least when the road is dry. Complete data were available for only one site, where average speeds on dry roads increased by 0.6 km/h after resurfacing and by a further 0.5 km/h after the first winter. The third substudy analysed fatal and injury accidents reported to the police on the resurfaced and comparison roads in the two years before resurfacing, the year of resurfacing, and the two years after. The accident results were similar to the speed findings: the most likely effect is a risk increase of somewhat less than 7% immediately after resurfacing and of 3 to 7% after the first winter. These results are, however, subject to large uncertainty because of the small number of accidents on the treated roads.

  11. Sensor data fusion for textured reconstruction and virtual representation of alpine scenes

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter

    2017-10-01

    The concept of remote sensing is to provide information about a wide-ranging area without making physical contact with it. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces, including overhangs, are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation, and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.

  12. Generation of 2D Land Cover Maps for Urban Areas Using Decision Tree Classification

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2014-09-01

    A 2D land cover map can be generated automatically and efficiently from high-resolution multispectral aerial images. First, a digital surface model is produced and each cell of the elevation model is supplemented with attributes. A decision tree classification is then applied to extract map objects such as buildings, roads, grassland, trees, hedges, and walls from this "intelligent" point cloud. The decision tree is derived from training areas whose borders are digitized on top of a false-colour orthoimage. The produced 2D land cover map with six classes is subsequently refined using image analysis techniques. The proposed methodology is described step by step. The classification, assessment, and refinement are carried out with the open source software "R"; the dense and accurate digital surface model is generated with the "Match-T DSM" program of the Trimble Company. A practical example of 2D land cover map generation is carried out using images of a multispectral medium-format aerial camera covering an urban area in Switzerland. The assessment of the produced land cover map is based on class-wise stratified sampling, where reference values of samples are determined by stereo-observation of false-colour stereopairs. The stratified statistical assessment of the produced six-class land cover map, based on 91 points per class, reveals a high thematic accuracy for the classes "building" (99 %, 95 % CI: 95 %-100 %) and "road and parking lot" (90 %, 95 % CI: 83 %-95 %). Other accuracy measures (overall accuracy, kappa value) and their 95 % confidence intervals are derived as well. The proposed methodology has a high potential for automation and fast processing and may be applied to other scenes and sensors.
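
    A decision tree over per-cell attributes can be illustrated with a hand-written toy (the attribute names and thresholds are hypothetical; the paper learns its tree from digitized training areas in R):

```python
def classify_cell(height, ndvi, is_linear):
    """Toy decision tree over per-cell attributes: height above ground,
    a vegetation index, and a linear-shape flag (hypothetical thresholds)."""
    if height > 2.5:                  # elevated objects
        return "tree" if ndvi > 0.3 else "building"
    if ndvi > 0.3:                    # low vegetation
        return "hedge" if is_linear else "grassland"
    return "wall" if is_linear else "road"

print(classify_cell(8.0, 0.6, False))   # -> tree
print(classify_cell(6.0, 0.1, False))   # -> building
print(classify_cell(0.1, 0.05, False))  # -> road
```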

  13. Spatiotemporal responses of dengue fever transmission to the road network in an urban area.

    PubMed

    Li, Qiaoxuan; Cao, Wei; Ren, Hongyan; Ji, Zhonglin; Jiang, Huixian

    2018-07-01

    Urbanization is one of the important factors driving the spread of dengue fever. Recent studies have found that the road network, as an urbanization factor, affects the distribution and spread of dengue epidemics, but research on the relationship between dengue distribution and the road network is limited, especially in highly urbanized areas. This study explores the temporal and spatial spread characteristics of dengue fever over the road network by observing a dengue epidemic in a southern Chinese city. Geographic information technology is used to extract the spatial locations of cases and to explore the temporal and spatial changes of the epidemic and its spatial relationship with the road network. The results showed a distinct "severe" period in the temporal course of the epidemic, with cases mainly concentrated in the vicinity of narrow roads and the epidemic spreading mainly along areas with a high-density road network. These results show that a high-density road network is an important factor in the direction and scale of a dengue epidemic. This information may be helpful for developing related epidemic prevention and control strategies. Copyright © 2018. Published by Elsevier B.V.

  14. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    PubMed

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, and it achieves promising performance.
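
    The shortest-path machinery at the heart of the method is standard Dijkstra on a pixel graph. A minimal sketch, assuming a simple per-pixel step cost instead of the paper's combined disparity-and-gradient weight:

```python
import heapq

def dijkstra_path(cost, src, dst):
    """Minimum-cost 4-connected path on a grid of per-pixel step costs.

    In the paper's setting the source is the vanishing point and edge
    weights combine disparity and gray-level gradient; here each step
    simply pays the cost of the pixel entered.
    """
    h, w = len(cost), len(cost[0])
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == dst:
            break
        if d > dist.get((y, x), float("inf")):
            continue  # stale heap entry
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    node, path = dst, [dst]
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(dijkstra_path(grid, (0, 0), (0, 2)))  # hugs the cheap left/bottom border
```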

  15. Tyre-road friction coefficient estimation based on tyre sensors and lateral tyre deflection: modelling, simulations and experiments

    NASA Astrophysics Data System (ADS)

    Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco

    2013-05-01

    The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, the portion of the lateral acceleration profile lying inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the orientation variation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.

  16. UFCN: a fully convolutional neural network for road extraction in RGB imagery acquired by remote sensing from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Kestur, Ramesh; Farooq, Shariq; Abdal, Rameen; Mehraj, Emad; Narasipura, Omkar; Mudigere, Meenavathi

    2018-01-01

    Road extraction in imagery acquired by low altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed-wing UAV with a high-spatial-resolution visible spectrum (RGB) camera as the payload. Deep learning techniques, particularly fully convolutional networks (FCNs), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture that comprises a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between to preserve local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to effectively increase its size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all performance parameters, and its prediction time is comparable with those of the SVM, 1D-CNN, and 2D-CNN models.
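
    Real-time augmentation of a small dataset is commonly done with random flips and rotations; a generic sketch in pure Python (the paper does not specify its exact transformations, so this is illustrative only):

```python
import random

def augment(image):
    """Randomly flip and/or rotate a row-major 2D image.

    A generic stand-in for on-the-fly augmentation: every pixel value is
    preserved, only the geometry changes, so ground-truth masks can be
    transformed identically.
    """
    if random.random() < 0.5:
        image = [row[::-1] for row in image]           # horizontal flip
    for _ in range(random.randrange(4)):               # 0-3 quarter turns
        image = [list(row) for row in zip(*image[::-1])]  # rotate 90 deg CW
    return image

random.seed(0)
img = [[1, 2], [3, 4]]
print(augment(img))
```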

  17. Small passenger car transmission test-Chevrolet 200 transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1980-01-01

    The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions which would enable them to design a more energy efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no load test conditions. The transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. Torque, speed and efficiency curves map the complete performance characteristics for the Chevrolet Model 200 transmission.

  18. Designation and verification of road markings detection and guidance method

    NASA Astrophysics Data System (ADS)

    Wang, Runze; Jian, Yabin; Li, Xiyuan; Shang, Yonghong; Wang, Jing; Zhang, JingChuan

    2018-01-01

    With the rapid development of China's space industry, digitization and intelligence are the trend of the future. This report presents foundational research on a guidance system based on the HSV color space, intended to support the design of an automatic navigation and parking system for the frock transport car and the infrared lamp homogeneity intelligent test equipment. The drive mode, steering mode, and navigation method were selected; for practicality, a front-wheel-steering chassis was chosen, with the steering mechanism driven by stepping motors and guided by machine vision. The steering mechanism was optimized and calibrated: a mathematical model was built and objective functions were constructed for it. The extraction method for the steering line was studied, and the motion controller was designed and optimized. The theory of the HSV and RGB color spaces and the analysis of the test results are discussed. Camera calibration is performed with the OpenCV library on Linux, and the guidance algorithm is designed in the HSV color space.
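
    The advantage of guiding in HSV space is that hue is largely invariant to illumination, unlike raw RGB values. A minimal thresholding sketch using only the Python standard library (the thresholds and test pixels are hypothetical; the actual system uses OpenCV):

```python
import colorsys

def hsv_mask(rgb_image, h_lo, h_hi, s_min=0.4, v_min=0.3):
    """Binary mask of pixels whose hue lies in [h_lo, h_hi] (hue in [0, 1)).

    No hue wrap-around handling; pale and dark pixels are rejected via
    the saturation and value floors.
    """
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hit = h_lo <= h <= h_hi and s >= s_min and v >= v_min
            mask_row.append(1 if hit else 0)
        mask.append(mask_row)
    return mask

# One yellow guidance-line pixel between two gray floor pixels.
row = [[(120, 120, 120), (250, 220, 30), (90, 90, 95)]]
print(hsv_mask(row, 0.10, 0.20))  # -> [[0, 1, 0]]
```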

  19. A novel technique for optimal integration of active steering and differential braking with estimation to improve vehicle directional stability.

    PubMed

    Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra

    2018-06-11

    This study deals with enhancing the directional stability of a vehicle turning at high speed on various road conditions using integrated active steering and differential braking systems. In this respect, minimal use of intentional asymmetric braking force to compensate for the drawbacks of active steering control, with only a small reduction of vehicle longitudinal speed, is desired. To this aim, a new optimal multivariable controller is analytically developed for the integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming scheme extracted from nonlinear phase plane analysis is also used for managing the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, required information about system states and parameters that cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Application of multivariate statistical analysis in the pollution and health risk of traffic-related heavy metals.

    PubMed

    Ebqa'ai, Mohammad; Ibrahim, Bashar

    2017-12-01

    This study aims to analyse the heavy metal pollutants in Jeddah, the second largest city in the Gulf Cooperation Council, with a population exceeding 3.5 million and heavy vehicle traffic. Ninety-eight street dust samples were collected seasonally from the six major roads as well as Jeddah Beach, and subsequently digested using the modified Leeds Public Analyst method. The heavy metals (Fe, Zn, Mn, Cu, Cd, and Pb) were extracted from the ash using methyl isobutyl ketone for solvent extraction and analysed by atomic absorption spectroscopy. Multivariate statistical techniques, principal component analysis (PCA) and hierarchical cluster analysis, were applied to these data. Heavy metal concentrations were ranked in the following descending order: Fe > Zn > Mn > Cu > Pb > Cd. To study the pollution and health risk from these heavy metals and estimate their effect on the environment, pollution indices, the integrated pollution index, enrichment factor, average daily dose, hazard quotient, and hazard index were all analysed. The PCA showed high levels of Zn, Fe, and Cd on Al Kurnish road, while these elements were consistently detected on King Abdulaziz and Al Madina roads. The study indicates that high levels of Zn and Pb pollution were recorded for major roads in Jeddah; six out of seven roads had high pollution indices. This study is the first step towards further investigations into current health problems in Jeddah, such as anaemia and asthma.

  1. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The main focus of this work is the quality control of multi-layered aeronautics composite materials and structures using Non-Destructive Testing. Image processing is applied to the 3-D images to extract useful information: areas of interest are extracted from the data, the detected areas are subjected to image analysis for more detailed investigation managed by a spatial model, and finally the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  2. Cerebral Oxygenation and Pain of Heel Blood Sampling Using Manual and Automatic Lancets in Premature Infants.

    PubMed

    Hwang, Mi-Jung; Seol, Geun Hee

    2015-01-01

    Heel blood sampling is a common but painful procedure for neonates. Automatic lancets have been shown to be more effective than manual lancets, with reduced pain and tissue damage, but the effects of lancet type on cortical activation have not yet been compared. This study aimed to compare the effects of manual and automatic lancets on cerebral oxygenation and pain during heel blood sampling in 24 premature infants with respiratory distress syndrome. Effectiveness was measured by assessing the number of pricks and squeezes and the duration of heel blood sampling. Pain responses were measured using the premature infant pain profile score, heart rate, and oxygen saturation (SpO2). Regional cerebral oxygen saturation (rScO2) was measured using near-infrared spectroscopy, and cerebral fractional tissue oxygen extraction was calculated from SpO2 and rScO2. Measures of effectiveness were significantly better with automatic than with manual lancing, including fewer heel punctures (P = .009) and squeezes (P < .001) and a shorter duration of heel blood sampling (P = .002). rScO2 was significantly higher (P = .013) and cerebral fractional tissue oxygen extraction after puncture significantly lower (P = .040) with automatic lancing. Premature infant pain profile scores during (P = .004) and after (P = .048) puncture were significantly lower in the automatic than in the manual lancet group. Automatic lancets for heel blood sampling in neonates with respiratory distress syndrome significantly reduced pain and enhanced cerebral oxygenation, suggesting that heel blood should be sampled routinely using an automatic lancet.

  3. [Determination of cadmium by HG-AFS in soil of virescent zone in Chengdu city].

    PubMed

    Chen, Yuan; Zeng, Ying; Wu, Hong-ji; Wang, Qin-er

    2008-12-01

    The different speciations of cadmium in soil samples from the Chengdu greenbelt were extracted by the Tessier sequential extraction method, and the contents of total cadmium and each speciation were determined by HG-AFS. Under optimized HG-AFS conditions, using 2% HCl as medium and 30 g x L(-1) KBH4 as reductive reagent, 1 mg x L(-1) Co2+ acting together with 10 g x L(-1) CH4N2S enhances the generation efficiency of the cadmium compound. The interference of coexisting elements in the soil with the determination of cadmium can be reduced by adding certain amounts of Na4P2O7, K2SO4 and BaCl2. The linear range is 0-10 mg x L(-1) with r=0.9991, and the detection limit is 0.016 mg x L(-1). The recovery is 97.80%-100.2% with an RSD of 1.93%. The analytical method is very sensitive and accurate. The distribution of the average percentages of the five speciations of cadmium in the experimental soil samples is: residual fraction (62.1%) > exchangeable fraction (11.7%) > Fe-Mn oxide-bound (9.71%) > carbonate-bound (4.17%) > organic-bound (3.47%). Although the residual fraction is the main speciation of cadmium in the soil, the content of the exchangeable fraction is relatively high, so the bioactivity of cadmium in the research area should be recognized. The concentration of cadmium exceeds the national standard in 19 soil samples, accounting for 86.4% of all soil samples. The soil from the Chengdu greenbelt located along the 1st, 2nd and 3rd ring roads was polluted to different degrees; the relative pollution magnitude is: 2nd ring road > 1st ring road > 3rd ring road.

  4. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    PubMed

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.

  6. An expert-based approach to forest road network planning by combining Delphi and spatial multi-criteria evaluation.

    PubMed

    Hayati, Elyas; Majnounian, Baris; Abdi, Ehsan; Sessions, John; Makhdoum, Majid

    2013-02-01

    Changes in forest landscapes resulting from road construction have increased remarkably in the last few years. On the other hand, the sustainable management of forest resources can only be achieved through a well-organized road network. In order to minimize the environmental impacts of forest roads, forest road managers must design the road network both efficiently and in an environmentally sound manner. Efficient planning methodologies can assist forest road managers in considering the technical, economic, and environmental factors that affect forest road planning. This paper describes a three-stage methodology: the Delphi method for selecting the important criteria, the Analytic Hierarchy Process for obtaining the relative importance of the criteria, and finally a spatial multi-criteria evaluation in a geographic information system (GIS) environment for identifying the lowest-impact road network alternative. Results of the Delphi method revealed that ground slope, lithology, distance from the stream network, distance from faults, landslide susceptibility, erosion susceptibility, geology, and soil texture are the most important criteria for forest road planning in the study area. The suitability map for road planning was then obtained by combining the fuzzy map layers of these criteria with respect to their weights. Nine road network alternatives were designed using PEGGER, an ArcView GIS extension, and their values were extracted from the suitability map. Results showed that the methodology was useful for identifying a road network that met environmental and cost considerations. Based on this work, we suggest that future forest road planning using multi-criteria evaluation and decision making be considered in other regions, where the road planning criteria identified in this study may be useful.

  7. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery with edge detectors derived from the ratio edge detector, which maintains a constant probability of false alarm. The Hough Transform (HT) is an elegant way of extracting global features such as curve segments from binary edge images, and the Randomized Hough Transform (RHT) drastically reduces the HT's computation time and memory usage. However, the RHT invalidates a large number of accumulator cells during random sampling. In this paper, we propose a new, almost fully automatic approach to extracting linear features from SAR imagery based on edge detection and the RHT. The improved method makes full use of the directional information of each candidate edge point to avoid invalid accumulation. Applied results agree well with the theoretical study, and the main linear features in the SAR imagery were extracted automatically. The method saves storage space and computation time, demonstrating its effectiveness and applicability.
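    As a sketch of the core idea, and not of the authors' directional-information refinement, a minimal Randomized Hough Transform samples random point pairs and votes for the (theta, rho) line through each pair:

```python
import math
import random
from collections import Counter

def rht_lines(points, n_samples=2000, theta_bin=math.pi / 180, rho_bin=2.0, seed=0):
    """Vote for (theta, rho) line parameters from randomly sampled point pairs."""
    rng = random.Random(seed)
    acc = Counter()
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if (x1, y1) == (x2, y2):
            continue  # degenerate pair defines no line
        # Normal angle of the line through the pair, folded into [0, pi).
        theta = math.atan2(x2 - x1, -(y2 - y1)) % math.pi
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        acc[(round(theta / theta_bin), round(rho / rho_bin))] += 1
    (ti, ri), votes = acc.most_common(1)[0]
    return ti * theta_bin, ri * rho_bin, votes

# 60 collinear points on the line y = x, plus 30 scattered outliers.
pts = [(float(i), float(i)) for i in range(60)]
noise = random.Random(1)
pts += [(noise.uniform(0, 60), noise.uniform(0, 60)) for _ in range(30)]
theta, rho, votes = rht_lines(pts)  # theta ~ 3*pi/4, rho ~ 0 for the line y = x
```

    The many sparsely populated accumulator cells created by outlier pairs are exactly the "invalid cells" the paper's directional test is designed to suppress.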

  8. Automated Extraction and Classification of Cancer Stage Mentions from Unstructured Text Fields in a Central Cancer Registry

    PubMed Central

    AAlAbdulsalam, Abdulrahman K.; Garvin, Jennifer H.; Redd, Andrew; Carter, Marjorie E.; Sweeny, Carol; Meystre, Stephane M.

    2018-01-01

    Cancer stage is one of the most important prognostic parameters in most cancer subtypes. The American Joint Committee on Cancer (AJCC) specifies criteria for staging each cancer type based on tumor characteristics (T), lymph node involvement (N), and tumor metastasis (M), known as the TNM staging system. Information related to cancer stage is typically recorded in clinical narrative text notes and other informal means of communication in the Electronic Health Record (EHR). As a result, human chart abstractors (known as certified tumor registrars) have to search through voluminous amounts of text to extract accurate stage information and resolve discordance between different data sources. This study proposes novel applications of natural language processing and machine learning to automatically extract and classify TNM stage mentions from records at the Utah Cancer Registry. Our results indicate that TNM stages can be extracted and classified automatically with high accuracy (extraction sensitivity: 95.5%–98.4%; classification sensitivity: 83.5%–87%). PMID:29888032
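    A hedged, rule-based approximation of the extraction step (the registry's actual system uses trained NLP models; this pattern and the sample note are invented simplifications):

```python
import re

# Simple pattern for TNM mentions such as "pT2 N0 M0" or "T1b N1 MX".
TNM_RE = re.compile(
    r"\b[ypc]?T(?:[0-4][a-d]?|is|x)\s*N(?:[0-3][a-c]?|x)\s*M(?:[01]|x)\b",
    re.IGNORECASE,
)

def extract_tnm(text):
    """Return all TNM stage mentions found in a free-text note."""
    return [m.group(0) for m in TNM_RE.finditer(text)]

note = "Path shows pT2 N0 M0 adenocarcinoma; prior note listed T1b N1 MX."
mentions = extract_tnm(note)  # ['pT2 N0 M0', 'T1b N1 MX']
```

    A real system must additionally classify each mention (clinical vs. pathologic, which primary it refers to) and resolve conflicts across documents, which is where the machine learning component comes in.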

  9. Automated Extraction and Classification of Cancer Stage Mentions from Unstructured Text Fields in a Central Cancer Registry.

    PubMed

    AAlAbdulsalam, Abdulrahman K; Garvin, Jennifer H; Redd, Andrew; Carter, Marjorie E; Sweeny, Carol; Meystre, Stephane M

    2018-01-01

    Cancer stage is one of the most important prognostic parameters in most cancer subtypes. The American Joint Committee on Cancer (AJCC) specifies criteria for staging each cancer type based on tumor characteristics (T), lymph node involvement (N), and tumor metastasis (M), known as the TNM staging system. Information related to cancer stage is typically recorded in clinical narrative text notes and other informal means of communication in the Electronic Health Record (EHR). As a result, human chart abstractors (known as certified tumor registrars) have to search through voluminous amounts of text to extract accurate stage information and resolve discordance between different data sources. This study proposes novel applications of natural language processing and machine learning to automatically extract and classify TNM stage mentions from records at the Utah Cancer Registry. Our results indicate that TNM stages can be extracted and classified automatically with high accuracy (extraction sensitivity: 95.5%-98.4%; classification sensitivity: 83.5%-87%).

  10. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability, and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of a standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply the proposed methods to two large, publicly available supercomputing logs and show that our technique achieves nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
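    The syntactic-structure step can be sketched by masking variable fields so that messages with the same skeleton fall into the same group; this is a simplified stand-in for the paper's textual clustering, with invented log lines:

```python
import re
from collections import defaultdict

def template(msg):
    """Mask variable fields so messages share a syntactic skeleton."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)  # hex identifiers
    msg = re.sub(r"\d+", "<NUM>", msg)             # counts, ids, timestamps
    return msg

# Invented log lines standing in for real supercomputer messages.
logs = [
    "node 17 failed at 10:42:01",
    "node 23 failed at 11:03:55",
    "memory error at address 0xdeadbeef",
    "memory error at address 0x1f2e3d",
]
groups = defaultdict(list)
for line in logs:
    groups[template(line)].append(line)
# two semantic groups emerge: node failures and memory errors
```

    Temporal correlation, the paper's second contribution, would then look at how closely in time messages from different groups tend to occur.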

  11. Inter-comparison of Methods for Extracting Subsurface Layers from SHARAD Radargrams over Martian polar regions

    NASA Astrophysics Data System (ADS)

    Xiong, S.; Muller, J.-P.; Carretero, R. C.

    2017-09-01

    Subsurface layers are preserved in the polar regions of Mars, representing a record of the planet's past climate changes. Orbital radar instruments, such as the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) onboard ESA's Mars Express (MEX) and the SHAllow RADar (SHARAD) onboard the Mars Reconnaissance Orbiter (MRO), transmit radar signals toward Mars and receive a set of return signals from these subsurface regions. Layering is a prominent subsurface feature, revealed by both MARSIS and SHARAD radargrams over both Martian polar regions. Automatic extraction of these subsurface layers is becoming increasingly important now that more than ten years of data have been archived. In this study, we investigate two different methods for extracting subsurface layers from SHARAD data and compare the results against manually delineated layers to determine which method is better suited to extracting these layers automatically.
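    A naive baseline frames what any such method must do: treat each along-track column of the radargram as a power profile and keep local maxima above a threshold. This is not either of the compared methods, and the radargram below is synthetic:

```python
import numpy as np

def pick_layers(radargram, threshold=0.5):
    """Per-column picking: indices of local power maxima above a threshold."""
    picks = []
    for col in radargram.T:  # columns = along-track traces
        idx = [i for i in range(1, len(col) - 1)
               if col[i] > threshold and col[i] >= col[i - 1] and col[i] > col[i + 1]]
        picks.append(idx)
    return picks

# Synthetic radargram: weak noise plus two flat reflectors at rows 20 and 55.
rng = np.random.default_rng(0)
rg = 0.1 * rng.random((80, 40))
rg[20, :] += 1.0
rg[55, :] += 1.0
layers = pick_layers(rg)  # each column should pick rows 20 and 55
```

    Real SHARAD layer extractors must additionally link picks across columns into continuous horizons and reject clutter, which is where the two compared methods differ.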

  12. A new blood vessel extraction technique using edge enhancement and object classification.

    PubMed

    Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin

    2013-12-01

    Diabetic retinopathy (DR) is increasing progressively, pushing the demand for automatic extraction and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. This paper therefore presents a new, computationally simple, and automatic method to extract retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The method has been tested on a set of retinal images collected from the DRIVE database, and a robust performance analysis was employed to evaluate its accuracy. The results reveal that the proposed method offers an average accuracy of about 97%, sensitivity of 99%, specificity of 86%, and predictive value of 98%, which is superior to various well-known techniques.
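    A hedged sketch of a pipeline of this shape, using a Laplacian-like template for edge enhancement, morphological cleanup, and a size rule as a crude stand-in for object classification (the parameters and test image are invented, not the paper's):

```python
import numpy as np
from scipy import ndimage

def extract_vessels(img, edge_thresh=0.3, min_size=20):
    """Edge-enhance with a fixed template, threshold, clean up, and keep
    only large connected objects (size used as a crude classifier)."""
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)  # edge template
    edges = np.abs(ndimage.convolve(img, k, mode="nearest"))
    mask = ndimage.binary_closing(edges > edge_thresh)  # noise removal
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labels, keep)

# Synthetic fundus stand-in: one thin bright "vessel" and one noise dot.
img = np.zeros((40, 40))
img[20, 5:35] = 1.0   # elongated vessel-like structure
img[5, 5] = 1.0       # isolated noise pixel
out = extract_vessels(img)  # keeps the vessel, drops the dot
```

    The paper's object-classification stage is more discriminating than a bare size filter, but the staging (enhance, threshold, morphology, classify) is the same.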

  13. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared to audio-visual channels such as facial expression or speech, physiological signals have so far received little attention for emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail, and their effectiveness is proven by emotion recognition results.
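    A minimal example of the kind of features involved, assuming one channel sampled at a known rate (the feature set and band limits below are illustrative, not the paper's):

```python
import numpy as np

def features(sig, fs):
    """Time-domain statistics plus relative subband powers for one channel."""
    sig = sig - sig.mean()
    zc = np.sum(np.diff(np.signbit(sig).astype(np.int8)) != 0)  # zero crossings
    feats = {"std": sig.std(),
             "rms": np.sqrt((sig ** 2).mean()),
             "zero_cross": int(zc)}
    spec = np.abs(np.fft.rfft(sig)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    total = spec.sum()
    for name, lo, hi in [("low", 0.0, 4.0), ("mid", 4.0, 8.0), ("high", 8.0, 16.0)]:
        band = (freqs >= lo) & (freqs < hi)
        feats["p_" + name] = spec[band].sum() / total
    return feats

# A 6 Hz sine sampled at 64 Hz: its energy should land in the 4-8 Hz band.
t = np.arange(0, 4, 1 / 64)
f = features(np.sin(2 * np.pi * 6 * t), fs=64)
```

    A full system would compute dozens of such features per channel and feed them to a multiclass classifier, typically after feature selection.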

  14. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security and to anthropic structures and activities, underscoring the need for research breakthroughs. Because bedform geometries and their dynamics are closely linked, one approach is to develop semi-automatic tools that extract their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms; 1D and 2D approaches cannot address the wide range of types and complexities of bedforms. In contrast, this work follows a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which heavily overprint any observations. To this end, an anisotropic filter is proposed that can discard these structures while still enhancing the wave ridges. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  15. Automatic detection of Martian dark slope streaks by machine learning using HiRISE images

    NASA Astrophysics Data System (ADS)

    Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui

    2017-07-01

    Dark slope streaks (DSSs) on the Martian surface are among the geologic features that remain active on Mars today. Detecting DSSs is a prerequisite for studying their appearance, morphology, and distribution in order to reveal the underlying geological mechanisms. In addition, increasingly massive amounts of high-resolution Mars data are now available, so an automatic method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method that combines interest region extraction with machine learning. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions, computes the Local Binary Pattern (LBP) feature, and trains a DSS classifier using the AdaBoost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results on 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of the proposed method is 92.4%, with a true positive rate of 79.1% and a false positive rate of 3.7%, indicating in particular that the method performs well at eliminating non-DSS regions.
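    The LBP-plus-AdaBoost recognition stage can be sketched as follows; the patches here are synthetic stand-ins for normalized MBR regions, and this minimal LBP omits the refinements a production detector would use:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def lbp_hist(patch):
    """Histogram of 8-neighbour Local Binary Pattern codes for a patch."""
    c = patch[1:-1, 1:-1]
    nbrs = [patch[:-2, :-2], patch[:-2, 1:-1], patch[:-2, 2:], patch[1:-1, 2:],
            patch[2:, 2:], patch[2:, 1:-1], patch[2:, :-2], patch[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(nbrs):
        codes |= (nb >= c).astype(np.uint8) << bit
    return np.bincount(codes.ravel(), minlength=256) / codes.size

# Synthetic stand-ins: smooth-gradient "streak" patches vs. pure-noise patches.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(60):
    streak = np.tile(np.linspace(0.2, 0.8, 16), (16, 1)) + 0.01 * rng.random((16, 16))
    X.append(lbp_hist(streak)); y.append(1)
    X.append(lbp_hist(rng.random((16, 16)))); y.append(0)
clf = AdaBoostClassifier(n_estimators=50).fit(X[:100], y[:100])
acc = clf.score(X[100:], y[100:])  # held-out accuracy on the toy data
```

    On real HiRISE regions the two classes are far less separable than these toy patches, which is why the paper compares several descriptors and learners.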

  16. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected-lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. Suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be accurately delineated by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced; the derived wavelet coefficients are used to compute the local features for FBS and to generate enhanced energy maps for the ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information also provides the initial conditions for fine delineation. Consequently, suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.

  17. Automated prediction of protein function and detection of functional sites from structure.

    PubMed

    Pazos, Florencio; Sternberg, Michael J E

    2004-10-12

    Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. Because the method extracts these features from the entire GO hierarchy, it is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites, and compared its success at function prediction with that of the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.

  18. An Overview of Biomolecular Event Extraction from Scientific Documents

    PubMed Central

    Vanegas, Jorge A.; Matos, Sérgio; González, Fabio; Oliveira, José L.

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed. PMID:26587051

  19. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
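    One morphological route to unsupervised landmark candidates, offered as a hedged sketch rather than the authors' exact operator chain, is a white top-hat that isolates small bright structures:

```python
import numpy as np
from scipy import ndimage

def landmark_candidates(band, size=5, k=3.0):
    """White top-hat (image minus its grey opening) plus a threshold yields
    centroids of small bright structures as candidate control features."""
    tophat = band - ndimage.grey_opening(band, size=(size, size))
    mask = tophat > k * tophat.std()
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Synthetic band: flat background with two bright point-like structures.
img = np.zeros((64, 64))
img[10, 12] = 5.0
img[40, 50] = 5.0
centers = landmark_candidates(img)  # one centroid per bright structure
```

    In the paper's workflow, windows around such candidates would then be matched across images via the wavelet-based robust matching step.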

  20. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generating overset grids for complex configurations has required a number of different, independently developed software utilities, with the results of each step visualized in a separate tool before moving on to the next. A new software tool called OVERGRID was developed that allows the user to perform all grid generation steps and visualization in one environment. OVERGRID provides grid diagnostic functions, such as surface tangent and normal checks, as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it contains hyperbolic surface and volume grid generation modules specifically suited to overset grid generation. This is the first time such a unified interface has existed for the creation of overset grids for complex geometries. New concepts for automatic overset surface grid generation around surface discontinuities are also briefly presented. Special control curves on the surface, such as intersection curves, sharp edges, and open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple-panel-network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology, and hyperbolic surface grids are then grown from the seam curves, automatically trimmed away from the seam corners.

  1. Convolution neural-network-based detection of lung structures

    NASA Astrophysics Data System (ADS)

    Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    1994-05-01

    Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. With the advent of digital radiology, digital image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm: the chest radiograph is composed of important anatomic structures, and without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, among others. With the boundaries of the heart area, rib spaces, rib positions, and rib cage clearly defined and extracted, this information can be used to facilitate CADx tasks on chest radiographs. In this paper, we present an automatic scheme for detecting the lung field in chest radiographs using a shift-invariant convolution neural network, together with a novel algorithm for smoothing lung boundaries.
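    The shift-invariance that makes such a network suitable for organ outlining comes from the convolution operation itself; a minimal "valid" 2-D correlation demonstrates it:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation, the shift-invariant operation at the
    core of a convolution neural network layer (illustrative only)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Shift invariance: shifting the input shifts the feature map identically.
rng = np.random.default_rng(0)
img = rng.random((12, 12))
k = rng.random((3, 3))
a = conv2d_valid(img, k)
b = conv2d_valid(np.roll(img, 2, axis=1), k)
# away from the wrapped border, b is a shifted by 2 columns
```

    Because the same kernel slides over the whole radiograph, a lung-boundary pattern is detected wherever it appears, without retraining per position.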

  2. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. To respect and preserve patient privacy, most medical images are stripped of protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have described algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Because a de-identification system is designed for end users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes while keeping the anatomic structures intact. First, exploiting the strong contrast between text and background, a region-variance-based algorithm is used to detect text regions. In post-processing, geometric constraints are applied to the detected regions to eliminate over-segmentation, e.g., lines and anatomic structures. A region-based level set method is then used to extract text from the detected regions. A GUI for the prototype application of the text detection and extraction system was implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computational optimization.
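    The region-variance detection step can be sketched with a local-variance map; the window size, threshold, and test image below are invented, and the geometric post-processing is omitted:

```python
import numpy as np
from scipy import ndimage

def text_region_mask(img, win=9, thresh=0.02):
    """Local variance via sliding-window means; burned-in text has much
    higher local contrast than smooth anatomy, so high variance flags it."""
    mean = ndimage.uniform_filter(img, win)
    mean_sq = ndimage.uniform_filter(img ** 2, win)
    variance = np.maximum(mean_sq - mean ** 2, 0.0)
    return variance > thresh

# Smooth anatomy-like ramp plus a high-contrast "label" in one corner.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) * 0.3
img[5:12, 40:60] = (np.arange(40, 60) % 2 == 0).astype(float)[None, :]
mask = text_region_mask(img)  # True inside the label, False on the anatomy
```

    The level set stage would then refine each flagged region down to the actual glyph boundaries before in-painting.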

  3. Earliest tea as evidence for one branch of the Silk Road across the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Lu, Houyuan; Zhang, Jianping; Yang, Yimin; Yang, Xiaoyan; Xu, Baiqing; Yang, Wuzhan; Tong, Tao; Jin, Shubo; Shen, Caiming; Rao, Huiyun; Li, Xingguo; Lu, Hongliang; Fuller, Dorian Q.; Wang, Luo; Wang, Can; Xu, Deke; Wu, Naiqin

    2016-01-01

    Phytoliths and biomolecular components extracted from ancient plant remains from Chang’an (Xi’an, the city where the Silk Road begins) and Ngari (Ali) in western Tibet, China, show that tea was grown 2,100 years ago to cater to the drinking habits of the Western Han Dynasty (207 BCE-9 CE), and was then carried toward central Asia by ca. 200 CE, several hundred years earlier than previously recorded. The earliest physical evidence of tea from both the Chang’an and Ngari regions suggests that a branch of the Silk Road across the Tibetan Plateau was established by the second to third century CE.

  4. Earliest tea as evidence for one branch of the Silk Road across the Tibetan Plateau.

    PubMed

    Lu, Houyuan; Zhang, Jianping; Yang, Yimin; Yang, Xiaoyan; Xu, Baiqing; Yang, Wuzhan; Tong, Tao; Jin, Shubo; Shen, Caiming; Rao, Huiyun; Li, Xingguo; Lu, Hongliang; Fuller, Dorian Q; Wang, Luo; Wang, Can; Xu, Deke; Wu, Naiqin

    2016-01-07

    Phytoliths and biomolecular components extracted from ancient plant remains from Chang'an (Xi'an, the city where the Silk Road begins) and Ngari (Ali) in western Tibet, China, show that tea was grown 2,100 years ago to cater to the drinking habits of the Western Han Dynasty (207 BCE-9 CE), and was then carried toward central Asia by ca. 200 CE, several hundred years earlier than previously recorded. The earliest physical evidence of tea from both the Chang'an and Ngari regions suggests that a branch of the Silk Road across the Tibetan Plateau was established by the second to third century CE.

  5. Multiuse trail intersection safety analysis: A crowdsourced data perspective.

    PubMed

    Jestico, Ben; Nelson, Trisalyn A; Potter, Jason; Winters, Meghan

    2017-06-01

    Real and perceived concerns about cycling safety are a barrier to increased ridership in many cities. Many people prefer to bike on facilities separated from motor vehicles, such as multiuse trails. However, due to underreporting, cities lack data on bike collisions, especially along greenways and multiuse paths. We used a crowdsourced cycling incident dataset (2005-2016) from BikeMaps.org for the Capital Regional District (CRD), BC, Canada. Our goal was to identify design characteristics associated with unsafe intersections between multiuse trails and roads. 92.8% of mapped incidents occurred between 2014 and 2016. We extracted both collision and near miss incidents at intersections from BikeMaps.org. We conducted site observations at 32 intersections where a major multiuse trail intersected with roads. We compared attributes of reported incidents at multiuse trail-road intersections to those at road-road intersections. We then used negative binomial regression to model the relationship between the number of incidents and the infrastructure characteristics at multiuse trail-road intersections. We found a higher proportion of collisions (38%, or 17/45 total reports) at multiuse trail-road intersections compared to road-road intersections (23%, or 62/268 total reports). A higher proportion of incidents resulted in an injury at multiuse trail-road intersections compared to road-road intersections (33% versus 15%). Cycling volumes, vehicle volumes, and trail sight distance were all associated with incident frequency at multiuse trail-road intersections. Supplementing traditional crash records with crowdsourced cycling incident data provides valuable evidence on cycling safety at intersections between multiuse trails and roads, and more generally, when conflicts occur between diverse transportation modes.
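    The reported proportions can be checked with a simple two-proportion z-test on the collision counts given in the abstract; note that the paper itself modeled incident counts with negative binomial regression, which this test does not reproduce:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic; |z| > 1.96 is significant at 5%."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Collisions as a share of reports: 17/45 at trail-road intersections
# versus 62/268 at road-road intersections (figures from the abstract).
z = two_prop_z(17, 45, 62, 268)
```

    Regression on counts, as the authors did, additionally adjusts for exposure (cycling and vehicle volumes), which a raw proportion comparison cannot.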

  6. The burden of road traffic crashes, injuries and deaths in Africa: a systematic review and meta-analysis

    PubMed Central

    Thompson, Jacqueline Y; Akanbi, Moses A; Azuh, Dominic; Samuel, Victoria; Omoregbe, Nicholas; Ayo, Charles K

    2016-01-01

    Objective: To estimate the burden of road traffic injuries and deaths for all road users and among different road user groups in Africa. Methods: We searched MEDLINE, EMBASE, Global Health, Google Scholar, and the websites of African road safety agencies and organizations for registry- and population-based studies and reports on road traffic injury and death estimates in Africa published between 1980 and 2015. Available data for all road users and by road user group were extracted and analysed, and we conducted a random-effects meta-analysis to estimate pooled rates of road traffic injuries and deaths. Findings: We identified 39 studies from 15 African countries. The estimated pooled rate of road traffic injury was 65.2 per 100 000 population (95% confidence interval, CI: 60.8–69.5) and the death rate was 16.6 per 100 000 population (95% CI: 15.2–18.0). Road traffic injury rates increased from 40.7 per 100 000 population in the 1990s to 92.9 per 100 000 population between 2010 and 2015, while death rates decreased from 19.9 per 100 000 population in the 1990s to 9.3 per 100 000 population between 2010 and 2015. The highest road traffic death rate was among occupants of motorized four-wheelers, at 5.9 per 100 000 population (95% CI: 4.4–7.4), closely followed by pedestrians at 3.4 per 100 000 population (95% CI: 2.5–4.2). Conclusion: The burden of road traffic injury and death is high in Africa. Since registry-based reports underestimate the burden, a systematic collation of road traffic injury and death data is needed to determine the true burden. PMID:27429490
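    The random-effects pooling behind such estimates is typically DerSimonian-Laird; a sketch with invented per-study rates and variances, not the review's actual data:

```python
import math

def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling: estimate between-study
    variance tau^2, reweight, and return the pooled estimate with 95% CI."""
    k = len(estimates)
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    ws = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(ws, estimates)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical injury rates per 100 000 from four studies, with variances.
pooled, ci = dersimonian_laird([40.7, 72.0, 65.2, 92.9], [4.0, 9.0, 6.0, 16.0])
```

    The random-effects model widens the confidence interval relative to a fixed-effect pooling whenever the per-study rates disagree more than sampling error alone would explain, as is the case across African countries here.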

  7. Speciation distribution and mass balance of copper and zinc in urban rain, sediments, and road runoff.

    PubMed

    Zuo, Xiaojun; Fu, Dafang; Li, He

    2012-11-01

    Heavy metal pollution in road runoff has caused widespread concern since the last century. However, there are few references on metal speciation across multiple environmental media (e.g., rain, road sediments, and road runoff). Our research targeted the investigation of metal speciation in rain, road sediments, and runoff; the analysis of speciation variation and the mass balance of metals among these media; the selection of the main factors by principal component analysis (PCA); and the establishment of equations to evaluate the impact of rain and road sediments on metals in road runoff. A five-step sequential extraction procedure was used for the chemical fractionation of metals, and flame atomic absorption spectrometry (Shimadzu AA-6800) was used to determine the concentration of each metal species, as well as the total and dissolved fractions. The dissolved fractions of both Cu and Zn were dominant in rain. The speciation distribution of Zn differed from that of Cu in road sediments, while the two were similar in runoff. The carbonate-bound fractions of both Cu and Zn in road sediments were prone to dissolution by rain. The levels of Cu and Zn in runoff were not markedly influenced by rain, but were significantly influenced by road sediments. The masses of both Cu and Zn among rain, road sediments, and road runoff approximately satisfied the mass balance equation for all rainfall patterns. Five principal factors were selected for the metal regression equations based on PCA: rainfall, average rainfall intensity, antecedent dry period, total suspended particles, and temperature. The established regression equations could be used to predict the effect of road runoff on receiving environments.

  8. Impacts of road salts on leaching behavior of lead contaminated soil.

    PubMed

    Wu, Jingjing; Kim, Hwidong

    2017-02-15

    Research was conducted to explore the effects of road salts on lead leaching from lead-contaminated soil samples collected in an old residential area in Erie, PA. The synthetic precipitation leaching procedure (SPLP) test was employed to evaluate lead leaching from one of the lead-contaminated soils in the presence of various levels of road salts (5%, 10%, 20%, 30%, and 40%). The results showed that lead leaching increased dramatically with road salt content as a result of the formation of lead-chloride complexes, but different leaching patterns were observed for NaCl- and CaCl2-based road salts at high salt contents (>20%). Additional leaching tests with 30% road salts and different soil samples showed a variety of leaching patterns across soils. Sequential extraction of each soil sample showed that a high fraction of organic-matter-bound lead was associated with lead contamination. Observations showed that the higher the fraction of organic-matter-bound lead in a soil, the greater the effect of calcium in reducing lead leaching.

  9. Singlet Oxygen Production by Illuminated Road Dust and Winter Street Sweepings

    NASA Astrophysics Data System (ADS)

    Schneider, S.; Gan, L.; Gao, S.; Hoy, K. S.; Kwasny, J. R.; Styler, S. A.

    2017-12-01

    Road dust is an important urban source of primary particulate matter, especially in cities where sand and other traction materials are applied to roadways in winter. Although the composition and detrimental health effects of road dust are reasonably well characterized, little is currently known regarding its chemical behaviour. Motivated by our previous work, in which we showed that road dust is a photochemical source of singlet oxygen (1O2), we investigated 1O2 production by bulk winter street sweepings and by road dust collected in a variety of urban, industrial, and suburban locations in both autumn and spring. In all cases, the production of 1O2 by road dust was greater than that by Arizona test dust and desert-sourced dust, which highlights the unique photochemical environment afforded by this substrate. Mechanistically, we observed correlations between 1O2 production and the UV absorbance properties of dust extracts, which suggests the involvement of chromophoric dissolved organic matter in the observed photochemistry. Taken together, this work provides evidence that road dust-mediated photochemistry may influence the environmental lifetime of pollutants that react via 1O2-mediated pathways, including polycyclic aromatic hydrocarbons.

  10. A statistical analysis of the impact of advertising signs on road safety.

    PubMed

    Yannis, George; Papadimitriou, Eleonora; Papantoniou, Panagiotis; Voulgari, Chrisoula

    2013-01-01

    This research aims to investigate the impact of advertising signs on road safety. An exhaustive review of international literature was carried out on the effect of advertising signs on driver behaviour and safety. Moreover, a before-and-after statistical analysis with control groups was applied to several road sites with different characteristics in the Athens metropolitan area, in Greece, in order to investigate the correlation between the placement or removal of advertising signs and the related occurrence of road accidents. Road accident data for the 'before' and 'after' periods on the test sites and the control sites were extracted from the database of the Hellenic Statistical Authority, and the selected 'before' and 'after' periods vary from 2.5 to 6 years. The statistical analysis shows no correlation between road accidents and advertising signs at any of the nine sites examined, as the estimated safety effects are non-significant at the 95% confidence level. This can be explained by the fact that, at the examined road sites, drivers are overloaded with information (traffic signs, direction signs, shop signage, pedestrians and other vehicles, etc.), so the additional information load from advertising signs may not further distract them.
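
    The core of a before-and-after analysis with a control group can be sketched in a few lines. The crash counts below are hypothetical, and this naive ratio-of-ratios estimate (with a Poisson-based confidence interval) omits the empirical-Bayes corrections used in full road-safety studies:

```python
import math

def safety_effect(before_t, after_t, before_c, after_c, z=1.96):
    """Naive before-after crash-ratio estimate with a comparison group.

    Effect < 1 suggests fewer crashes after treatment; a CI containing 1
    means the change is not statistically significant. Counts are treated
    as Poisson, so var(log effect) ~ sum of reciprocal counts.
    """
    effect = (after_t / before_t) / (after_c / before_c)
    se_log = math.sqrt(1 / before_t + 1 / after_t + 1 / before_c + 1 / after_c)
    lo = effect * math.exp(-z * se_log)
    hi = effect * math.exp(z * se_log)
    return effect, (lo, hi)

# Hypothetical site: 40 -> 34 crashes where signs were installed,
# 50 -> 45 crashes at comparison sites over the same periods.
eff, (lo, hi) = safety_effect(40, 34, 50, 45)
```

    With these made-up counts the interval straddles 1, i.e. the apparent reduction is non-significant, which mirrors the kind of conclusion drawn in the abstract.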

  11. The effect of road and environmental characteristics on pedestrian hit-and-run accidents in Ghana.

    PubMed

    Aidoo, Eric Nimako; Amoh-Gyimah, Richard; Ackaah, Williams

    2013-04-01

    The number of pedestrians who have died as a result of being hit by vehicles has increased in recent years, in addition to vehicle passenger deaths. Many pedestrians who were involved in road traffic accidents died as a result of the driver leaving the struck pedestrian unattended at the scene of the accident. This paper seeks to determine the effect of road and environmental characteristics on pedestrian hit-and-run accidents in Ghana. Using pedestrian accident data extracted from the National Road Traffic Accident Database at the Building and Road Research Institute (BRRI) of the Council for Scientific and Industrial Research (CSIR), Ghana, a binary logit model was employed in the analysis. The results from the estimated model indicate that fatal accidents, unclear weather, nighttime conditions, and straight and flat road sections without medians and junctions significantly increase the likelihood that the vehicle driver will leave the scene after hitting a pedestrian. Thus, integrating median separation and speed humps into road design and construction and installing street lights will help to curb the problem of pedestrian hit-and-run accidents in Ghana. Copyright © 2012 Elsevier Ltd. All rights reserved.
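
    A binary logit model of the kind used here relates 0/1 outcomes to predictors through the logistic function. The sketch below fits one by gradient ascent on a few hypothetical crash rows (real analyses use dedicated statistical packages and report significance, which this toy omits):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.5, steps=2000):
    """Fit a binary logit model by gradient ascent on the log-likelihood.

    X: feature rows (without intercept); y: 0/1 labels.
    Returns coefficients [intercept, b1, b2, ...].
    """
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(steps):
        grad = [0.0] * (k + 1)
        for row, label in zip(X, y):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], row)))
            err = label - p
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * row[j]
        w = [wi + lr * g / n for wi, g in zip(w, grad)]
    return w

# Hypothetical rows: [nighttime, no_median]; label 1 = hit-and-run
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0], [1, 1], [0, 1]]
y = [1, 1, 0, 0, 1, 0, 1, 0]
w = fit_logit(X, y)
p_night_no_median = sigmoid(w[0] + w[1] * 1 + w[2] * 1)
```

    A positive coefficient on a predictor (here, nighttime) raises the modeled probability of a hit-and-run, which is how the abstract's "significantly increase the likelihood" findings are read off an estimated logit model.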

  12. Analysis of separation test for automatic brake adjuster based on linear Radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability to resist noise and interference because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
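
    The idea of fitting a curve "in several parts" and locating feature points where the fitted lines meet can be sketched with a discrete (slope, intercept) scoring, a toy stand-in for the linear Radon transformation. The piecewise-linear test curve, parameter grids, and knee location below are all hypothetical:

```python
def radon_lines(points, slopes, intercepts, tol=0.2):
    """Discrete linear Radon-style scoring over a point set.

    Each (slope, intercept) cell counts the points lying within `tol`
    of the line y = slope*x + intercept; cells are returned sorted by
    score, best first.
    """
    cells = []
    for m in slopes:
        for b in intercepts:
            score = sum(1 for x, y in points if abs(y - (m * x + b)) <= tol)
            cells.append((score, m, b))
    return sorted(cells, reverse=True)

def intersection(m1, b1, m2, b2):
    """Intersection of two non-parallel lines: the fitted feature point."""
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Piecewise-linear test curve with a knee at x = 5: y = x for x <= 5, then y = 5
points = [(x / 2, min(x / 2, 5.0)) for x in range(0, 21)]
ranked = radon_lines(points, slopes=[0.0, 0.5, 1.0], intercepts=[0.0, 2.5, 5.0])
(_, m1, b1), (_, m2, b2) = ranked[0], ranked[1]
knee = intersection(m1, b1, m2, b2)
```

    The two strongest parameter-space cells correspond to the two segments of the curve, and their intersection recovers the inflection (knee) point even when individual samples are noisy, which is the robustness the abstract highlights.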

  13. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    NASA Astrophysics Data System (ADS)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
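
    The mean shift step at the heart of such methods moves a point to the local mean of nearby samples until it settles on a density mode. The paper's improved variant operates on images; below is a one-dimensional sketch on hypothetical grayscale samples, where the two modes correspond to two grain interiors:

```python
def mean_shift_1d(x, data, bandwidth=10.0, iters=50):
    """Shift a point to the mean of samples within `bandwidth`
    (flat kernel) until it converges on a density mode."""
    for _ in range(iters):
        window = [v for v in data if abs(v - x) <= bandwidth]
        if not window:
            break
        m = sum(window) / len(window)
        if abs(m - x) < 1e-6:
            break
        x = m
    return x

# Hypothetical grayscale samples: two grain interiors (~60 and ~200)
pixels = [58, 60, 62, 59, 61, 198, 200, 202, 199, 201]
mode_dark = mean_shift_1d(55.0, pixels)
mode_bright = mean_shift_1d(205.0, pixels)
```

    Pixels that converge to the same mode are grouped into one region; grain boundaries then emerge where neighboring pixels converge to different modes.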

  14. Automatic concept extraction from spoken medical reports.

    PubMed

    Happe, André; Pouliquen, Bruno; Burgun, Anita; Cuggia, Marc; Le Beux, Pierre

    2003-07-01

    The objective of this project is to investigate methods whereby a combination of speech recognition and automated indexing methods substitutes for current transcription and indexing practices. We based our study on existing speech recognition software programs and on NOMINDEX, a tool that extracts MeSH concepts from medical text in natural language and that is mainly based on a French medical lexicon and on the UMLS. For each document, the process consists of three steps: (1) dictation and digital audio recording, (2) speech recognition, and (3) automatic indexing. The evaluation consisted of a comparison between the set of concepts extracted by NOMINDEX after the speech recognition phase and the set of keywords manually extracted from the initial document. The method was evaluated on a set of 28 patient discharge summaries extracted from the MENELAS corpus in French, corresponding to in-patients admitted for coronarography. The overall precision was 73% and the overall recall was 90%. Indexing errors were mainly due to word sense ambiguity and abbreviations. A specific issue was the fact that the standard French translation of MeSH terms lacks diacritics. A preliminary evaluation of speech recognition tools showed that the rate of accurate recognition was higher than 98%. Only 3% of the indexing errors were generated by inadequate speech recognition. We discuss several areas to focus on to improve this prototype. However, the very low rate of indexing errors due to speech recognition errors highlights the potential benefits of combining speech recognition techniques and automatic indexing.
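
    The evaluation described above reduces to set-based precision and recall between automatically extracted concepts and manual keywords. A minimal sketch, with hypothetical concepts rather than the study's actual index terms:

```python
def precision_recall(extracted, reference):
    """Compare automatically extracted concepts against manual keywords."""
    extracted, reference = set(extracted), set(reference)
    tp = len(extracted & reference)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical document: MeSH-like concepts
auto = {"coronarography", "myocardial infarction", "aspirin", "angina"}
manual = {"coronarography", "myocardial infarction", "angina",
          "hypertension", "angioplasty"}
p, r = precision_recall(auto, manual)
```

    Precision penalizes spurious extracted concepts ("aspirin" here), while recall penalizes manual keywords the system missed, matching the 73%/90% figures the abstract reports at corpus level.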

  15. Automation of liquid-liquid extraction-spectrophotometry using prolonged pseudo-liquid drops and handheld CCD for speciation of Cr(VI) and Cr(III) in water samples.

    PubMed

    Chen, Wen; Zhong, Guanping; Zhou, Zaide; Wu, Peng; Hou, Xiandeng

    2005-10-01

    A simple spectrophotometric system, based on a prolonged pseudo-liquid drop device as an optical cell and a handheld charge coupled device (CCD) as a detector, was constructed for automatic liquid-liquid extraction and spectrophotometric speciation of trace Cr(VI) and Cr(III) in water samples. A tungsten halogen lamp was used as the light source, and a laboratory-constructed T-tube with two open ends was used to form the prolonged pseudo-liquid drop inside the tube. In the medium of perchloric acid solution, Cr(VI) reacted with 1,5-diphenylcarbazide (DPC); the formed complex was automatically extracted into n-pentanol, with a preconcentration ratio of about 5. The organic phase with the extracted chromium complex was then pumped through the optical cell for absorbance measurement at 548 nm. Under optimal conditions, the calibration curve was linear in the range of 7.5-350 microg L(-1), with a correlation coefficient of 0.9993. The limit of detection (3sigma) was 7.5 microg L(-1). The basis of the speciation analysis is that Cr(III) species cannot react with DPC but can be oxidized to Cr(VI) prior to determination. The proposed speciation analysis was sensitive, yet simple, labor-effective, and cost-effective. It has been preliminarily applied for the speciation of Cr(VI) and Cr(III) in spiked river and tap water samples. It can also be used for other automatic liquid-liquid extraction-spectrophotometric determinations.
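
    The calibration-curve and 3-sigma detection-limit arithmetic used in such determinations can be sketched directly. The absorbance and blank readings below are hypothetical, not the paper's data, so the resulting LOD differs from the reported 7.5 microg/L:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: Cr(VI) concentration (ug/L) vs. absorbance at 548 nm
conc = [10, 50, 100, 200, 350]
absorb = [0.012, 0.055, 0.108, 0.212, 0.368]
slope, intercept = linfit(conc, absorb)

# Detection limit: 3 * standard deviation of blank replicates / slope
blank = [0.0008, 0.0012, 0.0010, 0.0009, 0.0011]
mb = sum(blank) / len(blank)
sigma = math.sqrt(sum((b - mb) ** 2 for b in blank) / (len(blank) - 1))
lod = 3 * sigma / slope
```

    The slope converts absorbance into concentration, so a noisier blank (larger sigma) or a flatter calibration line (smaller slope) both raise the detection limit.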

  16. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    PubMed

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing images, most popular for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from multiple images representing one swallowing period, which results in a heavy workload for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting a two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies and 19 videos from ten healthy controls were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually extracted and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.
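
    The trajectory-agreement figures reported above are Pearson correlation coefficients between manual and automatic coordinate sequences. A self-contained sketch, with hypothetical hyoid Y-coordinates standing in for real annotations:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length trajectories."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Hypothetical hyoid Y-coordinates per frame: manual annotation vs. tracker output
manual_y = [100, 102, 107, 115, 124, 130, 128, 121, 112, 104]
auto_y = [101, 103, 106, 116, 123, 131, 127, 122, 111, 105]
r = pearson(manual_y, auto_y)
```

    Values near 1, like the 0.986-0.992 range in the abstract, indicate that the automatic tracker reproduces the shape of the manually annotated trajectory almost exactly.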

  17. Multitask assessment of roads and vehicles network (MARVN)

    NASA Astrophysics Data System (ADS)

    Yang, Fang; Yi, Meng; Cai, Yiran; Blasch, Erik; Sullivan, Nichole; Sheaff, Carolyn; Chen, Genshe; Ling, Haibin

    2018-05-01

    Vehicle detection in wide area motion imagery (WAMI) has drawn increasing attention from the computer vision research community in recent decades. In this paper, we present a new architecture for vehicle detection on roads using a multi-task network, which is able to detect and segment vehicles, estimate their pose, and at the same time yield road isolation for a given region. The multi-task network consists of three components: 1) vehicle detection, 2) vehicle and road segmentation, and 3) detection screening. The segmentation and detection components share the same backbone network and are trained jointly in an end-to-end way. Unlike background subtraction or frame differencing based methods, the proposed Multitask Assessment of Roads and Vehicles Network (MARVN) method can detect vehicles which are slowing down, stopped, and/or partially occluded in a single image. In addition, the method can eliminate detections located outside the road using the yielded road segmentation, so as to decrease the false positive rate. As few WAMI datasets have road masks and vehicle bounding-box annotations, we extract 512 frames from the WPAFB 2009 dataset and carefully refine the original annotations. The resulting dataset is named WAMI512. We extensively compare the proposed method with state-of-the-art methods on the WAMI512 dataset, and demonstrate superior performance in terms of efficiency and accuracy.

  18. A Study on Project Priority Evaluation Method on Road Slope Disaster Prevention Management

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Nobuyasu; Ohtsu, Hiroyasu; Izu, Ryuutarou

    To improve the safety and security of driving while coping with today's stagnant economy and frequent natural disasters, road slopes should be appropriately managed. To achieve these goals, road managers should establish project priority evaluation methods for each stage of road slope management by clarifying the social losses that would result from drops in service levels. It is important that road managers evaluate project priority properly to manage road slopes effectively. From this viewpoint, this study proposed "project priority evaluation methods" for road slope disaster prevention, which use the slope information available at each stage of road slope management under limited funds. In addition, this study investigated the effect of managing slopes in descending order of priority by evaluating the risk of slope failure. In terms of the amount of available information, staged information provision is needed, ranging from macroscopic studies, which evaluate an entire route at each stage of decision making, to semi-macroscopic and microscopic investigations for evaluating individual slopes. With limited funds, additional detailed surveys are difficult to perform. It is effective to use the slope risk assessment system, which was constructed to complement detailed data, to extract sites for precise investigation.
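
    One simple way to formalize risk-based priority of the kind discussed is to score each slope by expected loss (failure probability times social loss) and investigate the highest-risk slopes first. The slope names, probabilities, and losses below are hypothetical, not the study's actual evaluation scheme:

```python
def rank_slopes(slopes):
    """Order road slopes for detailed investigation by expected loss.

    Each slope is (name, failure_probability, social_loss); the score
    is their product, a simple risk measure, highest risk first.
    """
    return sorted(slopes, key=lambda s: s[1] * s[2], reverse=True)

# Hypothetical slopes: annual failure probability and social loss (monetary units)
slopes = [
    ("cut-A", 0.02, 500),    # expected loss 10
    ("cut-B", 0.10, 50),     # expected loss 5
    ("fill-C", 0.01, 2000),  # expected loss 20
]
ordered = rank_slopes(slopes)
```

    Note that the highest-probability slope ("cut-B") ranks last here: under an expected-loss criterion, a rare failure on a heavily used route can outrank a frequent failure on a minor one, which is why clarifying social losses matters for prioritization.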

  19. Exploration of Web Users' Search Interests through Automatic Subject Categorization of Query Terms.

    ERIC Educational Resources Information Center

    Pu, Hsiao-tieh; Yang, Chyan; Chuang, Shui-Lung

    2001-01-01

    Proposes a mechanism that carefully integrates human and machine efforts to explore Web users' search interests. The approach consists of a four-step process: extraction of core terms; construction of subject taxonomy; automatic subject categorization of query terms; and observation of users' search interests. Research findings are proved valuable…

  20. Practical automatic Arabic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for automatic license plate recognition (ALPR) systems has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. Specifically, ALPR systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g. speed limit enforcement) and commercial usages such as parking enforcement, automatic toll payment, private and public entrance control, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates being used, the requirements of an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation, and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape, and orientation, with an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because alphabet characters and numerals are classified according to the license plate organization. Experimental results for license plates of two different Arab countries show an average of 99% successful license plate localization and recognition on a total of more than 20 different images captured from a complex outdoor environment. Run times are shorter than those of conventional and many state-of-the-art methods.
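
    Of the three modules, character segmentation is commonly done with a vertical projection profile: columns with no foreground pixels separate characters. A toy sketch on a hypothetical binarized plate strip (this is a generic technique, not the paper's specific morphological pipeline):

```python
def segment_columns(img):
    """Split a binarized plate image (rows of 0/1) into character spans.

    A character is a maximal run of columns containing at least one
    foreground pixel in the vertical projection profile.
    """
    w = len(img[0])
    profile = [sum(row[c] for row in img) for c in range(w)]
    spans, start = [], None
    for c, count in enumerate(profile):
        if count and start is None:
            start = c
        elif not count and start is not None:
            spans.append((start, c))
            start = None
    if start is not None:
        spans.append((start, w))
    return spans

# Tiny hypothetical strip: two "characters" separated by blank columns
img = [
    [1, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 1, 1],
]
chars = segment_columns(img)
```

    Each returned span is a column range holding one character, which is then passed to the recognition module; projection-based splitting is what makes the pipeline insensitive to character font and size.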

  1. Salamander abundance along road edges and within abandoned logging roads in Appalachian forests.

    PubMed

    Semlitsch, Raymond D; Ryan, Travis J; Hamed, Kevin; Chatfield, Matt; Drehman, Bethany; Pekarek, Nicole; Spath, Mike; Watland, Angie

    2007-02-01

    Roads may be one of the most common disturbances in otherwise continuous forested habitat in the southern Appalachian Mountains. Despite their obvious presence on the landscape, there is limited data on the ecological effects along a road edge or the size of the "road-effect zone." We sampled salamanders at current and abandoned road sites within the Nantahala National Forest, North Carolina (U.S.A.) to determine the road-effect zone for an assemblage of woodland salamanders. Salamander abundance near the road was reduced significantly, and salamanders along the edges were predominantly large individuals. These results indicate that the road-effect zone for these salamanders extended 35 m on either side of the relatively narrow, low-use forest roads along which we sampled. Furthermore, salamander abundance was significantly lower on old, abandoned logging roads compared with the adjacent upslope sites. These results indicate that forest roads and abandoned logging roads have negative effects on forest-dependent species such as plethodontid salamanders. Our results may apply to other protected forests in the southern Appalachians and may exemplify a problem created by current and past land use activities in all forested regions, especially those related to road building for natural-resource extraction. Our results show that the effect of roads reached well beyond their boundary and that abandonment or the decommissioning of roads did not reverse detrimental ecological effects; rather, our results indicate that management decisions have significant repercussions for generations to come. Furthermore, the quantity of suitable forested habitat in the protected areas we studied was significantly reduced: between 28.6% and 36.9% of the area was affected by roads. Management and policy decisions must use current and historical data on land use to understand cumulative impacts on forest-dependent species and to fully protect biodiversity on national lands.

  2. Quantification of Gravel Rural Road Sediment Production

    NASA Astrophysics Data System (ADS)

    Silliman, B. A.; Myers Toman, E.

    2014-12-01

    Unbound rural roads are thought to be one of the largest anthropogenic sources of sediment reaching stream channels in small watersheds. This sediment deposition can reduce water quality in the streams, negatively impacting aquatic habitat as well as municipal drinking water sources. These roads are expected to see an increase in construction and use in southeast Ohio due to the expansion of shale gas development in the region. This study set out to quantify the amount of sediment these rural roads are able to produce. A controlled rain event of 12.7 millimeters of rain over a half-hour period was used to drive sediment production over 0.03-kilometer sections of gravel rural road. The 8 segments varied in many characteristics and produced from 2.0 to 8.4 kilograms of sediment per 0.03 kilometers of road, with an average production across the 8 segments of 5.5 kilograms of sediment. Sediment production was not strongly correlated with road segment slope, but traffic was found to increase sediment production by a factor of 1.1 to 3.9. These results will help inform watershed-scale sediment budgeting and best management practices for road maintenance and construction. This study also adds to the understanding of the impacts of rural road use and construction associated with the changing land use from agricultural to natural gas extraction.

  3. Conservation story takes to the road. [Potomac Edison Co. of Allegheny Power System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-02-15

    Potomac Edison Co. personnel designed a compact mobile energy-conservation display that demonstrated energy conservation applications to industry, commerce, government, and educators; this van went on the road in December 1974. Among the displays in the vehicle were a working model of a liquid-heating tank that used floating plastic balls as a cover to reduce heat losses and evaporation, a microwave oven, types of insulation and their applications, and a demand controller designed to reduce consumer peak loads and demand charges. Other displays showed temperature and automatic time controls that could be used in locations unoccupied for various periods of time, and lighting applications that stressed use of the most efficient lamps and luminaires and emphasized equipment maintenance; a heat pump, a heat-recovery wheel, a heat pipe, and a model ''run-around system'' for recovering and reusing heat from various industrial processes were also included. (EAPA Ed. note: as of January 1976, plans were to refurbish, update, and put this van back on the road during the upcoming summer). (MCW)

  4. Detection and Classification of Motor Vehicle Noise in a Forested Landscape

    NASA Astrophysics Data System (ADS)

    Brown, Casey L.; Reed, Sarah E.; Dietz, Matthew S.; Fristrup, Kurt M.

    2013-11-01

    Noise emanating from human activity has become a common addition to natural soundscapes and has the potential to harm wildlife and erode human enjoyment of nature. In particular, motor vehicles traveling along roads and trails produce high levels of both chronic and intermittent noise, eliciting varied responses from a wide range of animal species. Anthropogenic noise is especially conspicuous in natural areas where ambient background sound levels are low. In this article, we present an acoustic method to detect and analyze motor vehicle noise. Our approach uses inexpensive consumer products to record sound, sound analysis software to automatically detect sound events within continuous recordings and measure their acoustic properties, and statistical classification methods to categorize sound events. We describe an application of this approach to detect motor vehicle noise on paved, gravel, and natural-surface roads, and off-road vehicle trails in 36 sites distributed throughout a national forest in the Sierra Nevada, CA, USA. These low-cost, unobtrusive methods can be used by scientists and managers to detect anthropogenic noise events for many potential applications, including ecological research, transportation and recreation planning, and natural resource management.

  5. Automated methods of tree boundary extraction and foliage transparency estimation from digital imagery

    Treesearch

    Sang-Mook Lee; Neil A. Clark; Philip A. Araman

    2003-01-01

    Foliage transparency in trees is an important indicator for forest health assessment. This paper helps advance transparency measurement research by presenting methods of automatic tree boundary extraction and foliage transparency estimation from digital images taken from the ground of open grown trees. Extraction of proper boundaries of tree crowns is the...

  6. Using data mining techniques to predict the severity of bicycle crashes.

    PubMed

    Prati, Gabriele; Pietrantoni, Luca; Fraboni, Federico

    2017-04-01

    To investigate the factors predicting the severity of bicycle crashes in Italy, we used an observational study of official statistics. We applied two of the most widely used data mining techniques, the CHAID decision tree technique and Bayesian network analysis. We used data provided by the Italian National Institute of Statistics on road crashes that occurred on the Italian road network during the period from 2011 to 2013. We extracted 49,621 road accidents in which at least one cyclist was injured or killed from the original database, which comprised a total of 575,093 road accidents. The CHAID decision tree technique was employed to establish the relationship between the severity of bicycle crashes and factors related to crash characteristics (type of collision and opponent vehicle), infrastructure characteristics (type of carriageway, road type, road signage, pavement type, and type of road segment), cyclists (gender and age), and environmental factors (time of the day, day of the week, month, pavement condition, and weather). CHAID analysis revealed that the most important predictors were, in decreasing order of importance, road type (0.30), crash type (0.24), age of cyclist (0.19), road signage (0.08), gender of cyclist (0.07), type of opponent vehicle (0.05), month (0.04), and type of road segment (0.02). These eight most important predictors of the severity of bicycle crashes were then included as predictors of the target (i.e., severity of bicycle crashes) in Bayesian network analysis. Bayesian network analysis identified crash type (0.31), road type (0.19), and type of opponent vehicle (0.18) as the most important predictors of the severity of bicycle crashes. Copyright © 2017 Elsevier Ltd. All rights reserved.
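
    CHAID grows its tree by choosing, at each node, the predictor whose contingency table with the target is most significant under a chi-square test. A minimal sketch with hypothetical severity counts (full CHAID also merges categories and applies Bonferroni-adjusted p-values, which this omits):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical (injury, fatal) counts by level of each candidate predictor
by_road_type = [[900, 30], [400, 60]]  # urban vs. rural
by_gender = [[700, 45], [600, 45]]     # male vs. female
best = max([("road type", chi_square(by_road_type)),
            ("gender", chi_square(by_gender))], key=lambda t: t[1])
```

    With these made-up counts, severity depends much more strongly on road type than on gender, so CHAID would split on road type first; repeating this comparison down the tree is what produces the importance ordering the abstract reports.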

  7. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    PubMed

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed on a voxel- and patient-level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
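
    The simplest semantic constraint of the kind described is a functional constraint: a relation that admits only one object per subject, so two conflicting objects flag a candidate error worth sending to the crowd. A sketch with hypothetical facts and relation names:

```python
def find_conflicts(facts, functional_relations):
    """Detect candidate facts violating functional constraints.

    A relation in `functional_relations` may map a subject to only one
    object; (subject, relation) pairs with multiple objects are flagged.
    """
    seen, conflicts = {}, []
    for subj, rel, obj in facts:
        if rel in functional_relations:
            key = (subj, rel)
            if key in seen and seen[key] != obj:
                conflicts.append(key)
            seen.setdefault(key, obj)
    return conflicts

# Hypothetical machine-extracted triples, one of them extraction noise
facts = [
    ("Dublin", "capital_of", "Ireland"),
    ("Dublin", "capital_of", "Ohio"),  # conflicting extraction
    ("Liffey", "flows_through", "Dublin"),
]
conflicts = find_conflicts(facts, {"capital_of"})
```

    Only flagged pairs need to be posed as crowdsourcing questions, which is the intuition behind spending a limited labelling budget on the facts most likely to be wrong.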

  9. Leaching of different elements from subbase layers of alternative aggregates in pavement constructions.

    PubMed

    Flyhammar, P; Bendz, D

    2006-09-01

    The objective of this study was to analyze the accumulated effects of leaching in two test roads where municipal solid waste incineration (MSWI) bottom ash and aggregate from a railway embankment, respectively, were used as subbase aggregates. Solid samples from the subbase and the subgrade were collected in trenches, which were excavated perpendicular to the road extension. The samples were analyzed with respect to pH, water content, electrical conductivity, and extractable fractions of macro and trace constituents. To conclude, the spatial distribution patterns of different constituents in subbase and subgrade layers confirm the existence of two major transport processes in a road with permeable shoulders: diffusion underneath surface asphalt layers, driven by a concentration gradient directed horizontally towards the shoulder of the road, where the dissolved elements are carried away by advection.

  10. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    PubMed

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes in which the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, which results in a more efficient way of analyzing movements than manual coding.

  11. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions

    PubMed Central

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes in which the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, which results in a more efficient way of analyzing movements than manual coding. PMID:29312075

  12. Automatic diagnosis of malaria based on complete circle-ellipse fitting search algorithm.

    PubMed

    Sheikhhosseini, M; Rabbani, H; Zekri, M; Talebi, A

    2013-12-01

    Diagnosis of malaria parasitemia from blood smears is a subjective and time-consuming task for pathologists. An automatic diagnostic process would reduce the diagnostic time; it could also serve as a second opinion for pathologists and may be useful in malaria screening. This study presents an automatic method for malaria diagnosis from thin blood smears. Because the malaria life cycle begins with the formation of a ring around the parasite nucleus, the proposed approach is based mainly on curve fitting to detect the parasite ring in the blood smear. The method comprises six main phases. The first is stain object extraction, which extracts candidate objects that may be infected by malaria parasites; it includes stained-pixel extraction based on intensity and colour, and stained-object segmentation by means of a stained-circle matching process. The second step is pre-processing using nonlinear diffusion filtering. The third step detects the parasite nucleus in the image resulting from the previous step, according to image intensity. The fourth step introduces a complete search process in which a circle search identifies the direction and initial points for a direct least-squares ellipse-fitting algorithm; during the ellipse search, the parasite shape is completed while undesired regions with high error values are removed and the ellipse parameters are modified. In the fifth step, features are extracted from the parasite candidate region instead of the whole candidate object; this feature-extraction strategy, enabled by the special search process, removes the need for clump-splitting methods, and the stained-circle matching defined in the first step speeds up the whole procedure. Finally, a series of decision rules is applied to the extracted features to decide whether malaria parasites are present.
    The algorithm was applied to 26 digital images of thin blood smear films, containing 1274 objects that were either infected by a parasite or healthy. On this database, the automatic identification of malaria showed a sensitivity of 82.28% and a specificity of 98.02%. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.

  13. The impact of logging roads on dung beetle assemblages in a tropical rainforest reserve.

    PubMed

    Edwards, Felicity A; Finan, Jessica; Graham, Lucy K; Larsen, Trond H; Wilcove, David S; Hsu, Wayne W; Chey, V K; Hamer, Keith C

    2017-01-01

    The demand for timber products is facilitating the degradation and opening up of large areas of intact habitats rich in biodiversity. Logging creates an extensive network of access roads within the forest, yet these are commonly ignored or excluded when assessing impacts of logging on forest biodiversity. Here we determine the impact of these roads on the overall condition of selectively logged forests in Borneo, Southeast Asia. Focusing on dung beetles along >40 km of logging roads, we determine: (i) the magnitude and extent of edge effects alongside logging roads; (ii) whether vegetation characteristics can explain patterns in dung beetle communities; and (iii) how the inclusion of road-edge forest impacts dung beetle assemblages within the overall logged landscape. We found that while vegetation structure was significantly affected up to 34 m from the road edge, impacts on dung beetle communities penetrated much further and were discernible up to 170 m into the forest interior. We found that larger species, and particularly tunnelling species, responded more than other functional groups, which were also influenced by micro-habitat variation. We provide important new insights into the long-term ecological impacts of tropical logging. We also support calls for improved logging road design both during and after timber extraction to conserve biodiversity more effectively in production forests, for instance by considering the minimum volume of timber, per unit length of logging road, needed to justify road construction. In particular, we suggest that governments and certification bodies need to highlight more clearly the biodiversity and environmental impacts of logging roads.

  14. Schlumberger soundings near Medicine Lake, California

    USGS Publications Warehouse

    Zohdy, A.A.R.; Bisdorf, R.J.

    1990-01-01

    The use of direct current resistivity soundings to explore the geothermal potential of the Medicine Lake area in northern California proved to be challenging because of high contact resistances and winding roads. Deep Schlumberger soundings were made by expanding current electrode spacings along the winding roads. Corrected sounding data were interpreted using an automatic interpretation method. Forty-two maps of interpreted resistivity were calculated for depths extending from 20 to 1000 m. Computer animation of these 42 maps revealed that: 1) certain subtle anomalies migrate laterally with depth and can be traced to their origin, 2) an extensive volume of low-resistivity material underlies the survey area, and 3) three areas (east of Bullseye Lake, southwest of Glass Mountain, and northwest of Medicine Lake) may be favorable geothermal targets. Six interpreted resistivity maps and three cross-sections illustrate the above findings. -from Authors

  15. Implementation Of Fuzzy Automated Brake Controller Using TSK Algorithm

    NASA Astrophysics Data System (ADS)

    Mittal, Ruchi; Kaur, Magandeep

    2010-11-01

    In this paper, an application of fuzzy logic to an automatic braking system is proposed. Anti-blocking system (ABS) brake controllers pose unique challenges to the designer: a) for optimal performance, the controller must operate at an unstable equilibrium point; b) depending on road conditions, the maximum braking torque may vary over a wide range; c) the tire slippage measurement signal, crucial for controller performance, is both highly uncertain and noisy. A digital controller design was chosen which combines a fuzzy logic element and a decision logic network. The controller identifies the current road condition and generates a command braking-pressure signal depending upon the speed and distance of the train. This paper describes the design criteria and the decision and rule structure of the control system. The simulation results present the system's performance for varying train speeds and distances.
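    The Takagi-Sugeno-Kang (TSK) scheme named in the title combines rule firing strengths with linear consequents. The sketch below is a generic, minimal TSK controller for illustration only, not the paper's design: the triangular membership ranges, the four rules, and the consequent coefficients are all invented.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_brake_pressure(speed, distance):
    """First-order TSK inference: the output is the firing-strength-weighted
    average of linear consequents z_i(speed, distance)."""
    # Antecedent memberships (hypothetical ranges, km/h and metres).
    slow, fast = tri(speed, -1, 0, 60), tri(speed, 40, 100, 161)
    near, far = tri(distance, -1, 0, 500), tri(distance, 300, 1000, 1701)
    rules = [  # (firing strength, linear consequent)
        (min(fast, near), 0.6 * speed - 0.02 * distance + 40),  # brake hard
        (min(fast, far),  0.3 * speed - 0.01 * distance + 10),
        (min(slow, near), 0.2 * speed - 0.01 * distance + 20),
        (min(slow, far),  0.0),                                 # no braking
    ]
    w = sum(r[0] for r in rules)
    return sum(wi * zi for wi, zi in rules) / w if w else 0.0
```

A fast train close to an obstacle yields a much larger pressure command than a slow, distant one, which is the qualitative behavior the abstract describes.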

  16. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    PubMed

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.

  17. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics

    PubMed Central

    Chriskos, Panteleimon; Frantzidis, Christos A.; Gkivogkli, Polyxeni T.; Bamidis, Panagiotis D.; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the “ENVIHAB” facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging. PMID:29628883

  18. Exploiting automatic on-line renewable molecularly imprinted solid-phase extraction in lab-on-valve format as front end to liquid chromatography: application to the determination of riboflavin in foodstuffs.

    PubMed

    Oliveira, Hugo M; Segundo, Marcela A; Lima, José L F C; Miró, Manuel; Cerdà, Victor

    2010-05-01

    In the present work, an on-line automatic renewable molecularly imprinted solid-phase extraction (MISPE) protocol for sample preparation prior to liquid chromatographic analysis is proposed for the first time. The automatic microscale procedure was based on the bead injection (BI) concept under the lab-on-valve (LOV) format, using a multisyringe burette as the propulsion unit for handling solutions and suspensions. High precision in handling the suspensions containing irregularly shaped molecularly imprinted polymer (MIP) particles was attained, enabling the use of commercial MIP as a renewable sorbent. The features of the proposed BI-LOV manifold also allowed strict control of the different steps within the extraction protocol, which is essential for promoting selective interactions in the cavities of the MIP. Using this on-line method, it was possible to extract and quantify riboflavin from different foodstuff samples in the range between 0.450 and 5.00 mg L(-1) after processing 1,000 microL of sample (infant milk, pig liver extract, and energy drink) without any prior treatment. For milk samples, LOD and LOQ values were 0.05 and 0.17 mg L(-1), respectively. The method was successfully applied to the analysis of two certified reference materials (NIST 1846 and BCR 487) with high precision (RSD < 5.5%). Considering the downscaling and simplification of the sample preparation protocol and the simultaneous performance of extraction and chromatographic assays, a cost-effective, enhanced-throughput (six determinations per hour) methodology for the determination of riboflavin in foodstuff samples is deployed here.

  19. Automatic Extraction of Destinations, Origins and Route Parts from Human Generated Route Directions

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Mitra, Prasenjit; Klippel, Alexander; Maceachren, Alan

    Researchers from the cognitive and spatial sciences are studying text descriptions of movement patterns in order to examine how humans communicate and understand spatial information. In particular, route directions offer a rich source of information on how cognitive systems conceptualize movement patterns by segmenting them into meaningful parts. Route directions are composed using a plethora of cognitive spatial organization principles: changing levels of granularity, hierarchical organization, incorporation of cognitively and perceptually salient elements, and so forth. Identifying such information in text documents automatically is crucial for enabling machine-understanding of human spatial language. The benefits are: a) creating opportunities for large-scale studies of human linguistic behavior; b) extracting and georeferencing salient entities (landmarks) that are used by human route direction providers; c) developing methods to translate route directions to sketches and maps; and d) enabling queries on large corpora of crawled/analyzed movement data. In this paper, we introduce our approach and implementations that bring us closer to the goal of automatically processing linguistic route directions. We report on research directed at one part of the larger problem, that is, extracting the three most critical parts of route directions and movement patterns in general: origin, destination, and route parts. We use machine-learning based algorithms to extract these parts of routes, including, for example, destination names and types. We prove the effectiveness of our approach in several experiments using hand-tagged corpora.

  20. Abbreviation definition identification based on automatic precision estimates.

    PubMed

    Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John

    2008-09-25

    The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.
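    The abstract benchmarks against the well-known Schwartz and Hearst abbreviation algorithm. As context, here is a minimal Python sketch of that baseline (not the authors' pseudo-precision method): the core matches the short form's characters right-to-left against the text preceding a parenthesized abbreviation.

```python
import re

def best_long_form(short, cand):
    """Schwartz-Hearst core: align short-form characters right-to-left
    against candidate text; the first character must start a word."""
    s, l = len(short) - 1, len(cand) - 1
    while s >= 0:
        c = short[s].lower()
        if not c.isalnum():          # skip punctuation in the short form
            s -= 1
            continue
        # Scan left for a matching character; the match for the first
        # short-form character must begin a word.
        while (l >= 0 and cand[l].lower() != c) or \
              (s == 0 and l > 0 and cand[l - 1].isalnum()):
            l -= 1
        if l < 0:
            return None              # no valid alignment exists
        s -= 1
        l -= 1
    return cand[cand.rfind(' ', 0, l + 1) + 1:]

def extract_pairs(text):
    """Find (short form, long form) pairs for the pattern 'long form (SF)'."""
    pairs = []
    for m in re.finditer(r'\(([A-Za-z][\w-]*)\)', text):
        sf = m.group(1)
        words = text[:m.start()].rstrip().split()
        # Candidate window: min(|SF| + 5, |SF| * 2) preceding words.
        cand = ' '.join(words[-min(len(sf) + 5, len(sf) * 2):])
        lf = best_long_form(sf, cand)
        if lf:
            pairs.append((sf, lf))
    return pairs
```

This sketch omits the paper's refinements (validity filters on the short form, and the pseudo-precision estimates that order the authors' strategies).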

  1. Earliest tea as evidence for one branch of the Silk Road across the Tibetan Plateau

    PubMed Central

    Lu, Houyuan; Zhang, Jianping; Yang, Yimin; Yang, Xiaoyan; Xu, Baiqing; Yang, Wuzhan; Tong, Tao; Jin, Shubo; Shen, Caiming; Rao, Huiyun; Li, Xingguo; Lu, Hongliang; Fuller, Dorian Q.; Wang, Luo; Wang, Can; Xu, Deke; Wu, Naiqin

    2016-01-01

    Phytoliths and biomolecular components extracted from ancient plant remains from Chang’an (Xi’an, the city where the Silk Road begins) and Ngari (Ali) in western Tibet, China, show that tea was grown 2100 years ago to cater for the drinking habits of the Western Han Dynasty (207 BCE-9 CE), and then carried toward central Asia by ca. 200 CE, several hundred years earlier than previously recorded. The earliest physical evidence of tea from both the Chang’an and Ngari regions suggests that a branch of the Silk Road across the Tibetan Plateau was established by the second to third century CE. PMID:26738699

  2. Study on the extraction method of tidal flat area in northern Jiangsu Province based on remote sensing waterlines

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanyuan; Gao, Zhiqiang; Liu, Xiangyang; Xu, Ning; Liu, Chaoshun; Gao, Wei

    2016-09-01

    Reclamation has caused significant dynamic change in the coastal zone. The tidal flat is an unstable reserve land resource, and its study is of considerable importance. To achieve efficient extraction of tidal-flat area information, this paper takes Rudong County in Jiangsu Province as the study area, using HJ1A/1B images as the data source. Drawing on previous research experience and a literature review, object-oriented classification was chosen as a semi-automatic method to generate waterlines. The waterlines were then analyzed with the DSAS software to obtain tide points, and the outer boundary points were extracted automatically using Python to determine the 2014 extent of the tidal flats of Rudong County. The extracted area was 55,182 hm2; accuracy was verified with a confusion matrix, giving a kappa coefficient of 0.945. The method addresses deficiencies of previous studies, and the free availability of the tools used makes it easy to generalize.

  3. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper, a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of faulty components. The method uses the sample points at the moment when abnormal sound appears as the starting position for each segment. In this study, the abnormal sound belonged to the shock fault type; thus, a starting-position search method based on gradient variance was adopted. A similarity coefficient between two equally sized signals is defined, and by comparison against this similarity measure the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and that the extracted component can be used to identify the fault type.

  4. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
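    The frequency-domain branch of such a pipeline can be illustrated with a hand-rolled Haar discrete wavelet transform computing per-subband energies for a beat. This is a simplified stand-in for the paper's DWT features: the wavelet choice (Haar), the number of levels, and energy as the feature are assumptions for illustration.

```python
def haar_step(signal):
    """One level of the orthonormal Haar DWT (even-length input)."""
    root2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / root2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / root2 for i in range(0, len(signal), 2)]
    return approx, detail

def dwt_features(beat, levels=3):
    """Subband-energy features for one ECG beat: one detail energy per
    level plus the final approximation energy. Because the transform is
    orthonormal, total energy is conserved across the subbands."""
    feats, cur = [], list(beat)
    for _ in range(levels):
        cur, detail = haar_step(cur)
        feats.append(sum(d * d for d in detail))
    feats.append(sum(a * a for a in cur))
    return feats
```

Such a fixed-length feature vector per beat is the kind of input a downstream classifier (an SVM in the paper) can consume.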

  5. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  6. Automatic extraction of disease-specific features from Doppler images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows large agreement as well as identification of new cases missed by echocardiographers.

  7. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results, and localizing landmarks manually is also difficult and time-consuming. We applied optimization theory to improve thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate and robust registration method that yields satisfactory registration results.
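    As background for the interpolation step, a thin-plate spline through 2-D landmarks can be sketched in pure Python. This is the generic textbook TPS (kernel r² log r², a constant multiple of the classic r² log r), fitting one scalar displacement component per landmark; it is not the authors' unconstrained-optimization smoothing, and the landmark values in the usage below are invented.

```python
import math

def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def _U(r2):
    """TPS radial kernel, written as r^2 log(r^2)."""
    return r2 * math.log(r2) if r2 > 0 else 0.0

def fit_tps(src, vals):
    """Fit f(x, y) interpolating vals[i] at src[i], as a sum of radial
    terms plus an affine part, with the usual side conditions."""
    n = len(src)
    A = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = [0.0] * (n + 3)
    for i, (xi, yi) in enumerate(src):
        for j, (xj, yj) in enumerate(src):
            A[i][j] = _U((xi - xj) ** 2 + (yi - yj) ** 2)
        A[i][n:n + 3] = [1.0, xi, yi]
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi
        b[i] = vals[i]
    w = _solve(A, b)
    def f(x, y):
        val = w[n] + w[n + 1] * x + w[n + 2] * y
        for j, (xj, yj) in enumerate(src):
            val += w[j] * _U((x - xj) ** 2 + (y - yj) ** 2)
        return val
    return f
```

A full 2-D warp uses two such splines, one per displacement component; the interpolation property (the spline passes exactly through every landmark) is what makes landmark errors propagate into the registration, motivating the paper's smoothed variant.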

  8. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    PubMed

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction.
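    The building block of the proposed index, the fractal dimension of a word's occurrence positions, can be estimated by box counting. The sketch below is a simplified illustration (halving box sizes, least-squares slope of log-count against log-inverse-size), not the authors' exact estimator; comparing the result against a shuffled text would give the "degree of fractality" the abstract describes.

```python
import math

def box_dimension(positions, n):
    """Box-counting dimension of the set of token positions a word
    occupies in a text of n tokens; box sizes halve from n down to 2."""
    xs, ys = [], []
    s = n
    while s >= 2:
        count = len({p // s for p in positions})  # occupied boxes of size s
        xs.append(math.log(n / s))                # log(1 / box size), up to a constant
        ys.append(math.log(count))
        s //= 2
    # Least-squares slope of log(count) versus log(1 / size).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A word spread uniformly through the text has dimension near 1, while a word clustered in one region (the behavior the paper attributes to meaning-bearing words) has a dimension well below 1.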

  9. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    PubMed Central

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction. PMID:26091207

  10. [The automatic iris map overlap technology in computer-aided iridiagnosis].

    PubMed

    He, Jia-feng; Ye, Hu-nian; Ye, Miao-yuan

    2002-11-01

    In the paper, iridology and computer-aided iridiagnosis technologies are briefly introduced and the extraction method of the collarette contour is then investigated. The iris map can be overlapped on the original iris image based on collarette contour extraction. The research on collarette contour extraction and iris map overlap is of great importance to computer-aided iridiagnosis technologies.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Casey, Cameron P.; Zheng, Xueyun

    Motivation: Drift tube ion mobility spectrometry (DTIMS) is increasingly implemented in high-throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS coupled with mass spectrometry, and to compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of the information necessary to create a reference library containing accurate masses, DTIMS arrival times, and CCSs for use in high-throughput omics analyses. Results: We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were identical to those calculated by hand and within error of those calculated using commercially available instrument vendor software.

  12. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    PubMed

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  13. Construction of Green Tide Monitoring System and Research on its Key Techniques

    NASA Astrophysics Data System (ADS)

    Xing, B.; Li, J.; Zhu, H.; Wei, P.; Zhao, Y.

    2018-04-01

    As a marine natural disaster, green tide has appeared every year along the Qingdao coast since the large-scale bloom in 2008, bringing great losses to the region, so it is of great value to obtain real-time dynamic information about green tide distribution. In this study, both optical and microwave remote sensing are employed for green tide monitoring, with a specific data processing flow and a green tide information extraction algorithm designed for the different characteristics of the optical and microwave data. For extracting the spatial distribution of green tide, an automatic extraction algorithm for distribution boundaries is designed based on the mathematical morphology operations of dilation and erosion. Key issues in the extraction, including the division of green tide regions, the derivation of basic distributions, the limitation of distribution boundaries, and the elimination of islands, have been solved, and the automatic generation of green tide distribution boundaries from the remote sensing extraction results is realized. Finally, a green tide monitoring system is built on IDL/GIS secondary development in an integrated RS/GIS environment, achieving the integration of remote sensing monitoring and information extraction.
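    The dilation/erosion boundary idea in the record above can be shown on a small binary raster. This is a generic 3×3 morphological boundary extractor in plain Python, not the system's implementation (which runs on remote sensing rasters):

```python
def erode(mask):
    # binary erosion with a 3x3 structuring element
    h, w = len(mask), len(mask[0])
    return [[int(all(0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)))
             for j in range(w)] for i in range(h)]

def dilate(mask):
    # binary dilation with a 3x3 structuring element
    h, w = len(mask), len(mask[0])
    return [[int(any(0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)))
             for j in range(w)] for i in range(h)]

def boundary(mask):
    # morphological boundary: region pixels that do not survive erosion
    er = erode(mask)
    return [[mask[i][j] & (1 - er[i][j]) for j in range(len(mask[0]))]
            for i in range(len(mask))]
```

Dilation followed by erosion (closing) fills small gaps in a mask, while erosion followed by dilation (opening) removes small islands, which maps onto the island-elimination issue the abstract mentions.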

  14. A hybrid model based on neural networks for biomedical relation extraction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Zhang, Shaowu; Sun, Yuanyuan; Yang, Liang

    2018-05-01

    Biomedical relation extraction automatically extracts high-quality biomedical relations from biomedical texts, which is a vital step in mining the biomedical knowledge hidden in the literature. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are the two major neural network models for biomedical relation extraction. Neural network-based methods typically focus on the sentence sequence and employ RNNs or CNNs to learn latent features from it separately. However, RNNs and CNNs have their own advantages for biomedical relation extraction, so combining them may improve performance. In this paper, we present a hybrid model for the extraction of biomedical relations that combines RNNs and CNNs. First, the shortest dependency path (SDP) is generated from the dependency graph of the candidate sentence. To make full use of the SDP, we divide it into a dependency word sequence and a relation sequence. Then, RNNs and CNNs are employed to automatically learn features from the sentence sequence and the dependency sequences, respectively. Finally, the output features of the RNNs and CNNs are combined to detect and extract biomedical relations. We evaluate our hybrid model using five public protein-protein interaction (PPI) corpora and a drug-drug interaction (DDI) corpus. The experimental results suggest that the advantages of RNNs and CNNs in biomedical relation extraction are complementary, and that combining them can effectively boost extraction performance. Copyright © 2018 Elsevier Inc. All rights reserved.
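    The shortest-dependency-path step is independent of the neural model and easy to sketch: treat the dependency parse as an undirected graph and run breadth-first search between the two entity tokens. This is a generic sketch, and the edge list in the test is a hypothetical parse, not from the paper:

```python
from collections import deque

def shortest_dependency_path(edges, start, end):
    # edges: (head, dependent) pairs of a dependency parse, treated as
    # an undirected graph; returns the token sequence of the SDP
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # no path (e.g., token not in the parse)
```

The returned token sequence is what would then be split into the word sequence and relation sequence fed to the RNN/CNN branches.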

  15. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

    Northern Tibet lies in the plateau's sub-cold arid climate zone; it is rarely visited and its geological working conditions are very poor, but stratum exposures are good and human interference is very small. Research on the automatic classification and extraction of remote sensing geological information there is therefore representative and has good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various geological information were mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Accuracy analysis against existing geological maps shows an overall accuracy of 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new approach for the automatic extraction of remote sensing geological information.

  16. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing.

    PubMed

    Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk

    2015-01-01

    Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important, so clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
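    Fuzzy ART itself is compact enough to sketch. The version below uses complement coding and fast learning on 2-D toy points; the vigilance value and inputs are illustrative, whereas the paper applies the algorithm to the segmented organ area of ultrasound images:

```python
def fuzzy_art(points, rho=0.85, alpha=0.001, beta=1.0):
    # Fuzzy ART with complement coding and fast learning (beta = 1).
    # rho: vigilance, alpha: choice parameter. Inputs are 2-D in [0, 1].
    weights, labels = [], []
    for x, y in points:
        I = (x, y, 1.0 - x, 1.0 - y)  # complement coding, |I| = 2
        # rank existing categories by the choice function T_j
        order = sorted(range(len(weights)),
                       key=lambda j: -sum(min(a, b) for a, b in zip(I, weights[j]))
                                     / (alpha + sum(weights[j])))
        for j in order:
            m = [min(a, b) for a, b in zip(I, weights[j])]  # fuzzy AND
            if sum(m) / 2.0 >= rho:  # vigilance (resonance) test
                weights[j] = [beta * a + (1 - beta) * b
                              for a, b in zip(m, weights[j])]
                labels.append(j)
                break
        else:
            weights.append(list(I))  # no resonance: commit a new category
            labels.append(len(weights) - 1)
    return labels
```

With the illustrative vigilance of 0.85, two well-separated point clusters end up in two categories, which is the unsupervised grouping behavior the method relies on.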

  17. The automatic extraction of pitch perturbation using microcomputers: some methodological considerations.

    PubMed

    Deem, J F; Manning, W H; Knack, J V; Matesich, J S

    1989-09-01

    A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape recorded input, and the influence of interpolation. Zero crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
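    The positive-going zero-crossing procedure with interpolation, reported above as yielding the lowest jitter on sine stimuli, can be sketched as follows. This is a generic illustration, not the PAEJ code; jitter here is the mean absolute period-to-period difference:

```python
def zero_crossings(signal, fs):
    # positive-going zero crossings with linear interpolation,
    # returned as crossing times in seconds
    times = []
    for i in range(1, len(signal)):
        a, b = signal[i - 1], signal[i]
        if a < 0 <= b:
            frac = -a / (b - a)  # interpolate within the sample interval
            times.append((i - 1 + frac) / fs)
    return times

def mean_jitter(signal, fs):
    # mean absolute difference between consecutive periods
    # (assumes at least three detected crossings)
    t = zero_crossings(signal, fs)
    periods = [b - a for a, b in zip(t, t[1:])]
    diffs = [abs(q - p) for p, q in zip(periods, periods[1:])]
    return sum(diffs) / len(diffs)
```

On a clean periodic test signal the measured jitter should be near zero, which is the property the study used to compare its twelve boundary-marking implementations.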

  18. Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area

    NASA Astrophysics Data System (ADS)

    Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.

    2018-04-01

    Synthetic Aperture Radar (SAR) provides day-and-night, all-weather observation of the Earth at high resolution, and it is widely used in polar research on sea ice, ice shelves, and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it underpins estimates of the glacier's calving rate and flux. In this abstract, an automatic algorithm for glacier front extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the Sentinel-1 SAR amplitude imagery into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method that reduces the 2D binary map to 1D binary profiles, and the final frontal position of the calving glacier is the optimal profile selected from the different averaged segmented profiles. The experiment shows that the detection algorithm can automatically extract the frontal position of a glacier from SAR data with high efficiency.
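    The SO-CFAR binarization can be illustrated in one dimension: the noise estimate at each cell is the smaller of the means over a leading and a trailing training window, separated from the cell by guard cells. This is a generic smallest-of CFAR with illustrative parameters; the paper applies the idea to 2-D SAR amplitude imagery:

```python
def so_cfar(x, guard, train, scale):
    # Smallest-Of CFAR: noise level at each cell is the minimum of the
    # leading and trailing training-window means; detect if the cell
    # exceeds scale * noise.
    n = len(x)
    det = [0] * n
    for i in range(n):
        lead = x[max(0, i - guard - train): max(0, i - guard)]
        trail = x[i + guard + 1: i + guard + 1 + train]
        means = [sum(w) / len(w) for w in (lead, trail) if w]
        if not means:
            continue
        noise = min(means)
        det[i] = int(x[i] > scale * noise)
    return det
```

Taking the smaller of the two window means keeps a bright return on one side (for example the glacier face itself) from inflating the noise estimate and masking the detection.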

  19. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955

  20. Fast modal extraction in NASTRAN via the FEER computer program. [based on automatic matrix reduction method for lower modes of structures with many degrees of freedom

    NASA Technical Reports Server (NTRS)

    Newman, M. B.; Pipano, A.

    1973-01-01

    A new eigensolution routine, FEER (Fast Eigensolution Extraction Routine), used in conjunction with NASTRAN at Israel Aircraft Industries is described. The FEER program is based on an automatic matrix reduction scheme whereby the lower modes of structures with many degrees of freedom can be accurately extracted from a tridiagonal eigenvalue problem whose size is of the same order of magnitude as the number of required modes. The process is effected without arbitrary lumping of masses at selected node points or selection of nodes to be retained in the analysis set. The results of computational efficiency studies are presented, showing major arithmetic operation counts and actual computer run times of FEER as compared to other methods of eigenvalue extraction, including those available in the NASTRAN READ module. It is concluded that the tridiagonal reduction method used in FEER would serve as a valuable addition to NASTRAN for highly increased efficiency in obtaining structural vibration modes.
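    FEER's central idea, reducing a large symmetric eigenproblem to a small tridiagonal one whose size is of the order of the number of wanted modes, is what the Lanczos process does. The sketch below is a textbook Lanczos tridiagonalization with full reorthogonalization (assuming no breakdown), not the FEER code itself:

```python
import numpy as np

def lanczos(A, k, seed=0):
    # Reduce symmetric A to a k-by-k tridiagonal T and return T's
    # eigenvalues (Ritz values). For k << n the extreme eigenvalues of A
    # are approximated; with k = n they match A's spectrum.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(max(k - 1, 0))
    Q[:, 0] = q
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)  # assumes no breakdown (beta > 0)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)
```

For vibration problems one typically wants the lowest modes, so in practice the operator is shifted or inverted before the reduction; the sketch only shows the tridiagonal reduction step.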

  1. Fractal Dimension Change Point Model for Hydrothermal Alteration Anomalies in Silk Road Economic Belt, the Beishan Area, Gansu, China

    NASA Astrophysics Data System (ADS)

    Han, H. H.; Wang, Y. L.; Ren, G. L.; LI, J. Q.; Gao, T.; Yang, M.; Yang, J. L.

    2016-11-01

    Remote sensing plays an important role in mineral exploration under the "One Belt One Road" plan. One of its applications is extracting and locating hydrothermal alteration zones related to mineral deposits. At present, methods for extracting alteration anomalies from principal component images mainly rely on the data's normal distribution, without considering the nonlinear characteristics of geological anomalies. In this study, a Fractal Dimension Change Point Model (FDCPM), calculated from the self-similarity and mutability of alteration anomalies, is employed to quantitatively determine the critical threshold of alteration anomalies. The theory and mechanism of the model are elaborated through an experiment with ASTER data in the Beishan mineralization belt, and the results are compared with a traditional method (the De-Interfered Anomalous Principal Component Thresholding Technique, DIAPCTT). The findings produced by FDCPM agree well with a mounting body of evidence from different perspectives, with an extraction accuracy of over 80%, indicating that FDCPM is an effective method for extracting remote sensing alteration anomalies and could serve as a useful tool for mineral exploration in similar areas of the Silk Road Economic Belt.

  2. Residential proximity to major roads and placenta/birth weight ratio.

    PubMed

    Yorifuji, Takashi; Naruse, Hiroo; Kashima, Saori; Murakoshi, Takeshi; Tsuda, Toshihide; Doi, Hiroyuki; Kawachi, Ichiro

    2012-01-01

    Exposure to air pollution has been demonstrated to increase the risk of preterm birth and low birth weight. We examined whether proximity to major roads (as a marker of exposure to air pollution) is associated with an increased placenta/birth weight ratio (a biomarker of placental transport function). Data on parental characteristics and birth outcomes were extracted from the database maintained by a major hospital in Shizuoka Prefecture, Japan. We restricted the analysis to mothers who delivered liveborn single births from 1997 to 2008 (n = 14,189). Using geocoded residential information, each birth was classified according to proximity to major roads. We examined the association between proximity to major roads and the placenta/birth weight ratio using multiple linear regression. Proximity to major roads was associated with a higher placenta/birth weight ratio: after adjusting for potential confounders, living within 200 m of a major road increased the ratio by 0.48% (95% CI = 0.15 to 0.80). In addition, proximity to major roads was associated with lower placenta weight and birth weight. These associations were stronger among participants living closer to major roads. Exposure to traffic-related air pollution is associated with a higher placenta/birth weight ratio; impaired placental oxygen and nutrient transport might be a mechanism explaining the observed association between air pollution and low birth weight as well as preterm birth. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. [Terahertz Spectroscopic Identification with Deep Belief Network].

    PubMed

    Ma, Shuai; Shen, Tao; Wang, Rui-qi; Lai, Hua; Yu, Zheng-tao

    2015-12-01

    Feature extraction and classification are the key issues in terahertz spectroscopic identification. Because many materials have no apparent absorption peaks in the terahertz band, it is difficult to extract their terahertz spectral features and identify them. To this end, a novel terahertz spectroscopy identification approach using a Deep Belief Network (DBN) was studied in this paper, combining the advantages of the DBN with a K-Nearest Neighbors (KNN) classifier. First, cubic spline interpolation and an S-G filter were used to normalize the terahertz transmission spectra of eight substances (ATP, acetylcholine bromide, bifenthrin, buprofezin, carbazole, bleomycin, Buckminster and cyclotriphosphazene) in the range of 0.9-6 THz. Second, the DBN model was built from two restricted Boltzmann machines (RBMs) and trained layer by layer in an unsupervised manner. Instead of using handmade features, the DBN was employed to learn suitable features automatically from the raw input data. Finally, a KNN classifier was applied to identify the terahertz spectra. Experimental results show that the features learned by the DBN can identify the terahertz spectra of different substances with a recognition rate of over 90%, demonstrating that the proposed method can automatically extract effective features of terahertz spectra. Furthermore, the KNN classifier was compared with BP, SOM, and RBF neural network classifiers, and its recognition rate was better than that of the other three. Automatically extracting terahertz spectral features with a DBN greatly reduces the workload of feature extraction, and the proposed method shows a promising future for identifying terahertz spectra at scale.
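    The KNN classification stage is standard and can be sketched over any feature vectors, such as the DBN-learned features described above. A minimal illustration, not the paper's implementation:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # k-nearest-neighbour majority vote using squared Euclidean distance
    # (monotonic with Euclidean, so the ranking is identical)
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In the paper's pipeline, `train_X` would hold DBN feature vectors of known substances and `x` the feature vector of an unknown spectrum.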

  4. Automatic definition of the oncologic EHR data elements from NCIT in OWL.

    PubMed

    Cuggia, Marc; Bourdé, Annabel; Turlin, Bruno; Vincendeau, Sebastien; Bertaud, Valerie; Bohec, Catherine; Duvauferrier, Régis

    2011-01-01

    Semantic interoperability based on ontologies allows systems to combine their information and process it automatically. The ability to extract meaningful fragments from an ontology is key to ontology re-use, and the construction of a subset helps structure clinical data entries. The aim of this work is to provide a method for extracting a set of concepts for a specific domain, in order to help define the data elements of an oncologic EHR. A generic extraction algorithm was developed to extract from the NCIT, for a specific disease (i.e., prostate neoplasm), all the concepts of interest into a sub-ontology. We compared the extracted concepts to the manually encoded concepts contained in the multi-disciplinary meeting report form (MDMRF). We extracted two sub-ontologies: sub-ontology 1 using a single key concept and sub-ontology 2 using 5 additional keywords. The coverage of sub-ontology 2 relative to the MDMRF concepts was 51%; the low coverage is due to missing definitions or mis-classification of NCIT concepts. By providing a subset of concepts focused on a particular domain, this extraction method helps optimize the binding of data elements and maintain and enrich a domain ontology.

  5. Reconciling certification and intact forest landscape conservation.

    PubMed

    Kleinschroth, Fritz; Garcia, Claude; Ghazoul, Jaboury

    2018-05-29

    In 2014, the Forest Stewardship Council (FSC) added a new criterion to its principles that requires protection of intact forest landscapes (IFLs). An IFL is an extensive area of forest that lacks roads and other signs of human activity as detected through remote sensing. In the Congo basin, our analysis of road networks in formally approved concessionary logging areas revealed greater loss of IFL in certified than in noncertified concessions. In areas of informal (i.e., nonregulated) extraction, road networks are known to be less detectable by remote sensing. Under the current definition of IFL, companies certified under FSC standards are likely to be penalized relative to the noncertified as well as the informal logging sector on account of their planned road networks, despite an otherwise better standard of forest management. This could ultimately undermine certification and its wider adoption, with implications for the future of sustainable forest management.

  6. ESARR: enhanced situational awareness via road sign recognition

    NASA Astrophysics Data System (ADS)

    Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.

    2010-04-01

    The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign images. In this paper, ESARR development progress will be reported on, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system will be described along with the challenges and progress in overcoming them.

  7. Estimation of road profile variability from measured vehicle responses

    NASA Astrophysics Data System (ADS)

    Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.

    2016-05-01

    When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.

  8. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely, motor imagery and emotion recognition.

  9. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information

    PubMed Central

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely, motor imagery and emotion recognition. PMID:26380294
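    One simple reading of "automatically identified using a priori artifact information" is to correlate each separated component with a stored artifact template and flag high-correlation components before reconstruction. The sketch below is that idea in miniature; the threshold and template are illustrative, not from the paper, and it assumes non-constant signals:

```python
def correlation(a, b):
    # Pearson correlation of two equal-length sequences
    # (assumes neither sequence is constant)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def flag_artifact_components(components, template, threshold=0.8):
    # indices of components resembling the a priori artifact template
    return [i for i, c in enumerate(components)
            if abs(correlation(c, template)) >= threshold]
```

In a wavelet-ICA pipeline the flagged components would be zeroed before the inverse transforms reconstruct the artifact-free EEG.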

  10. Review of automatic detection of pig behaviours by using image analysis

    NASA Astrophysics Data System (ADS)

    Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao

    2017-06-01

    Automatic detection of the lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation workload of staff, help staff detect diseases or injuries of pigs early during breeding, and improve the management efficiency of the swine industry. This study describes the progress of pig behaviour detection based on image analysis, and advances in image segmentation of the pig body, segmentation of adhering (touching) pigs, and extraction of pig behaviour characteristic parameters. Challenges for achieving automatic detection of pig behaviours are summarized.

  11. Understanding fatal older road user crash circumstances and risk factors.

    PubMed

    Koppel, Sjaan; Bugeja, Lyndal; Smith, Daisy; Lamb, Ashne; Dwyer, Jeremy; Fitzharris, Michael; Newstead, Stuart; D'Elia, Angelo; Charlton, Judith

    2018-02-28

    This study used medicolegal data to investigate fatal older road user (ORU) crash circumstances and risk factors relating to the four key components of the Safe System approach (roads and roadsides, vehicles, road users, and speeds) to identify areas of priority for targeted prevention activity. The Coroners Court of Victoria's Surveillance Database was searched to identify coronial records with at least one deceased ORU in the state of Victoria, Australia, for 2013-2014. Information relating to the ORU, crash characteristics and circumstances, and risk factors was extracted and analyzed. The average rate of fatal ORU crashes per 100,000 population was 8.1 (95% confidence interval [CI] 6.0-10.2), which was more than double the average rate of fatal middle-aged road user crashes (3.6, 95% CI 2.5-4.6). There was a significant relationship between age group and deceased road user type (χ2(15, N = 226) = 3.56, p < 0.001). The proportion of deceased drivers decreased with age, whereas the proportion of deceased pedestrians increased with age. The majority of fatal ORU crashes involved a counterpart (another vehicle: 59.4%; fixed/stationary object: 25.4%), and occurred "on road" (87.0%), on roads that were paved (94.2%), dry (74.2%), and had light traffic volume (38.3%). Road user error was identified by the police and/or coroner for the majority of fatal ORU crashes (57.9%), with a significant proportion of deceased ORUs deemed to have "misjudged" (40.9%) or "failed to yield" (37.9%). Road user error was the most significant risk factor identified in fatal ORU crashes, which suggests that there is a limited capacity of the Victorian road system to fully accommodate road user errors. Initiatives related to safer roads and roadsides, vehicles, and speed zones, as well as behavioral approaches, are key areas of priority for targeted activity to prevent fatal older road user crashes in the future.

  12. Vehicle classification using mobile sensors.

    DOT National Transportation Integrated Search

    2013-04-01

    In this research, the feasibility of using mobile traffic sensors for binary vehicle classification on arterial roads is investigated. Features (e.g. : speed related, acceleration/deceleration related, etc.) are extracted from vehicle traces (passeng...

  13. Road networks as collections of minimum cost paths

    NASA Astrophysics Data System (ADS)

    Wegner, Jan Dirk; Montoya-Zegarra, Javier Alexander; Schindler, Konrad

    2015-10-01

    We present a probabilistic representation of network structures in images. Our target application is the extraction of urban roads from aerial images. Roads appear as thin, elongated, partially curved structures forming a loopy graph, and this complex layout requires a prior that goes beyond standard smoothness and co-occurrence assumptions. In the proposed model the network is represented as a union of 1D paths connecting distant (super-)pixels. A large set of putative candidate paths is constructed in such a way that they include the true network as much as possible, by searching for minimum cost paths in the foreground (road) likelihood. Selecting the optimal subset of candidate paths is posed as MAP inference in a higher-order conditional random field. Each path forms a higher-order clique with a type of clique potential, which attracts the member nodes of cliques with high cumulative road evidence to the foreground label. That formulation induces a robust PN-Potts model, for which a global MAP solution can be found efficiently with graph cuts. Experiments with two road data sets show that the proposed model significantly improves per-pixel accuracies as well as the overall topological network quality with respect to several baselines.
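    The candidate-path construction, minimum cost paths in the road likelihood, can be sketched with plain Dijkstra on a cost grid, where each cell's cost can be thought of as a negative log road likelihood. This is a generic sketch of that one step, not the paper's higher-order CRF:

```python
import heapq

def min_cost_path(cost, start, goal):
    # Dijkstra over a 2-D cost grid (cost of entering each cell,
    # including the start cell), 4-connected moves
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            path = [(i, j)]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get((i, j), float("inf")):
            continue  # stale queue entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    return float("inf"), []
```

Running this between many pairs of distant superpixels yields the candidate-path set from which the CRF then selects the optimal subset.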

  14. Calibrated Multi-Temporal Edge Images for City Infrastructure Growth Assessment and Prediction

    NASA Astrophysics Data System (ADS)

    Al-Ruzouq, R.; Shanableh, A.; Boharoon, Z.; Khalil, M.

    2018-03-01

    Urban growth, or urbanization, is the gradual process of a city's population growth and infrastructure development, typically demonstrated by the expansion of the city's infrastructure, mainly its roads and buildings. Uncontrolled urban growth has been responsible for several problems, including degraded living environments, strained drinking water supply, noise and air pollution, waste management burdens, traffic congestion, and altered hydraulic processes. Accurate identification of urban growth is therefore of great importance for urban planning and water/land management. Recent advances in satellite imagery, in terms of improved spatial and temporal resolution, allow for efficient identification of change patterns and the prediction of built-up areas. In this study, two approaches were adopted to quantify and assess the pattern of urbanization in Ajman City, UAE, during the last three decades. The first approach relies on image processing techniques applied to multi-temporal Landsat satellite images with ground resolution between 15 and 60 meters; the derived edge images (roads and buildings) were used as the basis of change detection. The second approach relies on digitizing features from high-resolution images captured in different years, and was adopted as the reference and ground truth to calibrate the edges extracted from the Landsat images. It was found that the urbanized area increased almost 12-fold during 1975-2015; the growth of buildings and roads was almost parallel until 2005, when road expansion saw a steep increase due to the vertical expansion of the city. The extracted edge features were successfully used for change detection and quantification in terms of buildings and roads.

  15. Vertical-Control Subsystem for Automatic Coal Mining

    NASA Technical Reports Server (NTRS)

    Griffiths, W. R.; Smirlock, M.; Aplin, J.; Fish, R. B.; Fish, D.

    1984-01-01

    Guidance and control system automatically positions cutting drums of double-ended longwall shearer so they follow coal seam. System determines location of upper interface between coal and shale and continuously adjusts cutting-drum positions, upward or downward, to track undulating interface. Objective to keep cutting edges as close as practicable to interface and thus extract as much coal as possible from seam.

  16. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    PubMed

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which could lessen its prevalence and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle and then cropped and resized it to uniform pixel dimensions. From the detected faces, the locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, half of which had been diagnosed as acromegaly by growth hormone suppression test. The best of the proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images

    PubMed Central

    Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.

    2010-01-01

    High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
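
    The nucleus-extraction step can be illustrated with a small sketch. This is not the authors' implementation: the paper's modified watershed is replaced here by a simpler distance-transform-plus-peak-labeling seeding scheme, and all sizes and thresholds are made up for the toy example.

```python
import numpy as np
from scipy import ndimage as ndi

def nucleus_markers(dna_channel, threshold, min_distance=2.0):
    """Seed extraction for watershed-style nucleus segmentation:
    threshold the DNA channel, take the Euclidean distance
    transform, and label its high plateaus as one marker per
    nucleus."""
    binary = dna_channel > threshold
    distance = ndi.distance_transform_edt(binary)
    markers, num_nuclei = ndi.label(distance >= min_distance)
    return markers, num_nuclei

# Two well-separated square "nuclei" on a dark background.
img = np.zeros((20, 20))
img[2:8, 2:8] = 1.0
img[12:18, 12:18] = 1.0
markers, n = nucleus_markers(img, threshold=0.5)
```

In a full pipeline, such markers would seed a watershed (or, as in the paper, a level-set evolution) that resolves touching cells.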

  18. A CityGML extension for traffic-sign objects that guides the automatic processing of data collected using Mobile Mapping technology

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; Riveiro, B.; Arias-Sánchez, P.; González-Jorge, H.; Martínez-Sánchez, J.

    2014-11-01

    The rapid evolution of integral schemes accounting for geometric and semantic data has been largely motivated by the advances in mobile laser scanning technology over the last decade; automation in data processing has also recently influenced the expansion of new model concepts. This paper reviews some important issues involved in the new paradigms of city 3D modelling: an interoperable schema for city 3D modelling (CityGML) and mobile mapping technology to provide the features composing the city model. The paper focuses on traffic signs, discussing their characterization using CityGML in order to ease the implementation of LiDAR technology in road management software, and analysing some limitations of the current technology in the automatic detection and classification of signs.

  19. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    PubMed Central

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-01-01

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365

  20. Robust perception algorithms for road and track autonomous following

    NASA Astrophysics Data System (ADS)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based subsystem to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm that removes, in real time, most of the disturbing shadows in images taken in natural environments, enhances contrast and lowers reflection effects due to films of water; (2) selection and improvement of two complementary algorithms (one segment-oriented, the other region-based); (3) development of a fusion process between both algorithms, which feeds a road model in real time with the best available data. Each step was developed so that the global perception process is reliable and safe: as an example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, together with the results obtained from the military acceptance tests that were passed, which triggered the next step: autonomous track following (named AUT3).

  1. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.

    PubMed

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-03-11

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.

  2. Mutation detection for inventories of traffic signs from street-level panoramic images

    NASA Astrophysics Data System (ADS)

    Hazelhoff, Lykele; Creusen, Ivo; De With, Peter H. N.

    2014-03-01

    Road safety is positively influenced by both adequate placement and optimal visibility of traffic signs. As their visibility degrades over time due to e.g. aging, vandalism, accidents and vegetation coverage, up-to-date inventories of traffic signs are highly attractive for preserving a high level of road safety. These inventories are performed in a semi-automatic fashion from street-level panoramic images, exploiting object detection and classification techniques. Next to performing inventories from scratch, these systems are also exploited for the efficient retrieval of situation changes by comparing the outcome of the automated system to a baseline inventory (e.g. one performed in a previous year). This allows for specific manual interaction with the found changes, while skipping all unchanged situations, resulting in a large efficiency gain. This work describes such a mutation detection approach, with special attention to re-identifying previously found signs. Preliminary results on a geographical area containing about 425 km of road show that 91.3% of the unchanged signs are re-identified, while the number of found differences equals about 35% of the number of baseline signs. Of these differences, about 50% correspond to physically changed traffic signs, next to false detections, misclassifications and missed signs. As a bonus, our approach directly yields the changed situations, which is beneficial for road sign maintenance.

  3. An on-the-road experiment into the thermal comfort of car seats.

    PubMed

    Cengiz, Tülin Gündüz; Babalik, Fatih C

    2007-05-01

    This paper presents an evaluation of thermal comfort in an extended road trial. Automobile seats play an important role in improving thermal comfort. In the assessment of thermal comfort in cars, subjective and objective measurements are generally used. Testing on the road is very difficult, but real traffic conditions directly affect the comfort level as well as the driver's experience; thus, real traffic situations should not be neglected in the evaluation of comfort. The aim of this study was to carry out an evaluation of thermal comfort with human subjects in an extended road trial. The 100% polyester seat covers used in the experiments were made of three different materials: velvet, jacquard and microfiber. All experiments were carried out on a sunny day with ten participants over 1 h, at an air temperature of 25 degrees C, in a Fiat Marea 2004 with automatic climate control. Skin temperature at eight points and skin wettedness at two points on the body were measured during the trials, and participants completed a 15-question questionnaire every 5 min. It can be concluded that there was negligible difference in participants' reported thermal sensation between the three seats; according to the objective measurements, all seat cover materials provide the same degree of thermal comfort. On the road, participants feel warmer around the waist than in any other area of the body. It is suggested that the effects of real traffic conditions must be accounted for in comfort predictions.

  4. Phases and interfaces from real space atomically resolved data: Physics-based deep data image analysis

    DOE PAGES

    Vasudevan, Rama K.; Ziatdinov, Maxim; Jesse, Stephen; ...

    2016-08-12

    Advances in electron and scanning probe microscopies have led to a wealth of atomically resolved structural and electronic data, often with ~1–10 pm precision. However, knowledge generation from such data requires the development of a physics-based robust framework to link the observed structures to macroscopic chemical and physical descriptors, including single phase regions, order parameter fields, interfaces, and structural and topological defects. Here, we develop an approach based on a synergy of sliding window Fourier transform to capture the local analog of traditional structure factors combined with blind linear unmixing of the resultant 4D data set. This deep data analysis is ideally matched to the underlying physics of the problem and allows reconstruction of the a priori unknown structure factors of individual components and their spatial localization. We demonstrate the principles of this approach using a synthetic data set and further apply it for extracting chemically and physically relevant information from electron and scanning tunneling microscopy data. Furthermore, this method promises to dramatically speed up crystallographic analysis in atomically resolved data, paving the road toward automatic local structure–property determinations in crystalline and quasi-ordered systems, as well as systems with competing structural and electronic order parameters.
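
    The sliding-window Fourier step can be sketched as follows. This is a minimal illustration, not the authors' code; the window size, step, and Hanning taper are assumptions made for the toy example.

```python
import numpy as np

def sliding_window_fft(image, win, step):
    """Local 2D FFT magnitudes over a sliding window, producing a 4D
    data set indexed by (window_row, window_col, ky, kx): the local
    analog of a structure factor."""
    h, w = image.shape
    rows = range(0, h - win + 1, step)
    cols = range(0, w - win + 1, step)
    taper = np.hanning(win)[:, None] * np.hanning(win)[None, :]
    out = np.empty((len(rows), len(cols), win, win))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            patch = image[r:r + win, c:c + win] * taper
            out[i, j] = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    return out

# Synthetic "lattice": a single spatial frequency along x.
x = np.arange(64)
img = np.cos(2 * np.pi * x / 8)[None, :] * np.ones((64, 1))
stack = sliding_window_fft(img, win=16, step=16)
```

In the paper's pipeline, the flattened k-space patterns of such a 4D stack would then feed a blind linear unmixing step to recover component structure factors and their abundance maps.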

  5. Large microplastic particles in sediments of tributaries of the River Thames, UK - Abundance, sources and methods for effective quantification.

    PubMed

    Horton, Alice A; Svendsen, Claus; Williams, Richard J; Spurgeon, David J; Lahive, Elma

    2017-01-15

    Sewage effluent input and population were chosen as predictors of microplastic presence in sediments at four sites in the River Thames basin (UK). Large microplastic particles (1 mm-4 mm) were extracted using a stepwise approach comprising visual extraction, flotation and identification using Raman spectroscopy. Microplastics were found at all four sites. One site had significantly higher numbers of microplastics than the others, with an average of 66 particles per 100 g, 91% of which were fragments. This site was downstream of a storm drain outfall receiving urban runoff; many of the fragments at this site were determined to be derived from thermoplastic road-surface marking paints. At the remaining three sites, fibres were the dominant particle type. The most common polymers identified included polypropylene, polyester and polyarylsulphone. This study describes two major new findings: the presence of microplastic particles in a UK freshwater system and the identification of road marking paints as a source of microplastics. This study is the first to quantify microplastics of any size in river sediments in the UK and links their presence to terrestrial sources including sewage and road marking paints. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. Doppler extraction with a digital VCO

    NASA Technical Reports Server (NTRS)

    Starner, E. R.; Nossen, E. J.

    1977-01-01

    Digitally controlled oscillator in phased-locked loop may be useful for data communications systems, or may be modified to serve as information extraction component of microwave or optical system for collision avoidance or automatic braking. Instrument is frequency-synthesizing device with output specified precisely by digital number programmed into frequency register.

  7. Variogram-based feature extraction for neural network recognition of logos

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.

    2003-03-01

    This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
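
    The core quantity behind such features can be sketched with a one-directional empirical semivariogram; the paper's exact feature construction is not specified here, so this is only an illustrative sketch.

```python
import numpy as np

def semivariogram(image, max_lag):
    """Empirical semivariogram along the horizontal axis:
    gamma(h) = 0.5 * mean((z(x + h) - z(x)) ** 2), the central
    quantity of the theory of regionalized variables."""
    gammas = []
    for h in range(1, max_lag + 1):
        diff = image[:, h:] - image[:, :-h]
        gammas.append(0.5 * np.mean(diff ** 2))
    return np.array(gammas)

# A horizontal ramp: pixel value equals its column index, so the
# semivariogram grows as 0.5 * h**2.
ramp = np.tile(np.arange(8.0), (8, 1))
features = semivariogram(ramp, 3)
```

A feature vector for logo recognition could then concatenate such values over several lags and directions before feeding a neural network classifier.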

  8. Pattern-Based Extraction of Argumentation from the Scientific Literature

    ERIC Educational Resources Information Center

    White, Elizabeth K.

    2010-01-01

    As the number of publications in the biomedical field continues its exponential increase, techniques for automatically summarizing information from this body of literature have become more diverse. In addition, the targets of summarization have become more subtle; initial work focused on extracting the factual assertions from full-text papers,…

  9. Automatic segmentation of mandible in panoramic x-ray.

    PubMed

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.
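
    The integral-projection idea behind the first step can be sketched as follows; this is illustrative only, since the actual algorithm operates on real panoramic x-rays and combines the projection with a modified Canny detector and morphological operators.

```python
import numpy as np

def sharpest_edge_row(xray):
    """Horizontal integral projection: sum intensities along each
    row, then return the row where the profile changes most sharply
    (a proxy for a strong horizontal border such as the superior
    border of the mandible)."""
    profile = xray.sum(axis=1)           # one value per row
    gradient = np.abs(np.diff(profile))  # row-to-row change
    return int(np.argmax(gradient))

# Synthetic "x-ray": dark above row 12, bright below, so the
# sharpest change sits between rows 11 and 12.
img = np.zeros((30, 40))
img[12:, :] = 1.0
row = sharpest_edge_row(img)
```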

  10. Automatic identification of land uses from ERTS-1 data obtained over Milwaukee, Wisconsin

    NASA Technical Reports Server (NTRS)

    Baumgardner, M. F.; Landgrebe, D. A. (Principal Investigator); Kramer, H. H.

    1972-01-01

    The author has identified the following significant results. Spectrally, thirteen classes of ground cover were identified within Milwaukee County: five classes of water, grassy open areas, beach, two classes of road, woods, suburban, inner city, and industry. A distinct concentric pattern of land use was identified in the county radiating outward from the central business district. The first ring has a principal feature, termed the inner city, which is indicative of the older part of the county. In the second ring, the land use becomes more complex, consisting of suburban areas, parks, and varied institutional features. The third general ring consists primarily of open, grassy land, with scattered residential subdivisions, wood lots, and small water bodies. The five classes of water identified suggest differences in depth, turbidity, and/or color. A number of major roads were identified. Other spectrally identifiable features included the larger county parks and larger cemeteries.

  11. New research opportunities for roadside safety barriers improvement

    NASA Astrophysics Data System (ADS)

    Cantisani, Giuseppe; Di Mascio, Paola; Polidori, Carlo

    2017-09-01

    Among the major topics regarding the protection of roads, restraint systems still represent a big opportunity to increase safety performance. When accidents happen, the infrastructure can in fact substantially contribute to reducing their consequences if its marginal spaces are well designed and/or effective restraint systems are installed there. Nevertheless, the basic concepts and technology of road safety barriers have not changed significantly over the last two decades. The paper proposes a new approach to the study of possible enhancements of the performance of restraint safety systems, using new materials and defining innovative design principles. In particular, roadside systems can be developed with regard to vehicle-barrier interaction, vehicle-oriented design (including low-mass and extremely low-mass vehicles), traffic suitability, user protection and working-width reduction. In addition, thanks to sensors embedded in the barriers, these systems are also expected to address new challenges related to the guidance of automatic vehicles and I2V communication.

  12. Driving Control for Electric Power Assisted Wheelchair Based on Regenerative Brake

    NASA Astrophysics Data System (ADS)

    Seki, Hirokazu; Takahashi, Kazuki; Tadakuma, Susumu

    This paper describes a novel safety driving control scheme for electric power assisted wheelchairs based on a regenerative braking system. The "electric power assisted wheelchair", which assists the driving force with electric motors, is expected to be widely used as a mobility support system for elderly and disabled people; however, safe and secure driving performance, especially on downhill roads, must be further improved because electric power assisted wheelchairs have no braking devices. The proposed control system automatically switches the driving mode from "assisting mode" to "braking mode" based on the wheelchair's velocity and the incline angle, and smoothly suppresses the wheelchair's acceleration through variable duty ratio control, in order to realize safe driving and improve ride quality. Experiments on practical roads and a subjective evaluation show the effectiveness of the proposed control system.
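
    The mode-switching logic described above can be sketched as follows; all thresholds and the linear duty-ratio ramp are hypothetical illustrations, not values from the paper.

```python
def select_mode(velocity, incline_deg, v_limit=1.5, angle_limit=3.0):
    """Switch between assisting and braking modes from the
    wheelchair's velocity and the road's incline angle
    (thresholds here are illustrative)."""
    if velocity > v_limit and incline_deg > angle_limit:
        return "braking"
    return "assisting"

def braking_duty_ratio(velocity, v_limit=1.5, v_max=3.0):
    """Variable duty ratio for the regenerative brake: ramp the
    ratio linearly with excess speed, clamped to [0, 1], so that
    deceleration builds up smoothly rather than abruptly."""
    ratio = (velocity - v_limit) / (v_max - v_limit)
    return min(max(ratio, 0.0), 1.0)
```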

  13. With Geospatial in Path of Smart City

    NASA Astrophysics Data System (ADS)

    Homainejad, A. S.

    2015-04-01

    With the growth of urbanisation, there is a requirement to use the leverage of the smart city in city management. The core of the smart city is Information and Communication Technologies (ICT), and one of its elements is smart transport, which includes sustainable transport and Intelligent Transport Systems (ITS). Cities, and especially megacities, face an urgent transport challenge in traffic management. Geospatial technology can provide reliable tools for monitoring and coordinating traffic. In this paper a method for monitoring and managing ongoing road traffic using aerial images and CCTV is addressed. In this method, the road network is initially extracted, geo-referenced and captured in a 3D model. The aim is to detect and geo-reference any vehicles on the road from the images in order to assess the density and volume of vehicles on the roads. If a traffic jam is recognised from the images, an alternative route is suggested to ease it. In a separate test, a road network was replicated in the computer and simulated traffic was implemented in order to assess traffic management during peak time using this method.

  14. A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment

    PubMed Central

    Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng

    2015-01-01

    The quick and accurate understanding of the ambient environment, which is composed of road curbs, vehicles, pedestrians, etc., is critical for developing intelligent vehicles. The road elements included in this work are road curbs and dynamic road obstacles that directly affect the drivable area. A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article. First, ground segmentation is performed via multi-feature extraction of the raw data grabbed by the Velodyne LIDAR to satisfy the requirement of online environment modeling. Curbs and dynamic road obstacles are detected and tracked in different manners. Curves are fitted to curb points, and points are clustered into bundles whose form and kinematic parameters are calculated. The Kalman filter is used to track dynamic obstacles, whereas the snake model is employed for curbs. Results indicate that the proposed framework is robust under various environments and satisfies the requirements for online processing. PMID:26404290
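
    The Kalman-filter tracking of dynamic obstacles can be illustrated with a minimal 1D constant-velocity sketch. The paper tracks clustered obstacle bundles in the 2D ground plane; the state model, time step and noise parameters below are made-up illustrations.

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=0.25):
    """Track a 1D obstacle with a constant-velocity Kalman filter:
    predict with the motion model, then correct with each position
    measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# An obstacle moving at constant speed; the filter settles onto
# the true trajectory after a short transient.
truth = [0.05 * k for k in range(50)]
est = kalman_track(truth)
```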

  15. Road-Aided Ground Slowly Moving Target 2D Motion Estimation for Single-Channel Synthetic Aperture Radar.

    PubMed

    Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian

    2016-03-16

    To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
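
    The slope-from-polynomial-fit step can be sketched as follows; the fit degree and the coordinates are assumptions for illustration, not the paper's values.

```python
import numpy as np

def road_slope(xs, ys, degree=2):
    """Fit the extracted road centerline with a polynomial and
    return its local slope dy/dx at each sample point."""
    coeffs = np.polyfit(xs, ys, degree)
    return np.polyval(np.polyder(coeffs), xs)

# A straight road segment with slope 0.5 in image coordinates.
xs = np.linspace(0.0, 10.0, 20)
ys = 0.5 * xs + 1.0
slopes = road_slope(xs, ys)
```

In the paper, the azimuth velocity is then estimated from this slope together with the radial velocity obtained from the target's offset distance.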

  16. Spectral-analysis-based extraction of land disturbances arising from oil and gas development in diverse landscapes

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Lantz, Nicholas; Guindon, Bert; Jiao, Xianfen

    2017-01-01

    Accurate and frequent monitoring of land surface changes arising from oil and gas exploration and extraction is a key requirement for the responsible and sustainable development of these resources. Petroleum deposits typically extend over large geographic regions, but much of the infrastructure required for oil and gas recovery takes the form of numerous small-scale features (e.g., well sites, access roads, etc.) scattered over the landscape. Increasing exploitation of oil and gas deposits will increase the presence of these disturbances in heavily populated regions. An object-based approach is proposed that utilizes RapidEye satellite imagery to delineate well sites and related access roads in diverse, complex landscapes where land surface changes also arise from other human activities, such as forest logging and agriculture. A simplified object-based change vector approach, adaptable to operational use, is introduced to identify disturbances on land based on red-green spectral response and the spatial attributes of candidate object size and proximity to roads. The techniques were tested with RapidEye multitemporal imagery at two sites in Alberta, Canada: one a predominantly natural forest landscape and the other dominated by intensive agricultural activities. Accuracies of 84% and 73%, respectively, were achieved for the identification of well-site and access-road infrastructure at the two sites based on fully automated processing. Limited manual relabeling of selected image segments can improve these accuracies to 95%.

  17. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and input to age-based multiple linear regression models to predict independently collected criterion expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and developmental age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means of assessing children's expressive vocal development. More research remains to identify specific mechanisms of operation.
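
    The modeling chain (phone frequencies, principal components, multiple linear regression) can be sketched on toy data. Everything below, including the synthetic "frequencies" and the number of retained components, is illustrative rather than LENA's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: rows are recordings, columns are phone/biphone
# frequencies; column 0 carries most of the variance and drives the
# criterion score.
X = rng.normal(size=(40, 12))
X[:, 0] *= 5.0
scores = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=40)

# Reduce frequencies to principal components via SVD of the
# centered matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt[:3].T              # keep the top 3 components

# Multiple linear regression of the criterion scores on the
# components (with an intercept column).
A = np.column_stack([components, np.ones(len(scores))])
beta, *_ = np.linalg.lstsq(A, scores, rcond=None)
predicted = A @ beta
```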

  18. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters such as the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data were analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
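
    The edge-finding and fitting procedure can be sketched as follows. This is an illustrative reconstruction, not the code described in the record: the synthetic step spectrum, the edge-suppression window, and the element energies are all made up.

```python
import numpy as np

def calibrate(spectrum, known_energies):
    """Find the trailing edges of the standard-sample peaks via the
    first derivative, then fit a first-order polynomial (gain,
    offset) for the channel-to-energy relation."""
    deriv = np.diff(spectrum.astype(float))
    work = deriv.copy()
    edges = []
    for _ in known_energies:
        ch = int(np.argmin(work))          # steepest drop = trailing edge
        edges.append(ch)
        work[max(ch - 5, 0):ch + 5] = 0.0  # suppress it, find the next one
    gain, offset = np.polyfit(sorted(edges), sorted(known_energies), 1)
    return gain, offset

# Synthetic step spectrum: three plateaus dropping at channels
# 100, 200 and 300 (stand-ins for the Al, Ti and Ta edges, in
# order of increasing backscattered energy).
spec = np.zeros(400)
spec[:100], spec[100:200], spec[200:300] = 300, 200, 100
gain, offset = calibrate(spec, [1.0, 2.0, 3.0])
```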

  19. Detection of exudates in fundus images using a Markovian segmentation model.

    PubMed

    Harangi, Balazs; Hajdu, Andras

    2014-01-01

    Diabetic retinopathy (DR) is one of the most common causes of vision loss in developed countries. In the early stage of DR, signs such as exudates appear in retinal images. An automatic screening system must be capable of detecting these signs properly so that treatment of patients may begin in time. The appearance of exudates shows rich variety in shape and size, making automatic detection more challenging. We propose an approach for the automatic segmentation of exudates consisting of a candidate extraction step followed by exact contour detection and region-wise classification. More specifically, we extract possible exudate candidates using grayscale morphology, and their proper shape is determined by a Markovian segmentation model considering edge information. Finally, we label the candidates as true or false by an optimally adjusted SVM classifier. For testing purposes, we considered the publicly available DiaretDB1 database, on which the proposed method outperformed several state-of-the-art exudate detectors.
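
    The grayscale-morphology candidate extraction can be sketched with a white top-hat transform. This is an illustrative stand-in: the structuring-element size and threshold are assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage as ndi

def exudate_candidates(green_channel, size=5, threshold=0.3):
    """Candidate extraction with grayscale morphology: a white
    top-hat (image minus its grayscale opening) keeps bright
    structures smaller than the structuring element; thresholding
    the result yields the candidate mask."""
    opened = ndi.grey_opening(green_channel, size=(size, size))
    return (green_channel - opened) > threshold

# A small bright blob survives the top-hat; a large bright region
# is reconstructed by the opening and therefore suppressed.
img = np.zeros((16, 16))
img[4:6, 4:6] = 1.0
img[8:16, 8:16] = 1.0
mask = exudate_candidates(img)
```

In the full method, each surviving candidate region would then be refined by the Markovian contour model and classified by the SVM.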

  20. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost. PMID:28588611
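The core budget question — which candidate facts to send to the crowd — can be illustrated with a simplified selection rule: spend the labelling budget on the facts whose automatic confidence is most uncertain. This is an illustration of the selection idea only, not the paper's rank-based or graph-based algorithms; the facts and confidences are hypothetical.

```python
# Candidate facts with confidences from an automatic extractor (hypothetical).
candidates = {
    ("Paris", "capitalOf", "France"): 0.98,
    ("Rome", "capitalOf", "Spain"): 0.55,
    ("Berlin", "locatedIn", "Germany"): 0.90,
    ("Oslo", "capitalOf", "Sweden"): 0.47,
}

def select_for_crowd(cands, budget):
    """Pick the facts with confidence closest to 0.5, where a crowd
    answer is most informative per labelling dollar."""
    ranked = sorted(cands, key=lambda fact: abs(cands[fact] - 0.5))
    return ranked[:budget]

picked = select_for_crowd(candidates, budget=2)
print(picked)   # the two most uncertain candidate facts
```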

  1. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is a prerequisite for biometric measurement and diagnosis during ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by a Fisher vector (FV). A Fisher network with a multi-layer design is also developed to extract spatial information and boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by a support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed FV-based method over traditional methods.
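The RootSIFT transform itself is simple and well documented elsewhere: L1-normalize each SIFT descriptor and take the element-wise square root, so that Euclidean comparison of the results approximates the Hellinger kernel on the originals. A NumPy sketch (the descriptor values are arbitrary stand-ins for 128-D SIFT vectors):

```python
import numpy as np

def root_sift(desc, eps=1e-12):
    """RootSIFT: L1-normalize, then element-wise square root.
    The output is automatically L2-normalized."""
    desc = np.asarray(desc, dtype=float)
    return np.sqrt(desc / (np.abs(desc).sum() + eps))

# Arbitrary non-negative 8-bin descriptor standing in for a SIFT vector.
d = np.array([3.0, 0.0, 1.0, 5.0, 2.0, 0.0, 4.0, 1.0])
r = root_sift(d)
print(np.linalg.norm(r))   # ≈ 1.0: unit L2 norm by construction
```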

  2. Note: An automated image analysis method for high-throughput classification of surface-bound bacterial cell motions.

    PubMed

    Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng

    2015-12-01

    We present a Single-Cell Motion Characterization System (SiCMoCS) that automatically extracts bacterial cell morphological features from microscope images and uses those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify motion types for motile rod-shaped bacterial cells, enabling rapid and quantitative analysis of various types of bacterial motion.
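Morphological features of a rod-shaped cell, such as orientation and elongation, are commonly derived from the second central moments of its binary mask. The NumPy sketch below shows that standard computation on a synthetic blob; it is not SiCMoCS's exact feature set:

```python
import numpy as np

def shape_features(mask):
    """Orientation (radians) and elongation of a binary blob via image moments."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()        # second central moments
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Eigenvalues of the covariance matrix give major/minor axis spread.
    common = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + common
    lam2 = (mu20 + mu02) / 2 - common
    return theta, np.sqrt(lam1 / lam2)

# Synthetic horizontal "rod cell": 3 pixels tall, 11 pixels wide.
rod = np.zeros((9, 15), dtype=bool)
rod[3:6, 2:13] = True
theta, elong = shape_features(rod)
print(theta, elong)   # orientation ≈ 0 rad, elongation well above 1
```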

  3. Kinase Pathway Database: An Integrated Protein-Kinase and NLP-Based Protein-Interaction Resource

    PubMed Central

    Koike, Asako; Kobayashi, Yoshiyuki; Takagi, Toshihisa

    2003-01-01

    Protein kinases play a crucial role in the regulation of cellular functions. Various kinds of information about these molecules are important for understanding signaling pathways and organism characteristics. We have developed the Kinase Pathway Database, an integrated database involving major completely sequenced eukaryotes. It contains the classification of protein kinases and their functional conservation, ortholog tables among species, protein–protein, protein–gene, and protein–compound interaction data, domain information, and structural information. It also provides an automatic pathway graphic image interface. The protein, gene, and compound interactions are automatically extracted from abstracts for all genes and proteins by natural-language processing (NLP). The method of automatic extraction uses phrase patterns and the GENA protein, gene, and compound name dictionary, which was developed by our group. With this database, pathways are easily compared among species using data with more than 47,000 protein interactions and protein kinase ortholog tables. The database is available for querying and browsing at http://kinasedb.ontology.ims.u-tokyo.ac.jp/. PMID:12799355

  4. Unsupervised Extraction of Diagnosis Codes from EMRs Using Knowledge-Based and Extractive Text Summarization Techniques

    PubMed Central

    Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel

    2017-01-01

    Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts at automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example-based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long-range, non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227

  5. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
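The idea that closed contours in the gradient become separate segments can be illustrated without the full watershed machinery: label the connected non-edge regions of a binary edge map, so each area enclosed by a closed contour receives its own label. This is a pure-NumPy stand-in for the watershed-on-Canny-gradient step, not the patented method itself:

```python
import numpy as np
from collections import deque

def label_regions(edges):
    """4-connected labelling of non-edge pixels: each region enclosed by a
    closed contour (and the background) gets a distinct integer label."""
    h, w = edges.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] or labels[sy, sx]:
                continue                      # edge pixel or already labelled
            current += 1
            q = deque([(sy, sx)])
            labels[sy, sx] = current
            while q:                          # flood-fill one region
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not edges[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        q.append((ny, nx))
    return labels, current

# A closed square contour, like a small rock's outline in the gradient image.
edge_map = np.zeros((7, 7), dtype=bool)
edge_map[1, 1:6] = edge_map[5, 1:6] = True
edge_map[1:6, 1] = edge_map[1:6, 5] = True
labels, n = label_regions(edge_map)
print(n)   # 2 regions: the enclosed interior and the outside background
```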

  6. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by providing a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% was achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  7. 17. ROOM 32, SHOWING THE ORIGINAL LOCATION OF THE MASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. ROOM 32, SHOWING THE ORIGINAL LOCATION OF THE MASS SPECTROMETER AND EXTRACTION LINES, LOOKING SOUTH. - U.S. Geological Survey, Rock Magnetics Laboratory, 345 Middlefield Road, Menlo Park, San Mateo County, CA

  8. Computerized Interpretation of Dynamic Breast MRI

    DTIC Science & Technology

    2006-05-01

    correction, tumor segmentation, extraction of computerized features that help distinguish between benign and malignant lesions, and classification. Our ... for assessing tumor extent in 3D. The primary feature used for 3D tumor segmentation is the postcontrast enhancement vector. Tumor segmentation is a ... Appendix B. 4. Investigation of methods for automatic tumor segmentation. We developed an automatic method for assessing tumor extent in 3D. The

  9. An automatic method for retrieving and indexing catalogues of biomedical courses.

    PubMed

    Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose

    2008-11-06

    Although extensive information about Biomedical Informatics education and courses is available on different websites, this information is usually not exhaustive and is difficult to keep up to date. We propose a new methodology based on information retrieval techniques for automatically extracting, indexing and retrieving information about educational offers. A web application has been developed to make such information available in an inventory of courses and educational offers.

  10. Extraction of latent images from printed media

    NASA Astrophysics Data System (ADS)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it allows latent images produced by different texture variations to be extracted. Experimental results showing the performance of the method against another known system for latent image extraction are given.
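A Gabor filter bank of the kind described can be sketched by sampling kernels over orientations and frequencies; the parameter values below are illustrative defaults, not those selected by the paper's adaptive construction procedure:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, gamma=1.0, size=15):
    """Real (cosine) part of a 2-D Gabor kernel: a Gaussian envelope
    modulating a sinusoid along the rotated x-axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Bank over 4 orientations and 2 spatial frequencies (8 kernels total);
# convolving an image with each kernel yields one feature image per kernel.
bank = [gabor_kernel(f, t)
        for f in (0.1, 0.25)
        for t in np.deg2rad([0, 45, 90, 135])]
print(len(bank), bank[0].shape)
```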

  11. The impact of roads on the demography of grizzly bears in Alberta.

    PubMed

    Boulanger, John; Stenhouse, Gordon B

    2014-01-01

    One of the principal factors that have reduced grizzly bear populations has been the creation of human access into grizzly bear habitat by roads built for resource extraction. Past studies have documented mortality and distributional changes of bears relative to roads, but none have attempted to estimate the direct demographic impact of roads in terms of survival rates, reproductive rates, and the interaction of the reproductive state of female bears with survival rate. We applied a combination of survival and reproductive models to estimate demographic parameters for threatened grizzly bear populations in Alberta. Instead of attempting to estimate mean trend, we explored factors that caused biological and spatial variation in population trend. We found that sex and age class survival was related to road density, with subadult bears being most vulnerable to road-based mortality. A multi-state reproduction model found that females accompanied by cubs of the year and/or yearling cubs had lower survival rates compared to females with two-year-olds or no cubs. A demographic model found strong spatial gradients in population trend based upon road density. Threshold road densities needed to ensure population stability were estimated to further refine targets for population recovery of grizzly bears in Alberta. Models that considered lowered survival of females with dependent offspring resulted in lower road density thresholds to ensure stable bear populations. Our results demonstrate likely spatial variation in population trend and provide an example of how demographic analysis can be used to refine and direct conservation measures for threatened species.

  12. The Impact of Roads on the Demography of Grizzly Bears in Alberta

    PubMed Central

    2014-01-01

    One of the principal factors that have reduced grizzly bear populations has been the creation of human access into grizzly bear habitat by roads built for resource extraction. Past studies have documented mortality and distributional changes of bears relative to roads, but none have attempted to estimate the direct demographic impact of roads in terms of survival rates, reproductive rates, and the interaction of the reproductive state of female bears with survival rate. We applied a combination of survival and reproductive models to estimate demographic parameters for threatened grizzly bear populations in Alberta. Instead of attempting to estimate mean trend, we explored factors that caused biological and spatial variation in population trend. We found that sex and age class survival was related to road density, with subadult bears being most vulnerable to road-based mortality. A multi-state reproduction model found that females accompanied by cubs of the year and/or yearling cubs had lower survival rates compared to females with two-year-olds or no cubs. A demographic model found strong spatial gradients in population trend based upon road density. Threshold road densities needed to ensure population stability were estimated to further refine targets for population recovery of grizzly bears in Alberta. Models that considered lowered survival of females with dependent offspring resulted in lower road density thresholds to ensure stable bear populations. Our results demonstrate likely spatial variation in population trend and provide an example of how demographic analysis can be used to refine and direct conservation measures for threatened species. PMID:25532035

  13. Automatic extraction of three-dimensional thoracic aorta geometric model from phase contrast MRI for morphometric and hemodynamic characterization.

    PubMed

    Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna

    2016-02-01

    To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on Level Set, was configured and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model comprises the vessel surface mesh and its centerline; the models obtained from the three different datasets were also compared in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method proved effective on PCMRI data for providing a 3D geometric model of the TA, supporting morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.

  14. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
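At the heart of the cross-correlation matching step is the plain normalized cross-correlation (NCC) score between image patches. The NumPy sketch below finds a template's position in a search image by exhaustive NCC; the images are synthetic, and the real MIG3C adds multi-image geometric constraints on top of this basic operation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def match_template(image, template):
    """Exhaustively search for the offset with the highest NCC score."""
    th, tw = template.shape
    best, best_pos = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((40, 40))          # stand-in for an aerial image
tmpl = img[10:15, 20:25].copy()     # patch cut from a known location
pos, score = match_template(img, tmpl)
print(pos, round(score, 3))         # recovers the (10, 20) offset
```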

  15. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D motion of the head from feature points. The proposed algorithm extracts the human facial region by color segmentation together with intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time spent computing energy functions. Another part of the FAPs, the 3D rigid head motion vectors, is estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps increase the accuracy of 3D rigid object motion estimation.

  16. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.

  17. Fast title extraction method for business documents

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Naoi, Satoshi

    1997-04-01

    Conventional electronic document filing systems are inconvenient because the user must specify the keywords in each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for Japanese documents. To develop a practical electronic document filing system, we focused on the extraction of keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are evaluated for title similarity using rating points. These points cover four items: character string size, position of character strings, relative position among character strings, and string attributes. Finally, the character string with the highest rating is selected as the title area. The character recognition process is then carried out on the selected area. It is fast because it must recognize only a small number of patterns in the restricted area, not throughout the entire document. The mean performance of this method is an accuracy of about 91 percent and a processing time of 1.8 s in an examination of 100 Japanese business documents.

  18. Fast and automatic algorithm for optic disc extraction in retinal images using principle-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for destructive illumination changes and enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the confounding factors and pave the way for extracting the ON region exactly. Then, we detect the ON region in the retinal images using morphology operators based on geodesic conversions, by applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients and a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.

  19. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities from narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Automatic extraction of relations between medical concepts in clinical texts

    PubMed Central

    Harabagiu, Sanda; Roberts, Kirk

    2011-01-01

    Objective A supervised machine learning approach to discover relations between medical problems, treatments, and tests mentioned in electronic medical records. Materials and methods A single support vector machine classifier was used to identify relations between concepts and to assign their semantic type. Several resources such as Wikipedia, WordNet, General Inquirer, and a relation similarity metric inform the classifier. Results The techniques reported in this paper were evaluated in the 2010 i2b2 Challenge and obtained the highest F1 score for the relation extraction task. When gold standard data for concepts and assertions were available, F1 was 73.7, precision was 72.0, and recall was 75.3. F1 is defined as 2*Precision*Recall/(Precision+Recall). Alternatively, when concepts and assertions were discovered automatically, F1 was 48.4, precision was 57.6, and recall was 41.7. Discussion Although a rich set of features was developed for the classifiers presented in this paper, little knowledge mining was performed from medical ontologies such as those found in UMLS. Future studies should incorporate features extracted from such knowledge sources, which we expect to further improve the results. Moreover, each relation discovery was treated independently. Joint classification of relations may further improve the quality of results. Also, joint learning of the discovery of concepts, assertions, and relations may also improve the results of automatic relation extraction. Conclusion Lexical and contextual features proved to be very important in relation extraction from medical texts. When they are not available to the classifier, the F1 score decreases by 3.7%. In addition, features based on similarity contribute to a decrease of 1.1% when they are not available. PMID:21846787
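The F1 definition quoted in the abstract is easy to check numerically. Using the reported precision and recall values, a quick computation lands within rounding of the reported scores (the small gap in the first case comes from precision and recall themselves being rounded):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Gold-standard setting: precision 72.0, recall 75.3 -> about 73.6,
# agreeing with the reported F1 of 73.7 up to rounding of the inputs.
print(round(f1(72.0, 75.3), 1))

# Fully automatic setting: precision 57.6, recall 41.7 -> 48.4, as reported.
print(round(f1(57.6, 41.7), 1))
```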

  1. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors for natural objects is one of the methods available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  2. [Comparison of MPure-12 Automatic Nucleic Acid Purification and Chelex-100 Method].

    PubMed

    Shen, X; Li, M; Wang, Y L; Chen, Y L; Lin, Y; Zhao, Z M; Que, T Z

    2017-04-01

    To explore the forensic application value of MPure-12 automatic nucleic acid purification (MPure-12 method) for DNA extraction by extracting and typing DNA from bloodstains and various kinds of biological samples with different DNA contents. Nine types of biological samples, such as bloodstains, semen stains and saliva, were collected. DNA was extracted using the MPure-12 method and the Chelex-100 method, followed by PCR amplification and electrophoresis to obtain STR profiles. Samples such as hair root, chutty, cigarette butt, muscular tissue, saliva stain, bloodstain and semen stain were typed successfully by the MPure-12 method. Partial alleles were lacking in the saliva samples, and the genotyping of contact swabs was unsatisfactory. Additionally, all of the bloodstains (20 μL, 15 μL, 10 μL, 5 μL, 1 μL) showed good typing results using the Chelex-100 method, but loss of alleles occurred at the 1 μL blood volume with the MPure-12 method. The MPure-12 method is suitable for DNA extraction from blood samples above a certain concentration, while the Chelex-100 method may be better for the extraction of trace blood samples. The instrument used for nucleic acid extraction has the advantages of simple operation, rapidity, high extraction efficiency, a high rate of reportable STR profiles and lower man-made contamination. Copyright© by the Editorial Department of Journal of Forensic Medicine

  3. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    PubMed

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; these values can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
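Automatic threshold selection of the kind performed by the scanner software is typically done with a histogram-based criterion such as Otsu's method; the vendor's exact algorithm is not specified in the abstract, so the NumPy sketch below is a generic stand-in:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold bin that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist, _ = np.histogram(img, bins=nbins, range=(0, nbins))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 weight up to each bin
    mu = np.cumsum(p * np.arange(nbins))    # cumulative first moment
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold.
    sigma_b = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0) + 1e-12)
    return int(np.argmax(sigma_b))

# Toy bimodal "slice": dark material around 10, bright canal around 200.
img = np.array([10] * 50 + [200] * 50)
t = otsu_threshold(img)
print(t, (img > t).sum())   # pixels above the threshold form the bright class
```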

  4. Automatic Picking of Foraminifera: Design of the Foraminifera Image Recognition and Sorting Tool (FIRST) Prototype and Results of the Image Classification Scheme

    NASA Astrophysics Data System (ADS)

    de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.

    2017-12-01

Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomical skills that are not easily taught. The automatic recognition and extraction of foraminifera would therefore greatly help paleoceanographers overcome these issues. Recent advances in automatic image classification using machine learning open the way to the automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000 µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified, and specimens of interest are sorted either for Individual Foraminifera Analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using Convolutional Neural Networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images, corresponding to 15 and 25 morphological classes respectively. Using a CNN with a residual topology (ResNet), we achieve over 95% correct classification on each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber abundance and the fragmentation index (r² = 0.58 and 0.88, respectively). The FIRST prototype yields promising results for high-resolution paleoceanographic and evolutionary studies.
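The downcore agreement figures quoted above (r² = 0.58 and 0.88) are squared correlations between machine-derived and human counts across core depths. A minimal sketch of that validation step, with assumed inputs (the paper's exact statistic is not specified beyond r²):

```python
import numpy as np

def r_squared(machine_counts, human_counts):
    """Squared Pearson correlation between automated and human downcore counts."""
    x = np.asarray(machine_counts, dtype=float)
    y = np.asarray(human_counts, dtype=float)
    r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
    return r ** 2
```

Applied per depth horizon, this scores how well the classifier reproduces the human-counted downcore variability of a given morphotype or index.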

  5. By the People, for the People: the Crowdsourcing of "STREETBUMP": AN Automatic Pothole Mapping App

    NASA Astrophysics Data System (ADS)

    Carrera, F.; Guerin, S.; Thorp, J. B.

    2013-05-01

This paper traces the genesis and development of StreetBump, a smartphone application to map the location of potholes in Boston, Massachusetts. StreetBump belongs to a special category of "subliminal" crowdsourcing mobile applications that turn humans into sensors. Once started, it automatically collects road condition information without any human intervention, using the accelerometers and GPS inside smartphones. The StreetBump app evolved from a hardware device designed and built by WPI's City Lab starting in 2003, which was originally intended to measure and map boat wakes in the city of Venice, Italy (Chiu, 2004). A second version of the custom hardware with onboard GPS and accelerometers was adapted for use in Boston, Massachusetts, to map road damage (potholes) in 2006 (Angelini, 2006). In 2009, Prof. Carrera proposed to the newly created office of New Urban Mechanics in the City of Boston to migrate the concept to smartphones, based on the Android platform. The first prototype of the mobile app, called StreetBump, was released in 2010 by the authors (Harmon, 2010). In 2011, the app provided the basis for a worldwide Innocentive competition to develop the best post-processing algorithms to identify real potholes vs. other phone bumps (Moskowitz, 2011). Starting in 2012, the City of Boston has been using a subsequent version of the app to operationally manage road repairs based on the data collected by StreetBump. The novelty of this app is not purely technological, but lies also in the top-to-bottom crowdsourcing of all its components. The app was designed to rely on the crowd to confirm the presence of damage through repeat hits (or lack thereof) as more users travel the same roads over time. Moreover, the non-trivial post-processing of the StreetBump data was itself the subject of a crowdsourced competition through an Innocentive challenge for the best algorithm.
The release of the StreetBump code as open-source allowed the development of the final version of the app now used on a daily basis by the Department of Public Works in Boston, thus making it perhaps the first example of an app that was crowdsourced "from soup to nuts".
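The "humans as sensors" idea above, flagging accelerometer spikes at GPS fixes and confirming damage through repeat hits, can be sketched as follows. This is an illustrative simplification, not StreetBump's actual post-processing (which was itself the subject of the Innocentive challenge); the acceleration threshold, radius and hit-count values are assumptions:

```python
import math

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def detect_bumps(samples, accel_threshold=3.0):
    """Flag GPS fixes whose vertical acceleration deviates strongly from gravity.
    samples: iterable of (lat, lon, z_accel_m_s2). Threshold is an assumed value."""
    g = 9.81
    return [(lat, lon) for lat, lon, z in samples if abs(z - g) > accel_threshold]

def confirm_by_repeat_hits(hits, radius_m=10.0, min_hits=3):
    """Keep only locations reported by at least min_hits detections within radius_m,
    mimicking crowd confirmation as more users travel the same road."""
    return [(lat1, lon1) for lat1, lon1 in hits
            if sum(1 for lat2, lon2 in hits
                   if haversine(lat1, lon1, lat2, lon2) <= radius_m) >= min_hits]
```

A single spike may be a phone dropped on a seat; repeated spikes at the same coordinates across many trips are much more likely to be a real pothole, which is the crowd-confirmation logic the abstract describes.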

  6. Multimodal Teaching Analytics: Automated Extraction of Orchestration Graphs from Wearable Sensor Data

    ERIC Educational Resources Information Center

    Prieto, L. P.; Sharma, K.; Kidzinski, L.; Rodríguez-Triana, M. J.; Dillenbourg, P.

    2018-01-01

    The pedagogical modelling of everyday classroom practice is an interesting kind of evidence, both for educational research and teachers' own professional development. This paper explores the usage of wearable sensors and machine learning techniques to automatically extract orchestration graphs (teaching activities and their social plane over time)…

  7. Keyword Extraction from Arabic Legal Texts

    ERIC Educational Resources Information Center

    Rammal, Mahmoud; Bahsoun, Zeinab; Al Achkar Jabbour, Mona

    2015-01-01

    Purpose: The purpose of this paper is to apply local grammar (LG) to develop an indexing system which automatically extracts keywords from titles of Lebanese official journals. Design/methodology/approach: To build LG for our system, the first word that plays the determinant role in understanding the meaning of a title is analyzed and grouped as…

  8. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sampled Consensus-Initial Alignment (SAC-IA) algorithm and refining it with the Iterative Closest Point (ICP) algorithm. Finally, the approaches used for foot-parameter extraction and the overall system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extracting algorithm. Compared with traditional measurement methods, the system is more portable, accurate and robust.
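The coarse-then-fine registration strategy described above (SAC-IA followed by ICP refinement) can be illustrated with a minimal point-to-point ICP loop. This is a sketch of the general technique, not the authors' implementation; the SAC-IA initial alignment is omitted, and the brute-force nearest-neighbour search is only suitable for small clouds:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch algorithm: least-squares rotation R and translation t with R@p+t ~ dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid fitting."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]          # closest dst point for each cur point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

ICP only converges to the correct pose when started close to it, which is exactly why a global coarse aligner such as SAC-IA precedes it in the pipeline the abstract describes.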

  9. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  10. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    NASA Astrophysics Data System (ADS)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for the man-made objects with the use of symmetry and contrast features. The graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of the adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
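Manifold ranking over a graph of superpixels has a well-known closed-form solution f* = (I − αS)⁻¹y, where S is the symmetrically normalized affinity matrix and y carries the initial query scores. A minimal sketch on a toy graph (the paper's multi-feature edge weights from colour, texture and main direction are not reproduced here; α is the usual propagation parameter):

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking: f* = (I - alpha*S)^(-1) y,
    with S = D^(-1/2) W D^(-1/2) the normalized affinity of graph W."""
    d = W.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(np.where(d == 0, 1, d)))
    S = Dinv @ W @ Dinv
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)
```

Nodes well connected to the query (here, superpixels with high a-priori man-made scores) receive high ranking values, which is what the ranking map thresholded in the final segmentation step encodes.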

  11. An approach for automatic classification of grouper vocalizations with passive acoustic monitoring.

    PubMed

    Ibrahim, Ali K; Chérubin, Laurent M; Zhuang, Hanqi; Schärer Umpierre, Michelle T; Dalgleish, Fraser; Erdol, Nurgun; Ouyang, B; Dalgleish, A

    2018-02-01

Groupers, a family of marine fishes, produce distinct vocalizations associated with their reproductive behavior during spawning aggregation. These low-frequency sounds (50-350 Hz) consist of a series of pulses repeated at a variable rate. In this paper, an approach is presented for the automatic classification of grouper vocalizations from ambient sounds recorded in situ with fixed hydrophones, based on weighted features and a sparse classifier. Grouper sounds were labeled initially by humans for training and testing various feature extraction and classification methods. In the feature extraction phase, four types of features were extracted from the sounds produced by groupers. Once the sound features were extracted, three types of representative classifiers were applied to categorize the species that produced these sounds. Experimental results showed that the best combination, the weighted mel-frequency cepstral coefficient feature extractor with the sparse classifier, achieved 82.7% identification accuracy. The proposed algorithm has been implemented on an autonomous platform (wave glider) for real-time detection and classification of grouper vocalizations.
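To give a flavor of the feature extraction phase, the sketch below computes per-frame energy and spectral centroid restricted to the 50-350 Hz grouper call band. These are illustrative hand-crafted features under assumed frame sizes, not the paper's weighted mel-frequency cepstral coefficients:

```python
import numpy as np

def band_features(signal, fs, frame_len=1024, hop=512, band=(50.0, 350.0)):
    """Per-frame (energy, spectral centroid) restricted to the call band.
    Frame length, hop and band edges are illustrative assumptions."""
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        p = power[mask]
        energy = p.sum()
        centroid = (freqs[mask] * p).sum() / energy if energy > 0 else 0.0
        feats.append((energy, centroid))
    return np.array(feats)
```

Framing the hydrophone recording and summarizing each frame's spectral content in the band of interest is the common first step before any of the classifiers the abstract compares can be applied.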

  12. Split Flow Online Solid-Phase Extraction Coupled with Inductively Coupled Plasma Mass Spectrometry System for One-Shot Data Acquisition of Quantification and Recovery Efficiency.

    PubMed

    Furukawa, Makoto; Takagai, Yoshitaka

    2016-10-04

Online solid-phase extraction (SPE) coupled with inductively coupled plasma mass spectrometry (ICPMS) is a useful tool for automatic sequential analysis. However, it cannot simultaneously quantify the analytical targets and their recovery percentages (R%) in one-shot samples. We propose a system that acquires both kinds of data in a single sample injection. The flowline of the online solid-phase extraction is divided into a main flow and a split flow. The split flow line (i.e., a bypass line circumventing the SPE column) was placed on the main flow line. Under program-controlled switching of the automatic valve, the ICPMS sequentially measures the targets in a sample before and after column preconcentration, and thereby determines both the target concentrations and the R% on the SPE column. This paper describes the system development and two demonstrations of its analytical significance: determination of ultratrace amounts of radioactive strontium (⁹⁰Sr) using a commercial Sr-trap resin, and multielement adsorbability on the SPE column. This system is applicable to other flow analyses and detectors in online solid-phase extraction.
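One plausible reading of the before/after-column measurement is that R% compares the eluate signal, corrected for the preconcentration factor, with the bypass (no-column) signal. The formula below is an assumption for illustration, not the paper's published expression:

```python
def recovery_percent(bypass_signal, eluate_signal, preconc_factor):
    """Assumed R% relation: eluate signal normalized by the preconcentration
    factor, relative to the bypass signal measured without the SPE column."""
    return 100.0 * eluate_signal / (bypass_signal * preconc_factor)
```

For example, if a tenfold preconcentration yields an eluate signal 9.5 times the bypass signal, this relation would report a 95% column recovery.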

  13. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data, over Trento to create a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and geometric aspects of the VHR spaceborne imagery included in the Trento testfield and its potential for 3D information extraction. The dataset consists of two stereo pairs acquired by WorldView-2 and GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from an airborne LiDAR acquisition is used. The paper gives details on the project, the dataset characteristics and the achieved results.

  14. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for high-level product generation. Various parts of the chain have also been implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
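Super-fine positioning against a reference raster chip is typically done by normalized cross-correlation within a small search window around the approximate GCP location. The sketch below illustrates that general technique under that assumption; the function name, search radius and data layout are hypothetical, not STORM's implementation:

```python
import numpy as np

def refine_gcp(image, chip, approx_rc, search=5):
    """Refine a GCP (row, col) by normalized cross-correlation of a reference
    chip over a +/- search window around the approximate position."""
    ch, cw = chip.shape
    r0, c0 = approx_rc
    cz = (chip - chip.mean()) / chip.std()        # z-scored reference chip
    best, best_rc = -2.0, (r0, c0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            win = image[r:r + ch, c:c + cw]
            if win.shape != chip.shape or win.std() == 0:
                continue                          # window clipped or flat
            wz = (win - win.mean()) / win.std()
            score = (cz * wz).mean()              # NCC in [-1, 1]
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

Sub-pixel accuracy can then be obtained by fitting a paraboloid to the correlation surface around the integer peak, but the integer NCC search above already captures the matching principle.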

  15. 50 CFR 22.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., extracting oil, natural gas and geothermal energy, construction of roads, dams, reservoirs, power plants..., POSSESSION, TRANSPORTATION, SALE, PURCHASE, BARTER, EXPORTATION, AND IMPORTATION OF WILDLIFE AND PLANTS... breeding, feeding, or sheltering behavior. Eagle nest means any readily identifiable structure built...

  16. 50 CFR 22.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., extracting oil, natural gas and geothermal energy, construction of roads, dams, reservoirs, power plants..., POSSESSION, TRANSPORTATION, SALE, PURCHASE, BARTER, EXPORTATION, AND IMPORTATION OF WILDLIFE AND PLANTS... breeding, feeding, or sheltering behavior. Eagle nest means any readily identifiable structure built...

  17. 2. Historic American Buildings Survey, William F. Winter, Jr., Photographer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Historic American Buildings Survey, William F. Winter, Jr., Photographer 1920's, EXTRACTING ROOM, Gift of New York State Department of Education. - Shaker Centre Family Medicine Factory, Shaker Road, New Lebanon, Columbia County, NY

  18. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    PubMed Central

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-01-01

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900

  19. Using Quasi-Horizontal Alignment in the absence of the actual alignment.

    PubMed

    Banihashemi, Mohamadreza

    2016-10-01

Horizontal alignment is a major roadway characteristic used in safety and operational evaluations of many facility types. The Highway Safety Manual (HSM) uses this characteristic in crash prediction models for rural two-lane highways, freeway segments, and freeway ramps/C-D roads. Traffic simulation models use this characteristic in their processes on almost all types of facilities. However, a good portion of roadway databases do not include horizontal alignment data; instead, many contain point coordinate data along the roadways. The SHRP 2 Roadway Information Database (RID) is a good example of this type of data. Only about 5% of this geodatabase contains alignment information; for the rest, point data can easily be produced. Although the point data can be used to extract the actual horizontal alignment, doing so is a cumbersome and costly process, especially for a database covering miles and miles of highways. This research introduces a so-called "Quasi-Horizontal Alignment" that can be produced easily and automatically from point coordinate data and used in the safety and operational evaluations of highways. SHRP 2 RID data for rural two-lane highways in Washington State are used in this study. This paper presents a process through which Quasi-Horizontal Alignments are produced from point coordinates along highways by using spreadsheet software such as MS EXCEL. It is shown that the safety and operational evaluations of the highways with Quasi-Horizontal Alignments are almost identical to those with the actual alignments. In the absence of the actual alignment, the Quasi-Horizontal Alignment can easily be produced from any type of database that contains highway coordinates, such as geodatabases and digital maps. Copyright © 2016 Elsevier Ltd. All rights reserved.
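In the spirit of deriving alignment-like information directly from point coordinates, the sketch below computes chord headings between successive points and labels each intermediate point as tangent or curve from the deflection angle. This illustrates the general idea, not the paper's actual Quasi-Horizontal Alignment procedure; the deflection threshold is an assumed value:

```python
import math

def quasi_alignment(points, curve_thresh_deg=1.0):
    """Label each interior point 'tangent' or 'curve' from the deflection
    angle between successive chords. points: (x, y) in a projected CRS.
    The 1-degree threshold is an illustrative assumption."""
    headings = [math.degrees(math.atan2(y2 - y1, x2 - x1))
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    labels = []
    for h1, h2 in zip(headings, headings[1:]):
        d = (h2 - h1 + 180) % 360 - 180   # signed deflection in (-180, 180]
        labels.append('curve' if abs(d) > curve_thresh_deg else 'tangent')
    return labels
```

Runs of consecutive 'curve' labels approximate curved sections and the cumulative deflection over a run approximates its central angle, which is the kind of alignment surrogate a spreadsheet-based workflow can produce from raw coordinates.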

  20. Analysis of vehicular traffic flow in the major areas of Kuala Lumpur utilizing open-traffic

    NASA Astrophysics Data System (ADS)

    Manogaran, Saargunawathy; Ali, Muhammad; Yusof, Kamaludin Mohamad; Suhaili, Ramdhan

    2017-09-01

Vehicular traffic congestion occurs when a large number of drivers crowd the road and the traffic flow does not run smoothly. Traffic congestion causes chaos on the road and interrupts the daily activities of users. Time spent on the road has many negative effects on productivity, social behavior, the environment and the economy. Congestion worsens and leads to havoc during emergencies such as floods, accidents and road maintenance, when the behavior of traffic flow is unpredictable and uncontrollable. Real-time and historical traffic data are critical inputs for most traffic flow analysis applications. Researchers attempt to predict traffic using simulations, as no exact model of traffic flow exists due to its high complexity. Open Traffic is an open-source platform for traffic data analysis linked to OpenStreetMap (OSM). This research aims to study and understand the Open Traffic platform. The real-time traffic flow pattern in the Kuala Lumpur area was successfully extracted and analyzed using Open Traffic. It was observed that congestion occurs on every major road in Kuala Lumpur, and most of it is due to offices and economic and commercial centers during rush hours. On some roads congestion occurs at night due to tourism activities.

  1. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  2. The NavTrax fleet management system

    NASA Astrophysics Data System (ADS)

    McLellan, James F.; Krakiwsky, Edward J.; Schleppe, John B.; Knapp, Paul L.

    The NavTrax System, a dispatch-type automatic vehicle location and navigation system, is discussed. Attention is given to its positioning, communication, digital mapping, and dispatch center components. The positioning module is a robust GPS (Global Positioning System)-based system integrated with dead reckoning devices by a decentralized-federated filter, making the module fault tolerant. The error behavior and characteristics of GPS, rate gyro, compass, and odometer sensors are discussed. The communications module, as presently configured, utilizes UHF radio technology, and plans are being made to employ a digital cellular telephone system. Polling and automatic smart vehicle reporting are also discussed. The digital mapping component is an intelligent digital single line road network database stored in vector form with full connectivity and address ranges. A limited form of map matching is performed for the purposes of positioning, but its main purpose is to define location once position is determined.

  3. Three-dimensional slum urban reconstruction in Envisat and Google Earth Egypt

    NASA Astrophysics Data System (ADS)

    Marghany, M.; Genderen, J. v.

    2014-02-01

This study investigates the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) slum urban reconstruction in a developing country such as Egypt. The main objective of this work is to utilize a 3-D automatic detection algorithm for urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy and B-spline algorithms was then used to reconstruct the urban slums in 3-D. The results show that urban slums, the road network, and infrastructure are clearly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.

  4. Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging

    NASA Astrophysics Data System (ADS)

    Litkey, P.; Nurminen, K.; Honkavaara, E.

    2013-05-01

The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing their number and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm-damage detection based on airborne imagery collected after a storm. The method requires before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, which is collected in the no-leaf season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open source data sets in the management of natural disasters.
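The core detection step described above, thresholding the difference surface between before-storm and after-storm DSMs, can be sketched in a few lines. The 5 m drop threshold is an assumed illustrative value, not a figure from the project:

```python
import numpy as np

def storm_damage_mask(dsm_before, dsm_after, drop_threshold_m=5.0):
    """Boolean mask of cells where the surface dropped by more than the
    threshold between the two DSMs (e.g. fallen canopy). The threshold
    value is an assumption for illustration."""
    return (dsm_before - dsm_after) > drop_threshold_m
```

In practice the raw mask would be cleaned with morphological filtering and a minimum-area rule before delineating damage patches, but the elevation-drop test is the heart of the change detection.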

  5. Continuous nucleus extraction by optically-induced cell lysis on a batch-type microfluidic platform.

    PubMed

    Huang, Shih-Hsuan; Hung, Lien-Yu; Lee, Gwo-Bin

    2016-04-21

    The extraction of a cell's nucleus is an essential technique required for a number of procedures, such as disease diagnosis, genetic replication, and animal cloning. However, existing nucleus extraction techniques are relatively inefficient and labor-intensive. Therefore, this study presents an innovative, microfluidics-based approach featuring optically-induced cell lysis (OICL) for nucleus extraction and collection in an automatic format. In comparison to previous micro-devices designed for nucleus extraction, the new OICL device designed herein is superior in terms of flexibility, selectivity, and efficiency. To facilitate this OICL module for continuous nucleus extraction, we further integrated an optically-induced dielectrophoresis (ODEP) module with the OICL device within the microfluidic chip. This on-chip integration circumvents the need for highly trained personnel and expensive, cumbersome equipment. Specifically, this microfluidic system automates four steps by 1) automatically focusing and transporting cells, 2) releasing the nuclei on the OICL module, 3) isolating the nuclei on the ODEP module, and 4) collecting the nuclei in the outlet chamber. The efficiency of cell membrane lysis and the ODEP nucleus separation was measured to be 78.04 ± 5.70% and 80.90 ± 5.98%, respectively, leading to an overall nucleus extraction efficiency of 58.21 ± 2.21%. These results demonstrate that this microfluidics-based system can successfully perform nucleus extraction, and the integrated platform is therefore promising in cell fusion technology with the goal of achieving genetic replication, or even animal cloning, in the near future.

  6. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
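A polynomial-phase chirp of the kind proposed above can be written as A·cos(p₀ + p₁t + p₂t² + …). The sketch below evaluates that model form; it is illustrative only, and neither the paper's exact parameterization nor the Particle Swarm Optimization fitting step is reproduced:

```python
import numpy as np

def chirp_model(t, amplitude, phase_coeffs):
    """Polynomial-phase chirp A*cos(p0 + p1*t + p2*t^2 + ...).
    phase_coeffs are ordered lowest degree first. Illustrative model form;
    the paper's SEP parameterization may differ."""
    phase = np.polyval(list(phase_coeffs)[::-1], t)  # polyval wants highest first
    return amplitude * np.cos(phase)
```

Fitting amplitude and phase coefficients to a recorded SEP (e.g. by minimizing squared error with an optimizer such as PSO) then yields the latency- and amplitude-related features the paper derives automatically.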

  7. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    PubMed

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

In recent years, earthquakes have frequently occurred all over the world, causing huge casualties and economic losses. It is necessary and urgent to obtain the seismic intensity map in a timely manner so as to determine the distribution of the disaster and support rapid earthquake relief. Compared with traditional methods of drawing seismic intensity maps, which require extensive field investigation of the earthquake area or depend too heavily on empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information Systems (GIS) provide a fast and economical way to automatically recognize the seismic intensity. With the integrated application of RS and GIS, this paper proposes a RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract information on damage caused by the earthquake, and GIS is applied to manage and display the seismic intensity data. The case study of the Wenchuan Ms8.0 earthquake in China shows that information on seismic intensity can be automatically extracted from remotely sensed images soon after an earthquake occurs, and the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  8. Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach

    NASA Astrophysics Data System (ADS)

    Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele

    2012-09-01

    The increased amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, it also implies a growing workload for the operators at analysis centers; moreover, even operators who go through extensive training in manual oil spill detection can give different, subjective responses. Hence, upgraded and improved algorithms for automatic detection that can help screen the images and prioritize the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) was carried out to study the possibility of using neural network architectures to set up fully automatic processing chains for COSMO-SkyMed imagery, and its results are presented in this paper. The automatic identification of an oil spill is treated as a three-step process based on segmentation, feature extraction, and classification. We observed that a PCNN (Pulse Coupled Neural Network) provided satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task, a Multi-Layer Perceptron (MLP) neural network was employed.
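
    The segmentation and feature-extraction steps can be sketched as follows. This is a toy stand-in (simple thresholding instead of a PCNN, and invented feature names), assuming the SAR backscatter image is expressed in dB relative to the local sea mean:

```python
import numpy as np

def extract_dark_spot_features(sar_db, threshold_db=-3.0):
    """Threshold a SAR backscatter image (dB, relative to the sea mean) to get
    a dark-spot mask, then compute simple geometric/radiometric features that
    a classifier (e.g., an MLP) could use to separate slicks from look-alikes."""
    mask = sar_db < threshold_db                      # dark-spot segmentation
    if not mask.any():
        return None
    area = int(mask.sum())                            # pixel count
    contrast = float(sar_db[mask].mean() - sar_db[~mask].mean())
    ys, xs = np.nonzero(mask)
    # Fraction of the bounding box covered: a crude shape-complexity proxy.
    bbox_fill = area / ((np.ptp(ys) + 1) * (np.ptp(xs) + 1))
    return {"area": area, "contrast": contrast, "bbox_fill": bbox_fill}

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[20:30, 10:40] -= 6.0                              # synthetic slick
features = extract_dark_spot_features(img)
```

    In a real chain, such feature vectors for each candidate dark spot would be fed to the trained classifier to decide "oil spill" versus "look-alike".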

  9. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware applies semantics-preserving transformations to its own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the behavior of metamorphic code by providing the set of program traces that correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be extracted automatically by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  10. Text mining patents for biomedical knowledge.

    PubMed

    Rodriguez-Esteban, Raul; Bundschus, Markus

    2016-06-01

    Biomedical text mining of scientific knowledge bases, such as Medline, has received much attention in recent years. Given that text mining is able to automatically extract biomedical facts that revolve around entities such as genes, proteins, and drugs, from unstructured text sources, it is seen as a major enabler to foster biomedical research and drug discovery. In contrast to the biomedical literature, research into the mining of biomedical patents has not reached the same level of maturity. Here, we review existing work and highlight the associated technical challenges that emerge from automatically extracting facts from patents. We conclude by outlining potential future directions in this domain that could help drive biomedical research and drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Automatic feature design for optical character recognition using an evolutionary search procedure.

    PubMed

    Stentiford, F W

    1985-03-01

    An automatic evolutionary search is applied to the problem of feature extraction in an OCR application. A performance measure based on feature independence is used to generate features which do not appear to suffer from peaking effects [17]. Features are extracted from a training set of 30,600 machine-printed alphanumeric characters (34 classes) derived from British mail. Classification results on the training set and on a test set of 10,200 characters are reported for an increasing number of features. A 1.01 percent forced-decision error rate is obtained on the test data using 316 features. The hardware implementation should be cheap and fast to operate, and the performance compares favorably with current low-cost OCR page readers.
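
    The evolutionary search over feature sets can be sketched as a mutation-based loop that scores candidate subsets by an independence criterion. This is a hypothetical reconstruction (the paper's actual performance measure and search operators are not given here); low pairwise correlation stands in for "feature independence":

```python
import numpy as np

def independence_score(X, subset):
    """Score a subset of feature columns: reward low pairwise correlation."""
    corr = np.corrcoef(X[:, subset], rowvar=False)
    off_diag = corr[~np.eye(len(subset), dtype=bool)]
    return 1.0 - np.abs(off_diag).mean()

def evolve_features(X, n_select, n_iter=200, seed=0):
    """Mutation-based evolutionary search for an approximately independent
    feature subset (a toy stand-in for the paper's search procedure)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    best = rng.choice(n_features, n_select, replace=False)
    best_score = independence_score(X, best)
    for _ in range(n_iter):
        cand = best.copy()
        cand[rng.integers(n_select)] = rng.integers(n_features)  # mutate one gene
        if len(set(cand.tolist())) < n_select:
            continue                        # skip subsets with duplicate features
        score = independence_score(X, cand)
        if score > best_score:              # greedy survival of the fitter subset
            best, best_score = cand, score
    return best, best_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
X[:, 7] = X[:, 0]                           # feature 7 duplicates feature 0
subset, score = evolve_features(X, n_select=3)
```

    The search should learn to avoid selecting both of the duplicated (fully correlated) features, since their presence lowers the independence score.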

  12. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    NASA Astrophysics Data System (ADS)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art gPb-OWT-UCM segmentation algorithm. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines corresponding to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.
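
    The final line-detection step can be illustrated with a minimal Hough transform (a from-scratch sketch, not the paper's implementation): each edge pixel votes for all (rho, theta) line parameterizations passing through it, and peaks in the accumulator correspond to straight edges such as the vocal folds.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Minimal Hough transform: accumulate votes in (rho, theta) space for
    each edge pixel, then return the strongest line x*cos(t) + y*sin(t) = rho."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[rhos + diag, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(accumulator.argmax(), accumulator.shape)
    return rho_idx - diag, thetas[theta_idx]

# A vertical edge at x = 10 should be detected as rho = 10, theta = 0.
mask = np.zeros((40, 40), dtype=bool)
mask[:, 10] = True
rho, theta = hough_lines(mask)
```

    In practice the edge mask would come from the contour map, and several accumulator peaks would be kept, one per vocal-fold edge.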

  13. Semi-automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine, a large part of the online lecture material consists of PostScript files. As the collection grows, it becomes essential to create a full-text-indexed digital library offering easy access to the relevant sections of the lecture material; building this index requires extracting all of the text from the document files that constitute the originals of the lectures. In this study we present a semi-automatic indexing method that uses a robust technique for extracting text from PostScript files and the National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied at other medical schools for indexing purposes.

  14. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (decorrelation), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, which enforces the much stricter criterion of higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills, and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be a highly effective means of automatically extracting interesting musical information from a sea of redundant data.
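
    The separation idea can be sketched with a minimal symmetric FastICA (an illustrative stand-in; the abstract does not specify the authors' exact ICA algorithm). Two synthetic sources, standing in for the two notes of a trill, are mixed and then unmixed:

```python
import numpy as np

def fastica_2d(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a 2-source mixture.
    X has shape (2, n_samples); returns the estimated unmixed sources."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate and scale to unit variance (the PCA step).
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(2, 2))
    for _ in range(n_iter):
        WX = W @ Xw
        g, g_prime = np.tanh(WX), 1.0 - np.tanh(WX) ** 2
        # Fixed-point update: W <- E[g(WX) x^T] - diag(E[g'(WX)]) W
        W = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                          # symmetric re-orthogonalization
    return W @ Xw

t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))     # square wave
s2 = np.sin(2 * np.pi * 13 * t)             # sine partner
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # mixing matrix
recovered = fastica_2d(A @ np.vstack([s1, s2]))
```

    Each recovered row should match one original source up to sign and permutation, which is exactly the ambiguity ICA leaves unresolved.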

  15. Automatic quantification of morphological features for hepatic trabeculae analysis in stained liver specimens

    PubMed Central

    Ishikawa, Masahiro; Murakami, Yuri; Ahi, Sercan Taha; Yamaguchi, Masahiro; Kobayashi, Naoki; Kiyuna, Tomoharu; Yamashita, Yoshiko; Saito, Akira; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2016-01-01

    This paper proposes a digital image analysis method to support quantitative pathology by automatically segmenting the hepatocyte structure and quantifying its morphological features. To structurally analyze histopathological hepatic images, we isolate the trabeculae by extracting the sinusoids, fat droplets, and stromata. We then measure the morphological features of the extracted trabeculae, divide the image into cords, and calculate the feature values of the local cords. We propose a method of calculating the nuclear-cytoplasmic ratio, nuclear density, and number of layers using the local cords. Furthermore, we evaluate the effectiveness of the proposed method using surgical specimens. The proposed method was found to be effective for quantification of the Edmondson grade. PMID:27335894
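
    Given segmentation masks, the nuclear-cytoplasmic ratio reduces to an area ratio. A minimal sketch (nuclear area over non-nuclear cell area is one common definition; the paper may normalize differently):

```python
import numpy as np

def nc_ratio(nucleus_mask, cell_mask):
    """Nuclear-cytoplasmic ratio from binary masks: nuclear area divided by
    the cytoplasmic (cell minus nucleus) area."""
    nuclear_area = int(np.count_nonzero(nucleus_mask))
    cyto_area = int(np.count_nonzero(cell_mask & ~nucleus_mask))
    if cyto_area == 0:
        raise ValueError("empty cytoplasm mask")
    return nuclear_area / cyto_area

cell = np.zeros((20, 20), dtype=bool); cell[2:18, 2:18] = True   # 256 px cell
nucleus = np.zeros_like(cell); nucleus[7:13, 7:13] = True        # 36 px nucleus
ratio = nc_ratio(nucleus, cell)
```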

  16. Detection of the local sliding in the tyre-road contact by measuring vibrations on the inner liner of the tyre

    NASA Astrophysics Data System (ADS)

    Niskanen, Arto J.; Tuononen, Ari J.

    2017-04-01

    Intelligent tyres can provide vital information from the tyre-road contact, especially for autonomous cars and intelligent infrastructure. In this paper, the acceleration measured on the inner liner of a tyre is used to detect local sliding in the tyre-road contact. The Hilbert-Huang transform is utilized to extract the relevant vibration components and localize them in the wheel rotation angle domain. The energy of the vibration in the trailing part of the contact is shown to increase in low-friction conditions, which can be related to sliding of the tread as the shear stresses exceed the local friction limit. To separate the effects of surface roughness and friction, different road surfaces were used in the measurements. In addition, the effects of different driving manoeuvres on the measured accelerations, and the propagation of the sliding zone in the contact patch during braking, are illustrated.
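
    One building block of such vibration-energy analysis is the analytic signal, whose magnitude gives the instantaneous envelope. A minimal FFT-based sketch (equivalent in spirit to `scipy.signal.hilbert`; the full Hilbert-Huang transform additionally requires empirical mode decomposition, omitted here):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies and double the
    positive ones, so that abs() of the result is the signal envelope."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Envelope of an amplitude-modulated vibration burst, like a sliding event
# localized in the rotation angle domain.
t = np.linspace(0, 1, 4000, endpoint=False)
burst = np.exp(-((t - 0.5) ** 2) / 0.005) * np.sin(2 * np.pi * 400 * t)
envelope = np.abs(analytic_signal(burst))
```

    Integrating `envelope**2` over the trailing part of the contact window would give the kind of localized vibration energy the paper associates with tread sliding.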

  17. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    PubMed

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analyzing this signal presents a number of technical challenges, especially at home. Electrocardiograms (ECGs), by contrast, are much easier to record and may offer an attractive alternative for home sleep monitoring; the heart rate variability (HRV) signal derived from them proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features, and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods, with which normalized energies in important frequency bands of the HRV signal were computed. ANOVA and t-tests were used for statistical evaluation. Automatic sleep staging was based on HRV signal features. ANOVA followed by a post hoc Bonferroni test was used for individual feature assessment; most features proved beneficial for sleep staging. A t-test comparing the means of the features extracted from 5- and 0.5-min HRV segments showed that the means were statistically similar for only a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than the others. There was no sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in the 0.5-min segments. The HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those from 0.5-min segments, and the best result was obtained with 5-min HRV features and the LD classifier. A combination of linear and nonlinear HRV features is effective in automatic sleep staging; time-frequency features are more informative than the others; and features extracted from 5-min segments, especially nonlinear ones, are more discriminative than those from 0.5-min segments. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
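
    The time-domain HRV features mentioned above typically include statistics such as SDNN and RMSSD. A minimal sketch using the standard definitions (not necessarily the paper's exact feature set):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV features from an RR-interval series (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                         # sample std of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))    # RMS of successive differences
    return {"sdnn": sdnn, "rmssd": rmssd}

rr = [812, 790, 830, 805, 820, 795, 815]          # synthetic RR intervals in ms
features = hrv_time_domain(rr)
```

    In a staging pipeline, such features computed per 5- or 0.5-min segment would be concatenated with the nonlinear and time-frequency features before classification.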

  18. Simultaneous extraction of centerlines, stenosis, and thrombus detection in renal CT angiography

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Durgan, Jacob; Hodgkiss, Thomas D.; Chandra, Shalabh

    2004-05-01

    Renal artery stenosis (RAS) is the major cause of renovascular hypertension, and CT angiography has shown tremendous promise as a noninvasive method for reliably detecting it. The purpose of this study was to validate semi-automated methods that assist in extracting the renal branches and characterizing the associated stenosis. Automatically computed diagnostic images from multi-slice CT, such as straight MIPs, curved MPRs, cross-sections, and diameters, are presented and evaluated for acceptance. We used vessel-tracking image processing methods to extract the aortic-renal vessel tree from axial CT slice images. Next, from the topology and anatomy of the aortic vessel tree, the stenosis, thrombus section, and branching of the renal arteries were extracted. The results are presented as curved MPR and continuously variable MIP images. In this study, 15 patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) at 1.0 mm slice thickness, 0.5 mm slice spacing, and 120 kVp, and 512x512x150 volume sets were reconstructed. The automated image processing took less than 50 seconds to compute the centerline and borders of the aortic/renal vessel tree. The overall assessment of manually and automatically generated stenosis measurements yielded a weighted kappa statistic of 0.97 for the right renal arteries and 0.94 for the left renal branches; agreement between manually and semi-automatically contoured thrombus regions was 0.93. The manual time to process each case is approximately 25 to 30 minutes.
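
    The weighted kappa agreement statistic used in the evaluation can be computed as follows. This sketch assumes quadratic weights and hypothetical ordinal stenosis grades; the paper's actual weighting scheme and grading scale are not specified here:

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_categories):
    """Quadratic-weighted Cohen's kappa between two graded ratings
    (e.g., manual vs. automatic stenosis grades on an ordinal scale)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        observed[i, j] += 1                 # joint distribution of the two raters
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    idx = np.arange(n_categories)
    # Quadratic penalty: disagreements far apart on the scale cost more.
    weights = ((idx[:, None] - idx[None, :]) ** 2) / (n_categories - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

manual    = [0, 1, 2, 2, 3, 1, 0, 2]        # hypothetical stenosis grades
automatic = [0, 1, 2, 2, 3, 1, 1, 2]
kappa = weighted_kappa(manual, automatic, n_categories=4)
```

    Perfect agreement yields kappa = 1, chance-level agreement yields 0, so the reported 0.93-0.97 values indicate near-perfect manual/automatic concordance.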

  19. Development of portable health monitoring system for automatic self-blood glucose measurement

    NASA Astrophysics Data System (ADS)

    Kim, Huijun; Mizuno, Yoshihumi; Nakamachi, Eiji; Morita, Yusuke

    2010-02-01

    In this study, a new HMS (Health Monitoring System) device is developed for diabetic patients. The device mainly consists of I) a 3D blood vessel searching unit and II) an automatic blood glucose measurement (ABGM) unit. Its features include 1) 3D blood vessel location search, 2) a laptop-sized form factor, 3) puncture of a blood vessel using a minimally invasive micro-needle, 4) very small blood samples (10 μl), and 5) automatic blood extraction and blood glucose measurement. The ABGM unit, described here in detail, employs a syringe-type blood extraction mechanism because of its high accuracy. It consists of a syringe component and a driving component. The syringe component, which is disposable, comprises the syringe itself, a piston, a magnet, a ratchet, and a micro-needle with an inner diameter of about 80 μm. The driving component comprises the body parts, a linear stepping motor, a glucose enzyme sensor, and a slider for accurate positioning control; it integrates the glucose enzyme sensor in an all-in-one mechanism for compact size and stable blood transfer. By design, the thrust force required to drive the slider is greater than the blood extraction force, and a single linear stepping motor handles both the blood extraction and transportation processes. Experiments showed a volume ratio of more than 80% at a piston speed of 2.4 mm/s, and blood glucose was measured successfully using the prototype unit, confirming the viability of the ABGM unit.

  20. Vision, Training Hours, and Road Testing Results in Bioptic Drivers

    PubMed Central

    Dougherty, Bradley E.; Flom, Roanne E.; Bullimore, Mark A.; Raasch, Thomas W.

    2015-01-01

    Purpose Bioptic telescopic spectacles (BTS) can be used by people whose central visual acuity does not meet state standards to obtain an unrestricted driver's license. The purpose of this study was to examine the relationships among visual and demographic factors, training hours, and the results of road testing for bioptic drivers. Methods A retrospective study was conducted of patients who received an initial daylight bioptic examination at the Ohio State University and subsequently received a bioptic license. Data were collected on vision, including visual acuity, contrast sensitivity, and visual field. Hours of driver training and results of Highway Patrol road testing were extracted from records. Relationships among vision, training hours, and road testing were analyzed. Results Ninety-seven patients who completed a vision examination between 2004 and 2008 and received daylight licensure with BTS were included. Results of the first Highway Patrol road test were available for 74 patients. The median (IQR) number of training hours prior to road testing was 21 (17) hours (range, 9 to 75 hours). Candidates without previous licensure were younger (p < 0.001) and had more documented training (p < 0.001). Lack of previous licensure and more training were significantly associated with failing a portion of the Highway Patrol test and with points deducted on the road test. Conclusions New bioptic drivers without previous non-bioptic driving experience required more training and performed more poorly on road testing for licensure than those with previous non-bioptic licensure. No visual factor was predictive of road testing results after adjustment for previous experience. The hours of training received remained predictive of road testing outcome even after adjustment for previous experience. These results suggest that previous experience and trainer assessments should be investigated as potential predictors of road safety in bioptic drivers in future studies. PMID:25946098
