Sample records for automatic cloud detection

  1. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with only basic functions, which limits their use at large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening that uses medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous, anywhere-access nature of the cloud platform makes glaucoma screening more efficient and cost-effective, allowing the disease to be detected earlier and enabling earlier intervention and more efficient disease management.

  2. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar

    2004-01-01

    Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products based on data acquired on board earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and astrophysics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically learned decision trees for detecting clouds in images acquired with the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8 km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System (CERES) S'COOL project as the gold standard. For the sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
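A cloud-mask decision tree of the kind described above reduces to a short sequence of per-pixel threshold tests. A minimal sketch in Python; the channel choices and every threshold value here are purely illustrative assumptions, not taken from the AVHRR product:

```python
def cloud_mask_pixel(refl_vis, bt_11um, bt_diff_11_12):
    """Toy expert-style cloud test with sequential thresholds.

    refl_vis      -- visible-channel reflectance (0..1)
    bt_11um       -- 11-micron brightness temperature (K)
    bt_diff_11_12 -- 11 minus 12 micron brightness temperature (K)
    All thresholds are hypothetical, for illustration only.
    """
    if refl_vis > 0.30:        # bright pixel: likely cloud over a dark surface
        return True
    if bt_11um < 265.0:        # cold pixel: likely high cloud
        return True
    if bt_diff_11_12 > 2.5:    # split-window test: possible thin cirrus
        return True
    return False
```

A learned tree has the same shape; a learner simply chooses the tests and thresholds from labelled pixels instead of an expert.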

  3. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules, components that can be represented simply by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by the Hough transform and further described as bounded patches by their minimum bounding rectangles. Finally, cuboids are detected from the detected patches using pair-wise geometric relations. After the successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and on point clouds recovered by image-based reconstruction. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust to noise and to variations in point distribution density.
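Plane primitives like the panel patches above are usually described by a centroid and a normal. A small sketch of one standard building block (not the paper's Hough-transform detector): least-squares plane fitting with an SVD, where the normal is the direction of least variance of the centred points:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an N x 3 point array.

    Returns (centroid, unit normal). The normal is the right singular
    vector associated with the smallest singular value of the centred
    point matrix, i.e. the direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```

For points scattered on the plane z = 0, the recovered normal is (0, 0, ±1).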

  5. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a viable technology for infrastructure inspection. It permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system comprising laser scanners and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be extracted automatically, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synchronized with the point cloud. Image analysis then allows the traffic sign semantics to be recognized using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall for traffic sign detection is close to 98%, and the existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
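Reprojecting a detected 3D sign position onto a synchronized 2D image, as described above, amounts to a pinhole camera projection. A minimal sketch; the calibration inputs R, t and K are assumed to be available from the mapping system:

```python
import numpy as np

def project_to_image(point_world, R, t, K):
    """Project one 3D world point into pixel coordinates (pinhole model).

    R (3x3) and t (3,) map world coordinates into the camera frame;
    K is the 3x3 intrinsic matrix. Returns (u, v) pixel coordinates.
    """
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With the camera at the world origin, a point on the optical axis lands at the principal point, which gives a quick sanity check of the calibration convention.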

  6. Evaluation of Decision Trees for Cloud Detection from AVHRR Data

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Nemani, Ramakrishna

    2005-01-01

    Automated cloud detection and tracking is an important step in assessing changes in radiation budgets associated with global climate change via remote sensing. Data products based on satellite imagery are available to the scientific community for studying trends in the Earth's atmosphere. These data products include pixel-based cloud masks that assign cloud-cover classifications to pixels. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In a previous study we compared automatically learned decision trees to the cloud masks included in Advanced Very High Resolution Radiometer (AVHRR) data products from the year 2000. In this paper we report the replication of that study for five years of data, and for a gold standard based on surface observations performed by scientists at weather stations in the British Isles. For our sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks (p < 0.001).

  7. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify pixels as cloud or non-cloud, but these methods become inaccurate when clouds are thin and small. In this paper, a linear combination model of cloud images is proposed; with this model, the underlying surface information of a remote sensing image can be removed, making the cloud detection result more accurate. Firstly, the automatic cloud detection program uses the linear combination model to separate the cloud information from the surface information in semi-transparent cloud images, then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a cloud classifier; AdaBoost can select the most effective features from many candidate features, so the calculation time is largely reduced. Finally, we compared the proposed method against a tree-structured cloud detection method and a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
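One simple reading of the linear combination model above is per-pixel alpha blending, obs = alpha*cloud + (1-alpha)*surface, which can be inverted to strip the surface term. This form and the function below are an illustrative assumption, not necessarily the authors' exact formulation:

```python
import numpy as np

def remove_surface(obs, surface, alpha):
    """Invert a linear mixing model  obs = alpha*cloud + (1-alpha)*surface.

    obs, surface -- scalar or array radiances/reflectances
    alpha        -- cloud opacity in (0, 1]
    Illustrative sketch; the surface estimate and alpha must come from
    elsewhere in the pipeline.
    """
    alpha = np.clip(alpha, 1e-6, 1.0)
    return (obs - (1.0 - alpha) * surface) / alpha
```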

  8. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and for quick and real-time change detection. However, further insight should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
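The mean point-to-point distance reported above can be computed as the average nearest-neighbour distance from one cloud to the other. A brute-force numpy sketch, fine for small clouds (a real pipeline would use a k-d tree):

```python
import numpy as np

def mean_cloud_to_cloud_distance(src, ref):
    """Mean distance from each src point to its nearest neighbour in ref.

    src, ref -- N x 3 and M x 3 point arrays. Brute-force O(N*M) pairwise
    distances; suitable only for small clouds.
    """
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1).mean()
```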

  9. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, where the 3D geometry is complex. Contemporary geo-databases include 3D street-level objects, which demand frequent updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs can face problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers; mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images taken at a later epoch are registered to the point cloud, and the point clouds are projected on each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation-based graph-cut optimization is carried out, taking into account the color, depth and class information to compute the changed areas in image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. It can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.

  10. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    NASA Astrophysics Data System (ADS)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high-resolution remote sensing for earth observation, satellite imagery is widely used in resource investigation, environmental protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, clouds cause two main problems for automatic image mosaicking: 1) image blurring may be introduced during the dodging process, and 2) automatically generated seamlines may pass through cloudy areas. To address these problems, an automatic mosaicking method for cloudy satellite imagery is proposed in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes, so that the mosaic is composed of clear-sky areas wherever possible and those areas remain sharp and distortion-free. Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the rationality of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer, an efficiency that meets general production requirements for massive satellite imagery.
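Otsu thresholding, the starting point of the cloud extraction above, picks the grey-level threshold that maximizes between-class variance. A plain (unmodified) numpy version; the paper's modified variant and the morphological post-processing are not reproduced here:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Return the threshold maximising between-class variance (plain Otsu).

    gray -- array of intensities in [0, 1].
    """
    hist, edges = np.histogram(gray, bins=nbins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 weight at each candidate threshold
    mu = np.cumsum(p * centers)  # cumulative mean
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    var_between = np.nan_to_num(var_between)  # empty classes score zero
    return centers[np.argmax(var_between)]
```

The cloud-cover percentage then follows directly as `(gray > thr).mean()` over the scene.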

  11. Results from Automated Cloud and Dust Devil Detection Onboard the MER

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Castano, Rebecca; Bornstein, Benjamin; Fukunaga, Alex; Castano, Andres; Biesiadecki, Jeffrey; Greeley, Ron; Whelley, Patrick; Lemmon, Mark

    2008-01-01

    We describe a new capability to automatically detect dust devils and clouds in imagery onboard rovers, enabling downlink of just the images with the targets or only portions of the images containing the targets. Previously, the MER rovers conducted campaigns to image dust devils and clouds by commanding a set of images be collected at fixed times and downloading the entire image set. By increasing the efficiency of the campaigns, more campaigns can be executed. Software for these new capabilities was developed, tested, integrated, uploaded, and operationally checked out on both rovers as part of the R9.2 software upgrade. In April 2007 on Sol 1147 a dust devil was automatically detected onboard the Spirit rover for the first time. We discuss the operational usage of the capability and present initial dust devil results showing how this preliminary application has demonstrated the feasibility and potential benefits of the approach.

  12. Cloud-Free Satellite Image Mosaics with Regression Trees and Histogram Matching.

    Treesearch

    E.H. Helmer; B. Ruefenacht

    2005-01-01

    Cloud-free optical satellite imagery simplifies remote sensing, but land-cover phenology limits existing solutions to persistent cloudiness to compositing temporally resolute, spatially coarser imagery. Here, a new strategy for developing cloud-free imagery at finer resolution permits simple automatic change detection. The strategy uses regression trees to predict...

  13. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time-consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments automatically, which poses challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.

  14. [Study of automatic marine oil spills detection using imaging spectroscopy].

    PubMed

    Liu, De-Lian; Han, Liang; Zhang, Jian-Qi

    2013-11-01

    To reduce manual auxiliary work in the oil spill detection process, an automatic oil spill detection method based on an adaptive matched filter is presented. Firstly, the characteristics of the reflectance spectral signature of the C-H bond in oil spills are analyzed, and an oil spill spectral signature extraction model is designed using this spectral feature; it provides the reference spectral signature for the subsequent detection step. Secondly, the reflectance spectral signatures of sea water, clouds, and oil spills are compared, and the bands in which they differ most are selected. Using these bands, the sea-water pixels are segmented and the background parameters are then calculated. Finally, the classical adaptive matched filter from target detection is improved and applied to oil spill detection. The proposed method is applied to real Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imagery captured during the Deepwater Horizon oil spill in the Gulf of Mexico. The results show that the proposed method has high efficiency, needs no manual auxiliary work, and can be used for automatic detection of marine oil spills.
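The classical adaptive matched filter underlying the method above scores each pixel spectrum against a reference signature, whitened by the background covariance. A standard-form numpy sketch (the paper's improvements are not reproduced):

```python
import numpy as np

def amf_scores(pixels, background, target):
    """Adaptive matched filter score for each pixel spectrum.

    pixels     -- M x B array of spectra to score
    background -- N x B array of background (e.g. sea-water) spectra
    target     -- length-B reference (e.g. oil) signature
    Scores are ~1 for pixels matching the target and ~0 for background.
    """
    mu = background.mean(axis=0)
    # regularise the covariance so it is invertible for small samples
    cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(background.shape[1])
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    x = pixels - mu
    return (x @ cov_inv @ s) / (s @ cov_inv @ s)
```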

  15. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  16. Automated Detection of Clouds in Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary

    2010-01-01

    Many different approaches have been used to automatically detect clouds in satellite imagery. Most approaches are deterministic and provide a binary cloud/no-cloud product used in a variety of applications. Some of these applications require the identification of cloudy pixels for cloud parameter retrieval, while others require only the ability to mask out clouds for the retrieval of surface or atmospheric parameters in the absence of clouds. A few approaches estimate a probability of the presence of a cloud at each point in an image; these probabilities allow a user to select cloud information based on the tolerance of the application to uncertainty in the estimate. Many automated cloud detection techniques develop sophisticated tests using a combination of visible and infrared channels to determine the presence of clouds in both day and night imagery. Visible channels are quite effective in detecting clouds during the day, as long as test thresholds properly account for variations in surface features and atmospheric scattering. Cloud detection at night is more challenging, since only coarser-resolution infrared measurements are available; a few schemes use just two infrared channels for day and night cloud detection. The most influential factor in the success of a particular technique is the determination of the thresholds for each cloud test. The techniques that perform best usually vary their thresholds with geographic region, time of year, time of day and solar angle.

  17. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    NASA Astrophysics Data System (ADS)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been used successfully in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by an MLS mounted on a bogie, and the rail position is determined by matching the shape of an ideal rail head to the point cloud with the ICP algorithm. A clearance check is then executed automatically with a virtual clearance model laid along the extracted rail. In our evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and objects related to the contact line are successfully detected, as verified by visual confirmation.
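The alignment step inside each ICP iteration, as used above to match the ideal rail-head shape to the point cloud, has a closed-form SVD solution (Kabsch). A sketch of that single step; the correspondence search of full ICP is omitted:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst -- N x 3 arrays of corresponding points. This is the
    closed-form Kabsch/SVD alignment used inside each ICP iteration;
    full ICP alternates it with nearest-neighbour correspondence search.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_dst - r @ c_src
    return r, t
```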

  18. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because clouds and shadows may alter the quality of products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images is bigger than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. The shadow detection is based on the idea that a shadow pixel is darker than in the other images of the time series, and is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if its difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled as cloud during the cloud detection are not used when computing the median value in the first step; additionally, the NIR channel is used for the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
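The shadow test described above compares each image against a per-pixel median composite of the series. A minimal numpy sketch of that comparison (an illustrative simplification that ignores the cloud masking and the optional region growing):

```python
import numpy as np

def shadow_mask(stack, index, threshold):
    """Label shadow pixels in one NIR image of a co-registered time series.

    stack     -- T x H x W array of NIR reflectance ortho-images
    index     -- which image of the series to analyse
    threshold -- negative reflectance difference marking "darker than usual"
    A pixel is shadow when it is darker than the per-pixel median of the
    series by more than |threshold|.
    """
    synthetic = np.median(stack, axis=0)   # per-pixel median composite
    diff = stack[index] - synthetic
    return diff < threshold                # threshold is negative
```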

  19. Automatic detection of zebra crossings from mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for road management purposes. The algorithm consists of several successive processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the standard Hough transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images: binarization to separate the painted area from the rest of the pavement, median filtering to remove noisy points, and mathematical morphology to fill the gaps between pixels at the border of the white marks. Once a road marking is detected, its position is calculated. This information is valuable for the inventory purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated on several mobile LiDAR strips containing a total of 30 zebra crossings, showing a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.

  20. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition, highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in such maps. A technique is therefore needed that can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to efficiently acquire large numbers of images and 3D point clouds with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using template matching after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  21. Automatic Temporal Tracking of Supra-Glacial Lakes

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Lv, Q.; Gallaher, D. W.; Fanning, D.

    2010-12-01

    In recent years, supra-glacial lakes in Greenland have attracted extensive global attention, as they potentially play an important role in glacier movement, sea level rise, and climate change. Previous work focused on classification methods and individual cloud-free satellite images, which have limited capability for tracking changes of lakes over time. The challenges of tracking supra-glacial lakes automatically include (1) the massive amount of satellite images with diverse quality and frequent cloud coverage, and (2) the diversity and dynamics of the large number of supra-glacial lakes on the Greenland ice sheet. In this study, we develop an innovative method to automatically track supra-glacial lakes over time using Moderate Resolution Imaging Spectroradiometer (MODIS) time-series data. The method works for both cloudy and cloud-free data and is unsupervised, i.e., no manual identification is required. After selecting the highest-quality image within each time interval, our method automatically detects supra-glacial lakes in individual images, using adaptive thresholding to handle diverse image qualities. We then track lakes across the time series as they appear, change in size, and disappear. Using multi-year MODIS data from the melting season, we demonstrate that this method can detect and track supra-glacial lakes in both space and time with 95% accuracy. The attached figure shows an example of the current results: (a) one of our experimental datasets, a region centred on Jakobshavn Isbrae glacier in west Greenland; (b) an enlarged view of part of the ice sheet, partially cloudy, with supra-glacial lakes visible as dark spots; (c) the current result, with detected lakes marked as red spots. A detailed analysis of the temporal variation of the detected lakes will be presented.

  2. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easy to generate dense point clouds of the sensor environment using the 360° LiDAR (Light Detection and Ranging) sensors that have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. In urban scenarios especially, moving objects such as persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms, or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications object detection and classification in real time is needed. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.

  3. A Method for the Automatic Detection of Insect Clutter in Doppler-Radar Returns.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luke, E.; Kollias, P.; Johnson, K.

    2006-06-12

    The accurate detection and removal of insect clutter from millimeter wavelength cloud radar (MMCR) returns is of high importance to boundary layer cloud research (e.g., Geerts et al., 2005). When only radar Doppler moments are available, it is difficult to produce a reliable screening of insect clutter from cloud returns because their distributions overlap. Hence, screening of MMCR insect clutter has historically involved a laborious manual process of cross-referencing radar moments against measurements from other collocated instruments, such as lidar. Our study looks beyond traditional radar moments to ask whether analysis of recorded Doppler spectra can serve as the basis for reliable, automatic insect clutter screening. We focus on the MMCR operated by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program at its Southern Great Plains (SGP) facility in Oklahoma. Here, archiving of full Doppler spectra began in September 2003, and during the warmer months, a pronounced insect presence regularly introduces clutter into boundary layer returns.

  4. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm based on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data, due to its independence of the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn; it comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
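    The rasterization and thresholding/morphology steps can be sketched as follows. This is a simplified stand-in for the paper's pipeline, under assumed parameters: the cell size, curb-height jump, and synthetic street geometry are all illustrative.

```python
import numpy as np
from scipy import ndimage

def rasterize_max_z(points, cell=0.5):
    """Rasterize an N x 3 point cloud onto the XY plane, keeping the
    maximum Z per cell (the 3D-to-2D simplification step)."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)                       # shift indices to start at 0
    grid = np.full(tuple(xy.max(axis=0) + 1), -np.inf)
    np.maximum.at(grid, (xy[:, 0], xy[:, 1]), points[:, 2])
    return grid

def curb_candidates(grid, jump=0.10):
    """Mark cells whose local elevation range exceeds a curb-like jump,
    then clean the mask with a morphological opening along the curb."""
    span = (ndimage.maximum_filter(grid, size=5)
            - ndimage.minimum_filter(grid, size=5))
    mask = span > jump
    return ndimage.binary_opening(mask, structure=np.ones((1, 3)))

# Synthetic street: road at z = 0 for x < 5 m, sidewalk at z = 0.15 m beyond.
xs, ys = np.meshgrid(np.arange(0, 10, 0.25), np.arange(0, 10, 0.25))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       np.where(xs.ravel() >= 5, 0.15, 0.0)])
curbs = curb_candidates(rasterize_max_z(pts))
```

    In the paper the resulting candidate cells would then be passed to the unsupervised edge classification; here they simply mark the height discontinuity between road and sidewalk.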

  5. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    The development of laser scanning technologies has taken tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm runs very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. The most significant weakness is that false detections of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While laser scanning point clouds make it possible to classify even very small trees, the accuracy of the results is reduced in low point density areas farther from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
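    The probability-matrix idea can be sketched in a few lines: count points above each ground-level cell, normalise, and take thresholded local maxima as trunk candidates. The cell size, threshold, and synthetic cloud below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def trunk_probability_grid(points, cell=1.0):
    """Fill a ground-level grid with normalised point counts above each
    cell; dense vertical structures such as trunks score highest."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)
    counts = np.zeros(tuple(xy.max(axis=0) + 1))
    np.add.at(counts, (xy[:, 0], xy[:, 1]), 1)
    return counts / counts.max()

def trunk_cells(prob, threshold=0.5):
    """Candidate trunk cells: thresholded local maxima of the grid."""
    peaks = (prob == maximum_filter(prob, size=3)) & (prob > threshold)
    return np.argwhere(peaks)

# One dense vertical trunk at (5.5, 5.5) plus a few scattered ground hits.
column = np.column_stack([np.full(50, 5.5), np.full(50, 5.5),
                          np.linspace(0.0, 5.0, 50)])
scatter = np.array([[0.5, 0.5, 0.0], [9.5, 9.5, 0.0], [2.5, 7.5, 0.0]])
trunks = trunk_cells(trunk_probability_grid(np.vstack([column, scatter])))
```

    Points near the detected trunk cells would then be labelled 'tree', which also shows why dense vertical objects like light poles produce false positives.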

  6. Standoff detection of bioaerosols over wide area using a newly developed sensor combining a cloud mapper and a spectrometric LIF lidar

    NASA Astrophysics Data System (ADS)

    Buteau, Sylvie; Simard, Jean-Robert; Roy, Gilles; Lahaie, Pierre; Nadeau, Denis; Mathieu, Pierre

    2013-10-01

    A standoff sensor called BioSense was developed to demonstrate the capacity to map, track and classify bioaerosol clouds from a distant range and over a wide area. The concept of the system is based on a two-step dynamic surveillance: 1) cloud detection using an infrared (IR) scanning cloud mapper and 2) cloud classification based on a staring ultraviolet (UV) Laser Induced Fluorescence (LIF) interrogation. The system can be operated either in an automatic surveillance mode or with manual intervention. The automatic surveillance operation includes several steps: mission planning, sensor deployment, background monitoring, surveillance, cloud detection, classification and finally alarm generation based on the classification result. One of the main challenges is the classification step, which relies on a spectrally resolved UV LIF signature library. The construction of this library currently relies on in-chamber releases of various materials that are simultaneously characterized with the standoff sensor and referenced with point sensors such as an Aerodynamic Particle Sizer® (APS). The system was tested at three different locations in order to evaluate its capacity to operate in diverse types of surroundings and various environmental conditions. The system showed generally good performance even though troubleshooting of the system was not completed before initiating the Test and Evaluation (T&E) process. The standoff system performance appeared to be highly dependent on the type of challenge, the climatic conditions and the period of day. The real-time results, combined with the experience acquired during the 2012 T&E, allowed future improvements and avenues of investigation to be identified.

  7. Cloud detection method for Chinese moderate high resolution satellite imagery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo

    2016-10-01

    Cloud detection in satellite imagery is very important for quantitative remote sensing research and remote sensing applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the recently launched moderate-to-high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), only have four available bands (blue, green, red, and near-infrared), which is far from the requirements of most cloud detection methods. In order to solve this problem, an improved and automated cloud detection method for Chinese satellite sensors called OCM (Object-oriented Cloud and cloud-shadow Matching method) is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to obtain an initial cloud map. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a high-confidence cloud map with a large amount of cloud omission, while a loose threshold produces a low-confidence cloud map with a large amount of commission. Secondly, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Thirdly, the cloud maps and cloud-shadow map are converted to cloud objects and cloud-shadow objects. Cloud and cloud-shadow usually occur in pairs; consequently, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.
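    The strict/loose two-map combination can be sketched as a hysteresis-style rule: keep a loose-mask object only if it contains at least one strict-mask pixel. This is a simplified sketch of the two-threshold idea only (the shadow matching is omitted), and the band values and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def dual_threshold_cloud_mask(band, strict=0.6, loose=0.35):
    """Combine a strict (high-confidence, high-omission) mask with a loose
    (low-confidence, high-commission) mask: a loose-mask object is kept
    only if it contains at least one strict-mask pixel."""
    labels, n = ndimage.label(band > loose)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[band > strict])] = True
    keep[0] = False                    # label 0 is the background
    return keep[labels]

# Synthetic reflectance: a bright-cored cloud and a dim cloud-free blob.
scene = np.zeros((20, 20))
scene[2:8, 2:8] = 0.4                  # cloud, dim edges...
scene[4:6, 4:6] = 0.8                  # ...with a bright core
scene[12:18, 12:18] = 0.4              # bright surface, no strict pixel
mask = dual_threshold_cloud_mask(scene)
```

    The object labelling is also the natural place to attach the cloud-to-shadow pairing that OCM uses to reject false positives.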

  8. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

    Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds of the emerged part of the structure that can be modelled to make them more useful and easier to handle. This work introduces a methodology for the automatic modelling of breakwaters with cube-shaped armour units. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected, relative to the total number of physical cubes, is around 56 % for two of the point clouds and 32 % for the third. Accuracy is assessed by comparison with manually drawn cubes, calculating the differences between the vertices; it ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s, and increases with the number of cubes and the requirements of collision detection.

  9. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, and dangerous, and the quality and quantity of the data are sometimes unable to meet the requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection using low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done on accounting for volumetric changes. In this study, a methodology has been developed for automatically deriving change displacement rates in the horizontal direction, based on comparisons between landslide scarps extracted at multiple time periods. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  10. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, and dangerous, and the quality and quantity of the data are sometimes unable to meet the requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection using low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done on accounting for volumetric changes. In this study, a methodology has been developed for automatically deriving change displacement rates in the horizontal direction, based on comparisons between landslide scarps extracted at multiple time periods. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  11. Automatic identification of watercourses in flat and engineered landscapes by computing the skeleton of a LiDAR point cloud

    NASA Astrophysics Data System (ADS)

    Broersen, Tom; Peters, Ravi; Ledoux, Hugo

    2017-09-01

    Drainage networks play a crucial role in protecting land against floods. It is therefore important to have an accurate map of the watercourses that form the drainage network. Previous work on the automatic identification of watercourses was typically based on grids, focused on natural landscapes, and used mostly the slope and curvature of the terrain. In this paper we focus on areas characterised by low-lying, flat, and engineered landscapes, such as are typical of the Netherlands. We propose a new methodology to identify watercourses automatically from elevation data; it uses solely a raw classified LiDAR point cloud as input. We show that by computing a skeleton of the point cloud twice (once in 2D and once in 3D), and by using the properties of the skeletons, we can identify most of the watercourses. We have implemented our methodology and tested it for three different soil types around Utrecht, the Netherlands. Compared to a reference dataset that was obtained semi-automatically, we were able to detect 98% of the watercourses for one soil type, and around 75% in the worst case.

  12. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  13. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

    LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration. Areas where topography is being derived, unfortunately, do...with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as

  14. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be overcome in order to strengthen it. One of the most important shortcomings of MLS is that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of voxelization, in order to simplify it and reduce the processing time in the segmentation step. Next, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, with an accuracy rate of over 90%. PMID:28640189
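    The voxelization step that opens the procedure can be sketched as follows; the voxel size and sample points are illustrative, and the paper's segmentation and classifiers are not reproduced here.

```python
import numpy as np

def voxelize(points, voxel=0.2):
    """Discretise an N x 3 point cloud into occupied voxels. Returns the
    unique occupied voxel indices and, for each point, the index of the
    voxel it falls in, so later stages can work per voxel instead of
    per point."""
    idx = np.floor(points / voxel).astype(int)
    idx -= idx.min(axis=0)
    voxels, inverse = np.unique(idx, axis=0, return_inverse=True)
    return voxels, inverse.ravel()

# Four nearby points share one voxel; a distant point occupies another.
pts = np.array([[0.01, 0.01, 0.01], [0.05, 0.05, 0.05],
                [0.12, 0.12, 0.12], [1.0, 1.0, 1.0]])
voxels, point_to_voxel = voxelize(pts)
```

    Replacing millions of points with a much smaller set of occupied voxels is what makes the subsequent heuristic segmentation tractable.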

  15. A cloud masking algorithm for EARLINET lidar systems

    NASA Astrophysics Data System (ADS)

    Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina

    2015-04-01

    Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for the automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of the lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed at the highest available temporal and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detections due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm and the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and highlight the current strengths and limitations of the algorithm using lidar scenes from different lidar systems at different locations across Europe.
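    The time-space edge criterion can be sketched with the Sobel operator on a normalised time-height field. This is only an illustration: the per-profile maximum normalisation and the synthetic field below are stand-ins for the paper's range-based procedure, and the magnitude criterion it is combined with is not shown.

```python
import numpy as np
from scipy import ndimage

def cloud_edge_strength(signal):
    """Normalise each time profile and compute the Sobel gradient
    magnitude over the time (axis 0) x height (axis 1) field."""
    norm = signal / signal.max(axis=1, keepdims=True)
    return np.hypot(ndimage.sobel(norm, axis=0),
                    ndimage.sobel(norm, axis=1))

# Synthetic time-height field: a weak aerosol layer and a strong cloud.
field = np.full((30, 100), 0.1)
field[:, 20:40] = 0.3            # persistent aerosol layer
field[10:15, 60:70] = 5.0        # strong, sharply bounded cloud
edges = cloud_edge_strength(field)
```

    In the algorithm these edge responses are combined with the normalised signal magnitude, which is what keeps strong but spatially smooth aerosol layers from being flagged as clouds.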

  16. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, corrosion spots of different shapes and sizes are automatically detected within a selected zone, using one of two methods depending upon the level of corrosion/defects: the first relies on a histogram-based distribution, whereas the second uses adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
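    The illumination-invariance idea can be sketched with a simple RGB-to-HSV conversion: thresholding on hue and saturation while ignoring the value (intensity) channel. The rust hue window, saturation floor, and sample colours are assumed values, not the paper's calibrated thresholds.

```python
import colorsys
import numpy as np

def corrosion_mask(rgb, hue_max=0.12, sat_min=0.4):
    """Flag rust-coloured points by hue and saturation, ignoring the
    value (intensity) channel so detection is illumination-invariant."""
    hsv = np.array([colorsys.rgb_to_hsv(*c) for c in rgb])
    return (hsv[:, 0] <= hue_max) & (hsv[:, 1] >= sat_min)

# Hypothetical per-point colours attached to the registered point cloud.
colors = np.array([[0.7, 0.3, 0.1],    # rust-like orange
                   [0.5, 0.5, 0.55],   # grey hull paint
                   [1.0, 1.0, 1.0]])   # specular highlight
mask = corrosion_mask(colors)
```

    Separating hue from intensity is what lets the same thresholds work across shaded and brightly lit parts of the hull.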

  17. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other on an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
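    Step 1, plane-shaped discontinuity extraction, can be sketched with basic RANSAC. This is a simplified stand-in for the improved RANSAC shape detection method named above; the iteration count, inlier tolerance, and synthetic data are illustrative.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit the dominant plane with basic RANSAC and return the boolean
    inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                   # degenerate (collinear) sample
        dist = np.abs((points - sample[0]) @ (normal / norm))
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 100 points on the plane z = 0 plus 20 off-plane outliers.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(0, 10, 100),
                             rng.uniform(0, 10, 100), np.zeros(100)])
outliers = np.column_stack([rng.uniform(0, 10, 20),
                            rng.uniform(0, 10, 20), rng.uniform(1, 2, 20)])
inliers = ransac_plane(np.vstack([plane_pts, outliers]))
```

    Running this repeatedly, removing each plane's inliers, would yield the discontinuity set whose pairwise intersections form block candidates.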

  18. Automatic Road Sign Inventory Using Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.

    2016-06-01

    The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information about the road network. Furthermore, time-stamped RGB imagery synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometric and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this, the RGB imagery is used: the 3D points are projected into the corresponding images, and the RGB data within the bounding box defined by the projected points is analysed. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and an F-score greater than 90%. In this way, inventory data is obtained in a fast, reliable manner, and can be applied to improve the maintenance planning of the road network or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.
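    The point-to-image step can be sketched with a generic pinhole projection; the paper's actual camera calibration and trajectory transforms are not given, so the intrinsics below are hypothetical, and points are assumed to already be in the camera frame.

```python
import numpy as np

def project_to_image(points_cam, fx, fy, cx, cy):
    """Project 3D points (camera frame, Z forward) to pixel coordinates
    with a pinhole model."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.column_stack([u, v])

# A point on the optical axis lands at the principal point; a point 1 m
# to the right at 10 m depth is offset by fx / 10 pixels.
uv = project_to_image(np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]]),
                      fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

    The bounding box of a sign's projected points would then crop the image region whose RGB content is analysed for semantic classification.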

  19. Tracing Low-Mass Star Formation in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Petr-Gotzens, Monika; Zivkov, V.; Oliveira, J.

    2017-06-01

    Star formation in low metallicity environments evidently occurs under different conditions than in our Milky Way. Lower metallicity implies a lower dust-to-gas ratio, most likely leading to less cooling efficiency in the high-density molecular cores where low mass stars are expected to form. We outline a project that aims to identify the low mass pre-main sequence populations within the Large and Small Magellanic Clouds. We developed an automatic detection algorithm that systematically analyses near-infrared colour-magnitude diagrams constructed from the VMC (VISTA Magellanic Clouds) public survey data. In this poster we present our first results, which show that we are able to detect significant numbers of PMS stars with masses down to 1.5 solar masses.

  20. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems, and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that involve building modelling. In this study, we aim at automatic 3D building model generation from airborne LiDAR data. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area in Zekeriyakoy, Istanbul, which includes partly open areas, forest areas and many types of buildings, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained on the study area verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.

  1. First results of cirrus clouds properties by means of a pollyxt raman lidar at two measurement sites

    NASA Astrophysics Data System (ADS)

    Voudouri, Kalliopi-Artemis; Giannakaki, Elina; Komppula, Mika; Balis, Dimitris

    2018-04-01

    Geometrical and optical characteristics of cirrus clouds derived from Raman lidar PollyXT measurements at different locations are presented. The PollyXT has participated in two long-term experimental campaigns, one close to New Delhi in India and one at Elandsfontein in South Africa, providing continuous measurements and covering a wide range of cloud types. First results on cirrus cloud properties at different latitudes, as well as their temporal distributions, are presented in this study. An automatic cirrus cloud detection algorithm based on the wavelet covariance transform is applied. The measurements at New Delhi were performed from March 2008 to February 2009, while at Elandsfontein measurements were performed from December 2009 to January 2011.
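    The wavelet covariance transform named above can be sketched with a Haar wavelet: the transform is strongly negative where the signal jumps up with height (a layer base) and strongly positive where it drops (a layer top). The dilation, synthetic profile, and layer heights below are illustrative, not the paper's data or parameters.

```python
import numpy as np

def haar_wct(profile, z, a):
    """Haar wavelet covariance transform of a lidar profile with dilation
    `a`; strong negative values mark sharp signal increases (layer base),
    strong positive values mark decreases (layer top)."""
    dz = z[1] - z[0]
    half = int(round(a / (2 * dz)))
    wct = np.zeros_like(profile)
    for i in range(half, len(profile) - half):
        below = profile[i - half:i].sum()   # Haar wavelet = +1 below b
        above = profile[i:i + half].sum()   # Haar wavelet = -1 above b
        wct[i] = (below - above) * dz / a
    return wct

# Synthetic profile: decaying background plus a cirrus-like layer.
z = np.arange(0.0, 12.0, 0.03)              # height (km)
profile = np.exp(-z / 4.0)
profile[(z > 8.0) & (z < 8.6)] += 2.0       # cloud layer at 8.0-8.6 km
wct = haar_wct(profile, z, a=0.3)
```

    Thresholding the transform's extrema yields cloud base and top heights, i.e., the geometrical properties reported in the study.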

  2. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing, because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for the automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and for alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions in the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration integrating the existing, well-known ICP algorithm with the GA is further proposed to accelerate the optimization; its optimization time decreases by about 50%. PMID:28850100

  3. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space of the GA. A new fitness function for evaluating candidate solutions, named the Normalized Sum of Matching Scores, is proposed for accurate registration. The method comprises five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration scheme integrating the well-known ICP algorithm with the GA is further proposed to accelerate the optimization; its optimization time decreases by about 50%.
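
    The GA registration idea can be sketched in a toy 2-D analogue. The fitness below is a plain negative RMSE over known point correspondences, not the paper's Normalized Sum of Matching Scores, and the search bounds, population size, and genetic operators are illustrative assumptions.

```python
# Toy 2-D rigid registration by a genetic algorithm: selection of an elite,
# arithmetic crossover, and Gaussian mutation over (rotation, tx, ty).
import math
import random

def transform(points, theta, tx, ty):
    """Apply a 2-D rigid transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def fitness(params, src, dst):
    """Negative RMSE between the transformed source and the target points."""
    moved = transform(src, *params)
    err = sum((mx - dx) ** 2 + (my - dy) ** 2
              for (mx, my), (dx, dy) in zip(moved, dst))
    return -math.sqrt(err / len(src))

def ga_register(src, dst, pop_size=60, generations=120, seed=0):
    rng = random.Random(seed)
    # Bounded search space, loosely analogous to the paper's GPS-position and
    # quasi-horizontal-orientation constraints that narrow the GA search.
    def random_individual():
        return (rng.uniform(-math.pi, math.pi),
                rng.uniform(-5.0, 5.0),
                rng.uniform(-5.0, 5.0))
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, src, dst), reverse=True)
        elite = population[:pop_size // 4]            # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # Arithmetic crossover plus Gaussian mutation.
            children.append(tuple((x + y) / 2 + rng.gauss(0.0, 0.05)
                                  for x, y in zip(a, b)))
        population = elite + children
    return max(population, key=lambda p: fitness(p, src, dst))
```

    With known correspondences a closed-form solution exists, so this sketch only illustrates how a constrained GA explores the transform parameters the way the paper describes.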

  4. The Optical Gravitational Lensing Experiment. Eclipsing Binary Stars in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M. K.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2004-03-01

    We present a new version of the OGLE-II catalog of eclipsing binary stars detected in the Small Magellanic Cloud, based on the Difference Image Analysis catalog of variable stars in the Magellanic Clouds, which contains data collected from 1997 to 2000. We found 1351 eclipsing binary stars in the central 2.4 square degree area of the SMC; 455 of these are newly discovered objects not found in the previous release of the catalog. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. The full catalog is accessible from the OGLE Internet archive.

  5. An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.

    PubMed

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2016-11-01

    Epilepsy is one of the most common neurological disorders, characterized by the spontaneous and unforeseeable occurrence of seizures. Automatic prediction of seizures can protect patients from accidents and save their lives. In this article, we propose a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients. Cloud-based services are used to collect and analyze the EEG data from the patient's mobile phone. Features are extracted from the EEG signal using the fast Walsh-Hadamard transform (FWHT). Higher Order Spectral Analysis (HOSA) is applied to the FWHT coefficients to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently use the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model is tested on the Amazon EC2 cloud and compared in terms of execution time and accuracy. The findings show that with the selected HOS-based features, we were able to achieve a classification accuracy of 94.6%.
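
    The feature-extraction step rests on the fast Walsh-Hadamard transform, which can be sketched in a few lines (the sequence length must be a power of two); the butterfly structure below is the standard FWHT, independent of this paper's EEG pipeline.

```python
# Fast Walsh-Hadamard transform via in-place butterflies, O(n log n).

def fwht(x):
    """Return the Walsh-Hadamard transform of x; len(x) must be 2**k."""
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired coefficients.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

    The transform coefficients (here unnormalized; fwht(fwht(x)) returns n*x) would then feed the HOSA-based feature selection described above.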

  6. Hyperspectrally-Resolved Surface Emissivity Derived Under Optically Thin Clouds

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2010-01-01

    Surface spectral emissivity derived from current and future satellites can and will reveal critical information about the Earth's ecosystem and land surface type properties, which can be used for long-term monitoring of global environment and climate change. Hyperspectrally-resolved surface emissivities are derived with an algorithm that combines a fast radiative transfer model (RTM) with a molecular RTM and a cloud RTM, accounting for both atmospheric absorption and cloud absorption/scattering. Clouds are automatically detected and cloud microphysical parameters are retrieved; emissivity is then retrieved under clear and optically thin cloud conditions. The technique separates surface emissivity from skin temperature by representing the emissivity spectrum with eigenvectors derived from a laboratory-measured emissivity database; in other words, this constraint forces the emissivity to vary smoothly across atmospheric absorption lines. Here we present the emissivity derived under optically thin clouds in comparison with that under clear conditions.

  7. Development of dual-wavelength Mie polarization Raman lidar for aerosol and cloud vertical structure probing

    NASA Astrophysics Data System (ADS)

    Wang, Zhenzhu; Liu, Dong; Wang, Yingjian; Wang, Bangxin; Zhong, Zhiqing; Xie, Chenbo; Wu, Decheng; Bo, Guangyu; Shao, Jie

    2014-11-01

    A dual-wavelength Mie polarization Raman lidar has been developed for measuring cloud and aerosol optical properties. The lidar system was built in Hefei and passed its performance assessment in 2012; it was then moved to Jinhua city to carry out long-term continuous measurements of the vertical distribution of regional cloud and aerosol. A dual-wavelength (532 and 1064 nm) Nd:YAG laser is employed as the emitting source, and four channels detect the backscattered signals from atmospheric aerosol and cloud: a 1064 nm Mie channel, a 607 nm N2 Raman channel, and two orthogonal 532 nm polarization channels. The system operates continuously (24/7) in automatic mode, with temporal and spatial resolutions of 30 s and 7.5 m, respectively. The measured data are used to investigate the aerosol and cloud vertical structure and cloud phase by combining cloud signal intensity, polarization ratio, and color ratio.

  8. Automatic photointerpretation for land use management in Minnesota

    NASA Technical Reports Server (NTRS)

    Swanlund, G. D. (Principal Investigator); Pile, D. R.

    1973-01-01

    The author has identified the following significant results. Primary conclusions from the lake acreage study are: (1) The ERTS-1 band 7 density range of 0-5 reliably indicates open water down to 2 acre size. (2) The density range 6-9 identifies swamps. (3) The depth of the water could not be determined. (4) Cloud shadows can be misread as lakes unless the clouds are detected. (5) ERTS-1 data would provide the information for classifying lakes and for monitoring fluctuations in lake area.

  9. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud covers are generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, including image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to handle, and the environment changes dynamically; using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which avoids the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. Accordingly, the Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on the physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is therefore based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images; consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), containing landscapes of agriculture, snow areas, and islands, are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
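
    The physically motivated screening tests that ACCA and Fmask build on can be illustrated with a hypothetical feature extractor; the band names and threshold constants below are assumptions for illustration, not the published ACCA/Fmask values.

```python
# Sketch of ACCA/Fmask-style physical tests used as classifier features.
# Threshold values and band names here are illustrative assumptions.

def cloud_features(pixel):
    """pixel: dict with top-of-atmosphere reflectances ('green', 'red',
    'swir') and a thermal brightness temperature in kelvin ('bt_thermal')."""
    ndsi = (pixel["green"] - pixel["swir"]) / (pixel["green"] + pixel["swir"])
    return {
        "brightness_test": pixel["red"] > 0.08,        # clouds are bright
        "temperature_test": pixel["bt_thermal"] < 300,  # clouds are cold
        "ndsi_test": ndsi < 0.8,                        # screen out snow
    }
```

    In the proposed method such boolean or continuous test outputs would be stacked, together with spatial and temporal features, into the vector fed to the SVM rather than hard-thresholded directly.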

  10. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121

  11. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
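
    The baseline idea reduces to a simple computation: measure all pairwise distances between feature points in each epoch and flag the baselines whose length changed beyond a tolerance. The coordinates and tolerance below are illustrative; since baseline lengths are invariant under rigid motion, no registration between epochs is needed.

```python
# Registration-free change detection via baselines (pairwise distances
# between feature points), as described in the abstract above.
import itertools
import math

def baselines(points):
    """Lengths of all baselines between feature points, keyed by index pair."""
    return {
        (i, j): math.dist(points[i], points[j])
        for i, j in itertools.combinations(range(len(points)), 2)
    }

def changed_baselines(epoch1, epoch2, tolerance):
    """Baselines whose length changed by more than `tolerance` between epochs.
    Points must correspond index-to-index (same target / virtual point)."""
    b1, b2 = baselines(epoch1), baselines(epoch2)
    return {k: b2[k] - b1[k] for k in b1 if abs(b2[k] - b1[k]) > tolerance}
```

    A changed baseline implicates motion of at least one of its two endpoints, which is then localized against the structural coordinate system.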

  12. A search for the 13175 A infrared diffuse band in dense environments

    NASA Technical Reports Server (NTRS)

    Adamson, A. J.; Kerr, Tom H.; Whittet, D. C. B.; Duley, Walter W.

    1994-01-01

    Models of ionized interstellar C60 predict a strong transition in the 1.2 micrometer region, and two candidate bands have recently been detected in reddened stars. We have searched for the stronger of these bands (at 13175 A) in the Taurus dark cloud complex, to determine its response to the dark-cloud environment. None of the three lines of sight studied (two near the cloud surface, one reaching A(sub V) greater than 20(sup m)) give rise to a detectable band; in one case the equivalent width is a factor of order three below that predicted. Since such behaviour is also shown by the optical Diffuse Interstellar Bands, we suggest that the 13175 A band is a genuine DIB, but we caution against an automatic interpretation in terms of an ionic carrier.

  13. A Cloud-Based System for Automatic Hazard Monitoring from Sentinel-1 SAR Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Arko, S. A.; Hogenson, K.; McAlpin, D. B.; Whitley, M. A.

    2017-12-01

    Despite the all-weather capabilities of Synthetic Aperture Radar (SAR), and its high performance in change detection, the application of SAR for operational hazard monitoring was limited in the past. This has largely been due to high data costs, slow product delivery, and limited temporal sampling associated with legacy SAR systems. Only since the launch of ESA's Sentinel-1 sensors have routinely acquired and free-of-charge SAR data become available, allowing—for the first time—for a meaningful contribution of SAR to disaster monitoring. In this paper, we present recent technical advances of the Sentinel-1-based SAR processing system SARVIEWS, which was originally built to generate hazard products for volcano monitoring centers. We outline the main functionalities of SARVIEWS including its automatic database interface to Sentinel-1 holdings of the Alaska Satellite Facility (ASF), and its set of automatic processing techniques. Subsequently, we present recent system improvements that were added to SARVIEWS and allowed for a vast expansion of its hazard services; specifically: (1) In early 2017, the SARVIEWS system was migrated into the Amazon Cloud, providing access to cloud capabilities such as elastic scaling of compute resources and cloud-based storage; (2) we co-located SARVIEWS with ASF's cloud-based Sentinel-1 archive, enabling the efficient and cost effective processing of large data volumes; (3) we integrated SARVIEWS with ASF's HyP3 system (http://hyp3.asf.alaska.edu/), providing functionality such as subscription creation via API or map interface as well as automatic email notification; (4) we automated the production chains for seismic and volcanic hazards by integrating SARVIEWS with the USGS earthquake notification service (ENS) and the USGS eruption alert system. 
Email notifications from both services are parsed and subscriptions are automatically created when certain event criteria are met; (5) finally, SARVIEWS-generated hazard products are now being made available to the public via the SARVIEWS hazard portal. These improvements have led to the expansion of SARVIEWS toward a broader set of hazard situations, now including volcanoes, earthquakes, and severe weather. We provide details on newly developed techniques and show examples of disasters for which SARVIEWS was invoked.

  14. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas such as defense, security, and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage with two methods is introduced. First, noisy detection results are rejected based on target size. Second, the waterline is detected using the Hough transform, and detection results located above the waterline, allowing a small margin, are rejected. After the post-processing stage, undesired holes may still remain, which cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested on real-world infrared data.
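
    The two post-processing rejection rules can be sketched as follows; the region representation (area and centroid row) and the threshold values are illustrative assumptions, not the paper's parameters.

```python
# Post-processing sketch: reject candidate regions that are too small
# (likely noise) or that lie above the detected waterline (sky clutter
# such as clouds). Image rows increase downward, so smaller centroid_y
# means higher in the image.

def post_process(regions, min_area, waterline_y, margin):
    """regions: list of dicts with 'area' (pixels) and 'centroid_y' (row)."""
    kept = []
    for r in regions:
        if r["area"] < min_area:
            continue                       # rule 1: size-based noise rejection
        if r["centroid_y"] < waterline_y - margin:
            continue                       # rule 2: above-waterline rejection
        kept.append(r)
    return kept
```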

  15. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    NASA Astrophysics Data System (ADS)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires previous knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on the optimal thresholding, which is applied globally on the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point to initiate its process and is fast in the sense that coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.
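
    The optimal-thresholding step can be illustrated with a plain Otsu threshold applied slice by slice to the vesselness values; the histogram bin count and the synthetic slice in the test are assumptions for illustration, not the paper's exact estimator.

```python
# Slice-by-slice optimal thresholding sketch: an Otsu threshold (maximum
# between-class variance) is computed per slice and applied to it.

def otsu_threshold(values, bins=64):
    """Threshold maximizing between-class variance over a histogram."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += (lo + (i + 0.5) * width) * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

def segment_volume(slices):
    """Apply the per-slice optimal threshold, slice by slice."""
    return [[v >= otsu_threshold(s) for v in s] for s in slices]
```

    In the actual method the thresholded quantity is the Hessian-based vesselness measure rather than raw intensity, which is what keeps the tracking confined to tubular structures.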

  16. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  17. A Bispectral Composite Threshold Approach for Automatic Cloud Detection in VIIRS Imagery

    NASA Technical Reports Server (NTRS)

    LaFontaine Frank J.; Jedlovec, Gary J.

    2015-01-01

    The detection of clouds in satellite imagery has a number of important applications in weather and climate studies. The presence of clouds can alter the energy budget of the Earth-atmosphere system through scattering and absorption of shortwave radiation and the absorption and re-emission of infrared radiation at longer wavelengths. The scattering and absorption characteristics of clouds vary with their microphysical properties, and hence with cloud type. Thus, detecting the presence of clouds over a region in satellite imagery is important for deriving atmospheric or surface parameters that give insight into weather and climate processes. For many applications, however, clouds are a contaminant whose presence interferes with retrieving atmosphere or surface information. In these cases, it is important to separate cloud-free pixels, which are used to retrieve atmospheric thermodynamic information or surface geophysical parameters, from cloudy ones. This abstract describes an application of a two-channel bispectral composite threshold (BCT) approach to VIIRS imagery. The simplified BCT approach uses only the 10.76 and 3.75 micrometer spectral channels from VIIRS in two spectral tests: a straightforward infrared threshold test with the longwave channel, and a shortwave-longwave channel difference test. The key to the success of this approach, as demonstrated in past applications to GOES and MODIS data, is the generation of temporally and spatially dependent test thresholds from observations at similar times over a number of previous days. The paper and subsequent presentation will present an overview of the approach and intercomparison results with other satellites and methods, and against verification data.

  18. Automatic airline baggage counting using 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation of a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so the edge can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed under different placement modes proves the validity of the method.

  19. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.

    PubMed

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-12-16

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noise, and traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. First, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; second, by effectively combining the improved fuzzy membership function with sparse representation-based classification (SRC), the atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  20. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate its effectiveness, experiments were performed on the common benchmark KDD Cup dataset and on a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform.
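
    The underlying SOM machinery can be sketched minimally. This toy 1-D map approximates the heuristic initialization by spreading units across the data range instead of random initialization, and it omits IISOM's weighted distance and neighborhood search; all sizes and rates are illustrative.

```python
# Minimal 1-D self-organizing map for anomaly scoring: train units on
# normal data, then score new samples by quantization error.
import math
import random

def train_som(data, n_units, epochs=30, seed=1):
    """data: list of equal-length numeric tuples; returns trained units."""
    dim = len(data[0])
    lo = [min(v[d] for v in data) for d in range(dim)]
    hi = [max(v[d] for v in data) for d in range(dim)]
    # Non-random initialization: spread units across the data range.
    units = [[lo[d] + (hi[d] - lo[d]) * u / (n_units - 1) for d in range(dim)]
             for u in range(n_units)]
    rng = random.Random(seed)
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)                  # decaying rate
        radius = max(1.0, (n_units / 2) * (1.0 - epoch / epochs))
        for x in rng.sample(data, len(data)):              # shuffled pass
            bmu = min(range(n_units), key=lambda u: math.dist(units[u], x))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2.0 * radius ** 2))
                for d in range(dim):
                    units[u][d] += lr * h * (x[d] - units[u][d])
    return units

def anomaly_score(units, x):
    """Quantization error to the best matching unit; large means anomalous."""
    return min(math.dist(u, x) for u in units)
```

    A sample far from every trained unit gets a high quantization error and is flagged as a performance anomaly.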

  1. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate its effectiveness, experiments were performed on the common benchmark KDD Cup dataset and on a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245

  2. The use of LIDAR Technology for Measuring Mixing Heights under the Photochemical Assessment Monitoring Program; leveraging research under the joint DISCOVER-AQ/FRAPPÉ Missions

    EPA Science Inventory

    The operational use of ceilometers across the United States has been limited to detection of cloud-base heights across the Automatic Surface Observing Systems (ASOS) primarily operated by the National Weather Service and the Federal Aviation Administration. Continued improvements...

  3. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    ERIC Educational Resources Information Center

    Sun, Shaohui

    2013-01-01

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…

  4. Automatic analysis of stereoscopic satellite image pairs for determination of cloud-top height and structure

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.

    1991-01-01

    Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields indicate that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diameter clouds to about 1500 m in the vertical.
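
    The basic stereo relation behind such height retrievals is that cloud-top height follows from the measured parallax between the two co-navigated views and the two viewing elevation angles. The simplified flat-Earth form and the numbers in the test are illustrative assumptions, not this paper's full geometry.

```python
# Simplified stereo height relation: the apparent displacement (parallax) of
# a cloud between two views scales with height via the difference of the
# cotangents of the viewing elevation angles.
import math

def cloud_top_height(parallax_km, elev1_deg, elev2_deg):
    """Height (km) from parallax (km, along the epipolar direction) and the
    elevation angles (degrees) of the two satellite views."""
    cot1 = 1.0 / math.tan(math.radians(elev1_deg))
    cot2 = 1.0 / math.tan(math.radians(elev2_deg))
    return parallax_km / abs(cot1 - cot2)
```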

  5. Progressive data transmission for anatomical landmark detection in a cloud.

    PubMed

    Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D

    2012-01-01

    In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only the image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse-level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. In this way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy-compressed with JPEG 2000. Together, these properties amount to at least a 30-fold bandwidth reduction while achieving accuracy similar to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
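
    The coarse-to-fine progressive loop can be sketched generically; `fetch_region` and `detect_candidates` below are hypothetical placeholders for the data-center request and the trained detector, respectively.

```python
# Progressive coarse-to-fine detection sketch: only neighborhoods of the
# current candidates are requested (transmitted) at each finer level.

def detect_progressively(fetch_region, detect_candidates, levels):
    """fetch_region(level, centre) -> image region at that resolution level
    (centre=None requests the full coarse view);
    detect_candidates(region) -> list of landmark location candidates."""
    candidates = detect_candidates(fetch_region(0, None))  # coarse full view
    for level in range(1, levels):
        refined = []
        for c in candidates:
            region = fetch_region(level, c)   # transmit only this neighborhood
            refined.extend(detect_candidates(region))
        candidates = refined
    return candidates
```

    The bandwidth saving comes from the loop structure itself: full-volume data is only ever fetched at the coarsest level, while finer levels fetch small candidate neighborhoods.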

  6. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  7. Laser-based structural sensing and surface damage detection

    NASA Astrophysics Data System (ADS)

    Guldur, Burcu

    Damage due to age or accumulated damage from hazards poses a worldwide problem for existing structures. In order to evaluate aging, deteriorating and damaged structures, it is vital to accurately assess their present condition. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets such as the location, orientation and size of objects in a scanned region, and the location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented; then, once the objects from the model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges.
The second objective of this research is to combine the information extracted from laser scanner data with color information, which adds a fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using laser scanner data alone; the color information also helps to track volumetric changes on structures such as spalling. Although using images of varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types, collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with measurements taken from test specimens and test-bed bridges.

  8. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds and close-range images is the key step in high-precision 3D reconstruction of cultural relics. Because the cultural relic field currently demands high texture resolution, registering point cloud and image data for object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the pairwise registration of the two data types is achieved by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding 2D points in the image and the point cloud; this process not only greatly reduces working efficiency but also limits registration accuracy and causes texture seams in the coloured point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses matching techniques to automatically establish the one-to-one correspondence between the point cloud and multiple images. Matching the reflectance-intensity image obtained by central projection of the point cloud against the optical image automatically matches corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection achieves high-accuracy automatic registration of the two data types. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.

  9. Automated detection of Martian water ice clouds: the Valles Marineris

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu

    2016-10-01

    Water ice clouds must be extracted from the large number of Mars images in order to reveal spatial and temporal variations in their occurrence and to understand their climatology meteorologically. However, the visible images observed by Mars orbiters over several years are too numerous to inspect visually, even when the inspection is limited to one region. An automated detection algorithm for Martian water ice clouds is therefore necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not previously been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from each pair of reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated, and the same statistics were computed for the cross-correlation distribution. These eight statistics were used as feature vectors for training a support vector machine (SVM), whose generalization ability was tested using 10-fold cross-validation. F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.
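    The per-distribution statistics are straightforward to compute; a sketch with synthetic data (the actual features come from MOC blue-band image pairs):

```python
import numpy as np

def shape_statistics(a):
    """The four per-image statistics used as features: (max - mean),
    variance, excess kurtosis and skewness of the pixel values."""
    a = np.asarray(a, dtype=float).ravel()
    mu, sigma = a.mean(), a.std()
    z = (a - mu) / sigma
    return np.array([a.max() - mu, a.var(), (z**4).mean() - 3.0, (z**3).mean()])

rng = np.random.default_rng(1)
clear = rng.normal(0.0, 0.01, (32, 32))    # difference image with no cloud change
cloudy = clear.copy()
cloudy[10:14, 10:14] += 0.5                # bright patch: a transient cloud

f_clear, f_cloudy = shape_statistics(clear), shape_statistics(cloudy)
# Concatenating such vectors for the subtracted image and the cross-correlation
# distribution gives the 8-D feature vector fed to the SVM.
print(f_cloudy[0] > f_clear[0])   # True: cloud raises the max-minus-mean feature
print(f_cloudy[2] > f_clear[2])   # True: the heavy tail raises kurtosis
```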

  10. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help with parts of this otherwise manual assessment process. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the algorithms are then run privately by the benchmark administrators to objectively compare their performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.
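    As an illustration of the evaluation side, the Dice coefficient is the standard overlap score for comparing a segmentation against a manual annotation (the benchmarks report several metrics; this sketch shows only Dice):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between a binary segmentation and the reference annotation:
    2|A∩B| / (|A| + |B|), 1.0 for a perfect match."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom

ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1                      # 16-voxel reference organ
seg = np.zeros((8, 8), dtype=int)
seg[3:7, 2:6] = 1                      # prediction shifted one row down
print(dice(seg, ref))                  # 0.75: 12 overlapping voxels, 16 + 16 total
```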

  11. A multiscale curvature algorithm for classifying discrete return LiDAR in forested environments

    Treesearch

    Jeffrey S. Evans; Andrew T. Hudak

    2007-01-01

    One prerequisite to the use of light detection and ranging (LiDAR) across disciplines is differentiating ground from nonground returns. The objective was to automatically and objectively classify points within unclassified LiDAR point clouds, with few model parameters and minimal postprocessing. Presented is an automated method for classifying LiDAR returns as ground...

  12. Individual Rocks Segmentation in Terrestrial Laser Scanning Point Cloud Using Iterative Dbscan Algorithm

    NASA Astrophysics Data System (ADS)

    Walicka, A.; Jóźków, G.; Borkowski, A.

    2018-05-01

    Fluvial transport is an important aspect of hydrological and geomorphologic studies. Knowledge about the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex issue consisting of several stages of data processing. In this study a methodology for individual rock segmentation from a TLS point cloud is proposed, which is the first step of a semi-automatic algorithm for movement detection of individual rocks. The proposed algorithm is executed in two steps. Firstly, the point cloud is classified into rocks and background using only geometrical information. Secondly, the DBSCAN algorithm is executed iteratively on the points classified as rocks until only one rock is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, several segments that correspond to individual rocks are formed. Numerical tests were executed on two test samples. The results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76 % and 72 % of the rocks in test sample 1 and test sample 2, respectively.
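    The iterative idea can be sketched as follows. The DBSCAN implementation is a minimal illustration, and a simple spatial-extent test stands in for the PCA/peak-detection criterion the paper uses to decide whether a segment still holds more than one rock:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: one label per point, -1 for noise."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                       # expand the cluster from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

def iterative_segment(points, eps, min_pts, max_extent):
    """Re-cluster any segment judged to contain more than one rock with a
    tighter eps (here the judgement is a crude spatial-extent test)."""
    labels = dbscan(points, eps, min_pts)
    segments = []
    for c in range(labels.max() + 1):
        seg = points[labels == c]
        extent = seg[:, 0].max() - seg[:, 0].min()
        if extent > max_extent and eps > 0.1:
            segments += iterative_segment(seg, eps / 2, min_pts, max_extent)
        else:
            segments.append(seg)
    return segments

# Two rocks lying close together along the x axis (5 TLS points each).
rocks = np.array([[x, 0.0] for x in (0.0, 0.2, 0.4, 0.6, 0.8)]
                 + [[x, 0.0] for x in (2.3, 2.5, 2.7, 2.9, 3.1)])
segments = iterative_segment(rocks, eps=2.0, min_pts=3, max_extent=1.5)
print(len(segments))        # 2: the merged first-pass cluster was re-split
```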

  13. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    PubMed Central

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring, and cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Moreover, satellite observations are susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, making it hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  14. Department of Defense Chemical and Biological Defense Program. Volume I: Annual Report to Congress

    DTIC Science & Technology

    2002-04-01

    The M21 RSCAAL is an automatic scanning, passive infrared sensor that detects nerve (GA - tabun, GB - sarin, and GD) and blister (H and L) agent vapor clouds based on...

  15. Military Role in Countering Terrorist Use of Weapons of Mass Destruction

    DTIC Science & Technology

    1999-04-01

    Chemical and biological mobile point detection: "The M21 Remote Sensing Chemical Agent Alarm (RSCAAL) is an automatic scanning, passive infrared sensor... The M21 detects nerve and blister agent clouds based on changes in the background infrared spectra caused by the presence of the agent vapor."

  16. The Optical Gravitational Lensing Experiment. Eclipsing Binary Stars in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2003-03-01

    We present the catalog of 2580 eclipsing binary stars detected in 4.6 square degree area of the central parts of the Large Magellanic Cloud. The photometric data were collected during the second phase of the OGLE microlensing search from 1997 to 2000. The eclipsing objects were selected with the automatic search algorithm based on an artificial neural network. Basic statistics of eclipsing stars are presented. Also, the list of 36 candidates of detached eclipsing binaries for spectroscopic study and for precise LMC distance determination is provided. The full catalog is accessible from the OGLE Internet archive.

  17. The potential of using Landsat time-series to extract tropical dry forest phenology

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Helmer, E.

    2016-12-01

    Vegetation phenology is the timing of seasonal developmental stages in plant life cycles. Due to the persistent cloud cover in tropical regions, current studies often use high-frequency satellite data, such as AVHRR and MODIS, to detect vegetation phenology. However, the spatial resolution of these data is from 250 m to 1 km, which lacks sufficient spatial detail and is difficult to relate to field observations. To produce maps of phenology at a finer spatial resolution, this study explores the feasibility of using Landsat images to detect tropical forest phenology by reconstructing a high-quality, seasonal time-series of images, tested on Mona Island, Puerto Rico. First, an automatic method was applied to detect cloud and cloud shadow, and a spatial interpolator was used to recover pixels covered by clouds, shadows, and SLC-off gaps. Second, enhanced vegetation index time-series derived from the reconstructed Landsat images were used to detect 11 phenology variables. The detected phenology is consistent with field investigations, and its spatial pattern is consistent with the rainfall distribution on the island. In addition, because phenology should correlate with forest biophysical attributes, 47 plots with field measurements of biophysical attributes were used to indirectly validate the phenology product. Results show that the phenology variables explain much of the variation in biophysical attributes. This study suggests that Landsat time-series have great potential for detecting phenology in tropical areas.
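    The enhanced vegetation index underlying the time-series is a fixed band ratio; a sketch with hypothetical reflectance values:

```python
def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard MODIS/Landsat coefficients:
    EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Hypothetical surface reflectances for a tropical dry-forest pixel:
wet_season = evi(nir=0.40, red=0.05, blue=0.03)   # leaf-on canopy
dry_season = evi(nir=0.25, red=0.08, blue=0.04)   # leaf-off canopy
print(wet_season > dry_season)  # True: the seasonal drop is the phenology signal
```

Tracking such values through a reconstructed time-series is what yields phenology variables like green-up and senescence dates.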

  18. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one method is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479


  20. Auspice: Automatic Service Planning in Cloud/Grid Environments

    NASA Astrophysics Data System (ADS)

    Chiu, David; Agrawal, Gagan

    Recent scientific advances have fostered a mounting number of services and data sets available for utilization. These resources, though scattered across disparate locations, are often loosely coupled both semantically and operationally. This loosely coupled relationship implies the possibility of linking together operations and data sets to answer queries. This task, generally known as automatic service composition, abstracts the process of complex scientific workflow planning from the user. We have been exploring a metadata-driven approach toward automatic service workflow composition, among other enabling mechanisms, in our system, Auspice: Automatic Service Planning in Cloud/Grid Environments. In this paper, we present a complete overview of our system's unique features and outlook for future deployment as the Cloud computing paradigm becomes increasingly prominent in enabling scientific computing.

  1. 3D Scanning of Live Pigs System and its Application in Body Measurements

    NASA Astrophysics Data System (ADS)

    Guo, H.; Wang, K.; Su, W.; Zhu, D. H.; Liu, W. L.; Xing, Ch.; Chen, Z. R.

    2017-09-01

    The shape of a live pig is an important indicator of its health and value, whether for breeding or for carcass quality. This paper implements a prototype system for 3D surface scanning of a single live pig based on two consumer depth cameras and 3D point cloud data. The cameras are calibrated in advance to share a common coordinate system. A live 3D point cloud stream of a moving single pig is obtained by two Xtion Pro Live sensors from different viewpoints simultaneously. A novel detection method is proposed and applied to automatically detect the frames containing pigs with the correct posture from the point cloud stream, according to the geometric characteristics of the pig's shape. The proposed method is incorporated in a hybrid scheme that serves as the preprocessing step in a body measurements framework for pigs. Experimental results show the portability of our scanning system and the effectiveness of our detection method. Furthermore, an updated version of this point cloud preprocessing software for livestock body measurements can be downloaded freely from https://github.com/LiveStockShapeAnalysis by the livestock industry and research community and can be used for monitoring livestock growth status.

  2. UAS-based automatic bird count of a common gull colony

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2013-08-01

    The standard procedure to count birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (±5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Based on the experiences of 2011, the automatic bird count for 2012 became more efficient and accurate: 1938 birds were counted with an accuracy of approximately ±3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.

  3. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    NASA Astrophysics Data System (ADS)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage over traditional methods, the approach does not rely on thermal bands and is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between high resolution images and cloud free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different type and size over various land surfaces including natural vegetation, agricultural land, built-up areas, water bodies and snow.
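    The seed-and-grow coupling can be sketched on a toy grid: seeds are pixels much brighter than the cloud-free composite, and the region grows into neighbours that pass a weaker test (thresholds here are illustrative, not the paper's):

```python
import numpy as np
from collections import deque

def cloud_mask(hires, composite, seed_thr, grow_thr):
    """Seed pixels: much brighter than the cloud-free composite; then grow
    each seed region into 4-neighbours exceeding a weaker threshold."""
    diff = hires - composite
    mask = diff > seed_thr
    queue = deque(zip(*np.where(mask)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < diff.shape[0] and 0 <= nx < diff.shape[1]
                    and not mask[ny, nx] and diff[ny, nx] > grow_thr):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

composite = np.full((6, 6), 0.1)          # cloud-free reference reflectance
hires = composite.copy()
hires[2, 2] = 0.9                         # bright cloud core (seed)
hires[2, 3] = hires[3, 2] = 0.4           # dimmer cloud edge (grown, not seeded)
mask = cloud_mask(hires, composite, seed_thr=0.5, grow_thr=0.2)
print(int(mask.sum()))                    # 3: core plus the two grown edge pixels
```

The two-threshold design is what lets thin cloud edges join a mask without letting every moderately bright pixel seed a false detection.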

  4. Application of the SRI cloud-tracking technique to rapid-scan GOES observations

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Endlich, R. M.

    1980-01-01

    An automatic cloud tracking system was applied to multilayer clouds associated with severe storms. The method was tested using rapid scan observations of Hurricane Eloise obtained by the GOES satellite on 22 September 1975. Cloud tracking was performed using clustering based either on visible or infrared data. The clusters were tracked using two different techniques. At 4 km and 8 km resolution, the automatic system yielded results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System.
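    Tracking a cluster between frames by maximizing cross-correlation over candidate displacements can be sketched as follows (a brute-force toy version; the operational system works on clustered visible/infrared imagery):

```python
import numpy as np

def track_offset(frame1, frame2):
    """Find the displacement of a cloud pattern between two frames by
    maximizing the cross-correlation over integer shifts (brute force)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            # Shift frame2 back by the candidate motion and score the overlap.
            shifted = np.roll(np.roll(frame2, -dy, axis=0), -dx, axis=1)
            score = np.sum(frame1 * shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

frame1 = np.zeros((16, 16))
frame1[5:8, 5:8] = 1.0                                   # bright cloud cluster
frame2 = np.roll(np.roll(frame1, 2, axis=0), 1, axis=1)  # cluster moved (2, 1)
print(track_offset(frame1, frame2))                      # (2, 1)
```

Dividing the recovered displacement by the rapid-scan interval gives the cloud motion vector.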

  5. A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures

    NASA Astrophysics Data System (ADS)

    Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.

    2017-10-01

    An auto-installing tool on a USB drive allows quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP Collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.

  6. An automatic locating system for cloud-to-ground lightning. [which utilizes a microcomputer

    NASA Technical Reports Server (NTRS)

    Krider, E. P.; Pifer, A. E.; Uman, M. A.

    1980-01-01

    Automatic locating systems which respond to cloud to ground lightning and which discriminate against cloud discharges and background noise are described. Subsystems of the locating system, which include the direction finder and the position analyzer, are discussed. The direction finder senses the electromagnetic fields radiated by lightning on two orthogonal magnetic loop antennas and on a flat plate electric antenna. The position analyzer is a preprogrammed microcomputer system which automatically computes, maps, and records lightning locations in real time using data inputs from the direction finder. The use of the locating systems for wildfire management and fire weather forecasting is discussed.

  7. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops.

    PubMed

    Bengochea-Guevara, José M; Andújar, Dionisio; Sanchez-Sardana, Francisco L; Cantuña, Karla; Ribeiro, Angela

    2017-12-24

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows.

  9. Automatic epileptic seizure detection in EEGs using MF-DFA, SVM based on cloud computing.

    PubMed

    Zhang, Zhongnan; Wen, Tingxi; Huang, Wei; Wang, Meihong; Li, Chunfeng

    2017-01-01

    Epilepsy is a chronic disease involving transient brain dysfunction that results from the sudden abnormal discharge of neurons in the brain. Since electroencephalography (EEG) is a harmless and noninvasive detection method, it plays an important role in the detection of neurological diseases. However, analyzing EEG to detect neurological diseases is often difficult because brain electrical signals are random, non-stationary and nonlinear. To overcome this difficulty, this study develops a new computer-aided scheme for automatic epileptic seizure detection in EEGs based on multi-fractal detrended fluctuation analysis (MF-DFA) and a support vector machine (SVM). The scheme first extracts features from the EEG by MF-DFA. It then applies a genetic algorithm (GA) to calculate the parameters used in the SVM and classifies the training data according to the selected features. Finally, the trained SVM classifier is used to detect neurological diseases. The algorithm uses the MLlib library of Spark and runs on a cloud platform. Applied to a public dataset, the results show that the new feature extraction method and scheme can detect signals with fewer features, with classification accuracy reaching 99%. MF-DFA is a promising approach to feature extraction for EEG analysis because of its simple procedure and few parameters. The features obtained by MF-DFA represent samples as well as traditional wavelet transforms and Lyapunov exponents do. Given enough execution time, GA can always find useful parameters for the SVM. The results illustrate that the classification model achieves comparable accuracy, which means that it is effective in epileptic seizure detection.
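    The MF-DFA feature-extraction stage can be sketched as follows. This is an illustrative, simplified implementation, not the authors' Spark/MLlib code; the scale range and q-orders are assumptions:

```python
import numpy as np

def mfdfa(signal, scales, qs):
    """Generalized Hurst exponents h(q) via multifractal DFA (order-1 detrending)."""
    profile = np.cumsum(signal - np.mean(signal))  # integrate the mean-removed signal
    hq = []
    for q in qs:
        log_F = []
        for s in scales:
            n_seg = len(profile) // s
            f2 = []
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)               # local linear trend
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            f2 = np.asarray(f2)
            if q == 0:                                      # q=0 needs a log-average
                F = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                F = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            log_F.append(np.log(F))
        # h(q) is the log-log slope of the fluctuation function vs. scale
        hq.append(np.polyfit(np.log(scales), log_F, 1)[0])
    return np.array(hq)
```

    For a monofractal signal such as white noise, h(q) stays roughly constant near 0.5; EEG segments with multifractal structure yield a q-dependent spectrum, which is what serves as the SVM feature vector.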

  10. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, capturing both spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of RGB colour, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We used point cloud density to remove noise and morphological operations to eliminate errors. In the application, we tested our method on different road sections in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
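    The intensity-threshold-plus-morphology idea can be sketched on a rasterized point cloud. This is a simplified stand-in for the paper's multi-attribute pipeline; the cell size, intensity threshold and neighbour rule are assumptions, and edge wrap-around in the neighbour count is ignored for brevity:

```python
import numpy as np

def extract_markings(points, intensity, cell=0.1, thresh=0.7, min_neighbors=2):
    """Rasterize point intensities, threshold bright returns, drop isolated cells."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    h, w = ij.max(axis=0) + 1
    grid = np.zeros((h, w))
    cnt = np.zeros((h, w))
    np.add.at(grid, (ij[:, 0], ij[:, 1]), intensity)    # sum intensity per cell
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)             # point count per cell
    mean_i = np.divide(grid, cnt, out=np.zeros_like(grid), where=cnt > 0)
    mask = mean_i > thresh                              # bright (retro-reflective) cells
    # 8-connected neighbour count stands in for a morphological opening:
    # isolated bright cells (noise) have too few bright neighbours.
    nb = np.zeros_like(mask, dtype=int)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            nb += np.roll(np.roll(mask, di, 0), dj, 1)  # note: np.roll wraps at edges
    return mask & (nb >= min_neighbors)
```

    A real pipeline would combine this intensity channel with the RGB-difference channel before the morphological clean-up.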

  11. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef

    2013-12-01

    Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning hold great potential when combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing. The crack information can be used to decide on the appropriate rehabilitation method to fix cracked structures and prevent catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis; automatic crack detection is therefore highly desirable for efficient and objective crack assessment. The current paper presents a method for automatic concrete crack detection and mapping from data obtained during a laser scanning survey. Detection and mapping proceed in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the reference coordinate system, reverse engineering is used: a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  12. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor bias data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a multi-scale, colour-based abstraction that extracts connectivity. A 2D surface and an outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.

  13. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive applications of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal, near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR holds great potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect the removal or moving of tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/moving of reflectors or scanner). 
Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and of an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and the limitations of fully automated, near real-time 4D-LiDAR monitoring in the geosciences.

  14. Automatic Cloud Detection from Multi-Temporal Satellite Images: Towards the Use of PLÉIADES Time Series

    NASA Astrophysics Data System (ADS)

    Champion, N.

    2012-08-01

    Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps when processing satellite images, as clouds may disturb subsequent procedures such as atmospheric correction, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French mapping agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images in the time series (the presence of a cloud is assumed to correspond to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested here using time series of 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
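    The seed-and-grow scheme can be sketched for a pair of co-registered images. The hysteresis thresholds below are illustrative assumptions, not the paper's values, and a real implementation would compare every image pair in the series:

```python
import numpy as np
from collections import deque

def detect_clouds(img_t0, img_t1, t_seed=0.4, t_grow=0.15):
    """Seed clouds at large temporal reflectance increases, then region-grow."""
    diff = img_t1 - img_t0              # clouds are brighter than the background
    seeds = diff > t_seed               # confident cloud pixels
    grow = diff > t_grow                # weaker evidence, admissible during growth
    out = seeds.copy()
    q = deque(zip(*np.nonzero(seeds)))  # BFS frontier over 4-connected neighbours
    h, w = diff.shape
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and grow[ni, nj] and not out[ni, nj]:
                out[ni, nj] = True
                q.append((ni, nj))
    return out
```

    The two-threshold growth keeps thin cloud borders attached to confident seeds while rejecting isolated bright noise.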

  15. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper addresses the automatic recognition of the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected by a Kinect v2 through ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but its point clouds suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor proposed by Osada, the histogram of distances between randomly chosen point pairs, merged with other descriptors and used in conjunction with a random forest classifier to recognize the navigation elements (doors, stairways and walls) in the Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through point cloud segmentation, boundary extraction, feature calculation and classification. Finally, the acquired navigation elements and their information are used to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy for the proposed method.
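    Osada's D2 shape distribution used here can be sketched in a few lines; the pair count and bin count below are assumptions:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, bins=32, r_max=None, rng=None):
    """D2 shape distribution (Osada): histogram of distances between random point pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)   # pairwise distance samples
    if r_max is None:
        r_max = d.max()                                 # data-driven histogram range
    hist, _ = np.histogram(d, bins=bins, range=(0.0, float(r_max)))
    return hist / n_pairs                               # normalize for comparability
```

    The resulting fixed-length vector (here 32 bins) is what would be concatenated with other descriptors and fed to the random forest.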

  16. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring for analysing street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The interest of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and gives more robust results in the complex 3D cases.

  17. Automatic cloud tracking applied to GOES and Meteosat observations

    NASA Technical Reports Server (NTRS)

    Endlich, R. M.; Wolf, D. E.

    1981-01-01

    An improved automatic processing method for tracking cloud motions in satellite imagery is presented, along with applications of the method to GOES observations of Hurricane Eloise and to Meteosat water vapor and infrared data. The method involves picture smoothing, target selection and the calculation of cloud motion vectors, either by matching a group at a given time with its best likeness at a later time or by a cross-correlation computation. Cloud motion computations can be made in as many as four separate layers simultaneously. For data of 4 and 8 km resolution in the eye of Hurricane Eloise, the automatic system is found to provide results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System, with the results of the pattern recognition and cross-correlation computations differing by only fractions of a pixel. For Meteosat water vapor data from the tropics and midlatitudes, the automatic motion computations are found to be reliable only in areas where the water vapor fields contain small-scale structure, although excellent results are obtained using Meteosat IR data in the same regions. The automatic method thus appears competitive in accuracy and coverage with motion determination by human analysts.
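    The cross-correlation step can be sketched as a normalized block search between two frames. This is an illustrative brute-force version; window and search sizes are assumptions:

```python
import numpy as np

def track_patch(frame0, frame1, top, left, size, search=5):
    """Displacement of a patch between frames by maximising normalized correlation."""
    tpl = frame0[top:top + size, left:left + size].astype(float)
    tpl = tpl - tpl.mean()                      # zero-mean template
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):       # exhaustive search window
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame1.shape[0] or x + size > frame1.shape[1]:
                continue
            win = frame1[y:y + size, x:x + size].astype(float)
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            score = (tpl * win).sum() / denom if denom else -np.inf
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

    Sub-pixel accuracy, as reported in the abstract, would come from interpolating the correlation surface around the integer peak.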

  18. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage with ground-based laser scanning has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires matching images with the point cloud, acquiring homonymous feature points, registering the data, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud currently depends on an inefficient manual search. The effective classification and management of large numbers of images, and the matching of each image with its corresponding point cloud, are therefore the focus of this research. In this paper, we propose the automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. First, we develop an Android app that takes pictures and records the related classification information. Second, all the images are automatically grouped using the recorded information. Third, a matching algorithm matches the global and local images. From the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image with its corresponding LiDAR point cloud is realized. Finally, the mapping relationships between the global image, the local images and the intensity image are established from homonymous feature points, so that the data structure of the global image, the local images within it, and the point cloud corresponding to each local image can be built for visual management and querying of the images.

  19. COMP Superscalar, an interoperable programming framework

    NASA Astrophysics Data System (ADS)

    Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul

    2015-12-01

    COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) annotating them with Java annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks onto the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
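    The task-based model can be illustrated with a toy stand-in. This is emphatically not the PyCOMPSs API: a real COMPSs runtime discovers data dependencies automatically and schedules tasks across clusters, clouds or grids, whereas the sketch below just uses a thread pool and resolves futures passed as arguments:

```python
import concurrent.futures as cf

# Toy stand-in for a COMPSs-style runtime (illustrative only).
_pool = cf.ThreadPoolExecutor(max_workers=4)

def task(fn):
    """Decorator marking a function as an asynchronous task."""
    def submit(*args):
        # Crude dependency handling: wait for any future passed as an argument.
        resolved = [a.result() if isinstance(a, cf.Future) else a for a in args]
        return _pool.submit(fn, *resolved)
    return submit

@task
def square(x):
    return x * x

@task
def add(a, b):
    return a + b

# Chained tasks: 'add' consumes the results of both 'square' calls.
total = add(square(3), square(4)).result()
```

    In PyCOMPSs the equivalent program keeps its sequential appearance; the runtime builds the task graph and only synchronizes when the result is explicitly requested.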

  20. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
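    The line segments described above occur where fitted planes intersect; given two plane fits, the intersection line follows directly from the plane parameters. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n·x = d; returns (point on line, unit direction)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)          # line direction lies in both planes
    if np.linalg.norm(direction) < 1e-9:
        return None                       # planes are (nearly) parallel
    # Pick the unique point satisfying both plane equations plus direction·x = 0.
    A = np.vstack([n1, n2, direction])
    p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return p, direction / np.linalg.norm(direction)
```

    In the paper's setting, the 3D line-support region is then the set of scan points within a distance threshold of such a line, bounded by the extent of the two planes.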

  1. Design of a small laser ceilometer and visibility measuring device for helicopter landing sites

    NASA Astrophysics Data System (ADS)

    Streicher, Jurgen; Werner, Christian; Dittel, Walter

    2004-01-01

    Hardware development for remote sensing costs a lot of time and money, so a virtual instrument based on software modules was developed to optimise a small visibility and cloud-base-height sensor. Visibility is the parameter describing the turbidity of the atmosphere. It can be measured either as a mean value over a path, by a transmissometer, or point by point, as with the backscattered intensity of a range-resolved lidar measurement. A standard ceilometer detects the altitude of clouds using the runtime of the laser pulse and the increasing intensity of the backscattered light when the pulse hits the boundary of a cloud. This corresponds to hard-target range finding, but with more sensitive detection. In the case of cloud coverage, the output of a standard ceilometer is the altitude of one or more layers. Commercial cloud sensors are specified to track cloud altitude at rather large distances (100 m up to 10 km) and are therefore big and expensive. A virtual instrument was used to calculate the system parameters for a small system for heliports at hospitals and for landing platforms under visual flight rules (VFR). Helicopter pilots need information about cloud altitude (base not below 500 feet) and/or the visibility conditions (visual range not lower than 600 m) at the designated landing point. Private pilots need this information too when approaching a non-commercial airport. Both values can be measured automatically with the developed small and compact prototype, which is the size of a shoebox and reasonably priced.
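    The runtime-to-altitude conversion at the core of a ceilometer is plain time-of-flight ranging; a minimal sketch (function and variable names are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def cloud_base_height(t_roundtrip_s):
    """Cloud base altitude from the laser pulse round-trip time (vertical beam).

    The pulse travels up to the cloud and back, so the one-way range is c*t/2.
    """
    return C * t_roundtrip_s / 2.0

# A backscatter peak arriving 1 microsecond after emission corresponds to ~150 m.
h = cloud_base_height(1e-6)
```

    In practice the instrument scans the whole return profile and reports the range bin where backscatter intensity rises sharply, rather than a single echo time.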

  2. Climatology of cloud (radiative) parameters at two stations in Switzerland using hemispherical sky-cameras

    NASA Astrophysics Data System (ADS)

    Aebi, Christine; Gröbner, Julian; Kämpfer, Niklaus; Vuilleumier, Laurent

    2017-04-01

    Our study analyses climatologies of cloud fraction, cloud type and cloud radiative effect as a function of different parameters at two stations in Switzerland. The calculations have been performed separately for shortwave (0.3 - 3 μm) and longwave (3 - 100 μm) radiation. Information about fractional cloud coverage and cloud type is automatically retrieved from images taken by visible all-sky cameras at the two stations Payerne (490 m asl) and Davos (1594 m asl), using a cloud detection algorithm developed by PMOD/WRC (Wacker et al., 2015). Radiation data are retrieved from pyranometers and pyrgeometers, the cloud base height from a ceilometer, and IWV data from GPS measurements. Interestingly, Davos and Payerne show different seasonal trends in cloud coverage and cloud fraction. The absolute longwave cloud radiative effect (LCE) for low-level clouds and a cloud coverage of 8 octas has a median value between 61 and 72 Wm-2. It is shown that the fractional cloud coverage, the cloud base height (CBH) and the integrated water vapour (IWV) all influence the magnitude of the LCE, as will be illustrated with key examples. The relative values of the shortwave cloud radiative effect (SCE) for low-level clouds and a cloud coverage of 8 octas are between -88 and -62%. The SCE is influenced by the same parameters, and also by whether or not the sun is obscured by clouds. At both stations, situations of shortwave radiation enhancement by clouds have been observed and will be discussed. Wacker S., J. Gröbner, C. Zysset, L. Diener, P. Tzoumanikas, A. Kazantzidis, L. Vuilleumier, R. Stöckli, S. Nyeki, and N. Kämpfer (2015) Cloud observations in Switzerland using hemispherical sky cameras, J. Geophys. Res. Atmos, 120, 695-707.

  3. Urban forest topographical mapping using UAV LIDAR

    NASA Astrophysics Data System (ADS)

    Putut Ash Shidiq, Iqbal; Wibowo, Adi; Kusratmoko, Eko; Indratmoko, Satria; Ardhianto, Ronni; Prasetyo Nugroho, Budi

    2017-12-01

    Topographical data are highly needed by many parties, such as government institutions, mining companies and the agricultural sector. It is not just a matter of precision; acquisition time and data processing are also carefully considered. For forest management, a high-accuracy topographic map is necessary for planning, close monitoring and evaluating forest changes. One solution for mapping topography quickly and precisely is remote sensing. In this study, we test high-resolution Light Detection and Ranging (LiDAR) data collected from an unmanned aerial vehicle (UAV) to map topography and differentiate vegetation classes by height in the urban forest area of the University of Indonesia (UI). Semi-automatic and manual classifications were applied to divide the point cloud into two main classes, ground and vegetation. There were 15,806,380 points obtained during post-processing, of which 2.39% were detected as ground.
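    The ground/vegetation split by height can be sketched as a lowest-point-per-cell filter. This is a simplified stand-in for the semi-automatic classification described above; the cell size and height threshold are assumptions:

```python
import numpy as np

def split_ground_vegetation(points, cell=1.0, height_thresh=0.3):
    """Label points as ground if within height_thresh of the lowest point in their cell.

    points: (N, 3) array of x, y, z. Returns a boolean mask, True = ground.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    # Flatten the 2D cell index to a single key per point.
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    z_min = np.full(keys.max() + 1, np.inf)
    np.minimum.at(z_min, keys, points[:, 2])   # lowest elevation per cell
    return points[:, 2] - z_min[keys] <= height_thresh
```

    Production filters (e.g. progressive TIN densification) additionally handle sloped terrain, which this per-cell minimum ignores.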

  4. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud

    NASA Astrophysics Data System (ADS)

    Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.

    2014-03-01

    Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.

  5. Automatic Jet Contrail Detection and Segmentation

    NASA Technical Reports Server (NTRS)

    Weiss, J.; Christopher, S. A.; Welch, R. M.

    1997-01-01

    Jet contrails are an important subset of cirrus clouds in the atmosphere, and thin cirrus are thought to enhance the greenhouse effect due to their semi-transparent nature. They are nearly transparent to the solar energy reaching the surface, but they reduce the planetary emission to space due to their cold ambient temperatures. Having 'seeded' the environment, contrails often elongate and widen into cirrus-like features. However, there is great uncertainty regarding the impact of contrails on surface temperature and precipitation. With increasing numbers of subsonic aircraft operating in the upper troposphere, there is the possibility of increasing cloudiness which could lead to changes in the radiation balance. Automatic detection and segmentation of jet contrails in satellite imagery is important because (1) it is impractical to compile a contrail climatology by hand, and (2) with the segmented images it will be possible to retrieve contrail physical properties such as optical thickness, effective ice crystal diameter and emissivity.

  6. Cloud masking and removal in remote sensing image time series

    NASA Astrophysics Data System (ADS)

    Gómez-Chova, Luis; Amorós-López, Julia; Mateo-García, Gonzalo; Muñoz-Marí, Jordi; Camps-Valls, Gustau

    2017-01-01

    Automatic cloud masking of Earth observation images is one of the first required steps in optical remote sensing data processing, since the operational use and product generation from satellite image time series may be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions and the scarcity of labeled data force us to cast cloud screening as an unsupervised change detection problem in the temporal domain. We introduce a cloud screening method based on detecting abrupt changes along the time dimension. The main assumption is that image time series follow smooth variations over land (background), so abrupt changes will be mainly due to the presence of clouds. The method estimates the background surface changes using the information in the time series. In particular, we propose linear and nonlinear least squares regression algorithms that minimize both the prediction and the estimation error simultaneously. Significant differences between the image of interest and the estimated background are then identified as clouds. The use of kernel methods allows the algorithm to be generalized to account for higher-order (nonlinear) feature relations. After the proposed cloud masking and cloud removal, cloud-free time series at high spatial resolution can be used for better monitoring of land cover dynamics and for generating more elaborate products. The method is tested on a dataset of 5-day-revisit SPOT-4 time series at high resolution and on Landsat-8 time series. Experimental results show that the proposed method yields more accurate cloud masks than state-of-the-art approaches typically used in operational settings. In addition, the algorithm has been implemented in the Google Earth Engine platform, which gives access to the full Landsat-8 catalog and a parallel distributed platform, extending its applicability to a global planetary scale.
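    The per-pixel temporal regression idea can be sketched for the linear case. This is an illustrative simplification of the least-squares formulation described above; the k-sigma threshold rule and the one-sided (brightening-only) test are assumptions:

```python
import numpy as np

def screen_clouds(series, new_image, k=3.0):
    """Flag pixels whose reflectance departs from a per-pixel linear temporal trend.

    series: (T, H, W) cloud-free history; new_image: (H, W) image of interest.
    """
    T, H, W = series.shape
    t = np.arange(T)
    X = np.column_stack([t, np.ones(T)])             # design matrix [t, 1]
    flat = series.reshape(T, -1)
    coef, *_ = np.linalg.lstsq(X, flat, rcond=None)  # slope & intercept per pixel
    pred_new = coef[0] * T + coef[1]                 # extrapolate trend to epoch T
    sigma = (flat - X @ coef).std(axis=0) + 1e-6     # historical residual spread
    # Clouds brighten the scene, so flag only large positive residuals.
    anomaly = (new_image.reshape(-1) - pred_new) > k * sigma
    return anomaly.reshape(H, W)
```

    The kernelized version in the paper replaces the linear design matrix with a nonlinear feature mapping, but the thresholding logic on the residual is analogous.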

  7. Application of an automatic cloud tracking technique to Meteosat water vapor and infrared observations

    NASA Technical Reports Server (NTRS)

    Endlich, R. M.; Wolf, D. E.

    1980-01-01

    The automatic cloud tracking system was applied to METEOSAT 6.7 micrometers water vapor measurements to learn whether the system can track the motions of water vapor patterns. Data for the midlatitudes, subtropics, and tropics were selected from a sequence of METEOSAT pictures for 25 April 1978. Trackable features in the water vapor patterns were identified using a clustering technique and the features were tracked by two different methods. In flat (low contrast) water vapor fields, the automatic motion computations were not reliable, but in areas where the water vapor fields contained small scale structure (such as in the vicinity of active weather phenomena) the computations were successful. Cloud motions were computed using METEOSAT infrared observations (including tropical convective systems and midlatitude jet stream cirrus).

  8. Rheticus Displacement: an Automatic Geo-Information Service Platform for Ground Instabilities Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Chiaradia, M. T.; Samarelli, S.; Agrimano, L.; Lorusso, A. P.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2016-12-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes with minimal interaction from human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on the SOA (service-oriented architecture) model. Because every functionality is well defined and encapsulated in a standalone component, Rheticus is potentially highly scalable and distributable, allowing different configurations depending on user needs. Rheticus offers a portfolio of services, ranging from the detection and monitoring of geohazards and infrastructural instabilities to marine water quality monitoring, wildfire detection, and land cover monitoring. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructure. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry techniques (i.e., SPINUA), the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of the ESA S1 Scientific Data Hub; S1 data are then handled by a mature processing chain, which is responsible for producing displacement maps immediately usable to measure movements of coherent points with sub-centimetric precision. 
Examples are provided concerning the automatic displacement map generation process, the integration of point and distributed scatterers, the integration of multi-sensor displacement maps (e.g., Sentinel-1 IW and COSMO-SkyMed HIMAGE), and the combination of displacement rate maps acquired along both ascending and descending passes. ACK: Study carried out in the framework of the FAST4MAP project and co-funded by the Italian Space Agency (Contract n. 2015-020-R.0). Sentinel-1A products provided by ESA. CSK® Products, ASI, provided by ASI under a license to use. Rheticus® is a registered trademark of Planetek Italia srl.

  9. Developing Normal Turns-Amplitude Clouds for Upper and Lower Limbs.

    PubMed

    Jabre, Joe F; Nikolayev, Sergey G; Babayev, Michael B; Chindilov, Denis V; Muravyov, Anatoly Y

    2016-10-01

    Turns and amplitude analysis (T&A) is a frequently used method for automatic EMG interference pattern analysis. T&A normal values have been developed for only a limited number of muscles. Our objective was to obtain normal T&A clouds for upper and lower extremity muscles for which no normal values exist in the literature. T&A normative data using concentric needle electrodes were obtained from 68 men and 56 women aged 20 to 60 years. Normal upper and lower extremity T&A clouds were obtained and are presented in this article. The T&A normal values collected in this study may be used to detect neurogenic and myopathic abnormalities in men and women at low-to-moderate muscle contractions. The effect of turns-amplitude data obtained at high levels of muscle contraction, and its potential to falsely indicate neurogenic abnormalities, are discussed.

  10. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry, and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate, and robust.
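    The refinement stage described above is standard point-to-point ICP. A minimal self-contained sketch (brute-force nearest neighbours standing in for a real correspondence search, with no SAC-IA initialisation, so it only succeeds when the clouds start roughly aligned):

```python
import numpy as np

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP sketch for rigid 3D alignment.

    Each iteration matches every source point to its nearest target
    point, then solves for the best-fit rigid transform via SVD
    (Kabsch algorithm). Returns the aligned copy of `source`.
    """
    src = source.copy()
    for _ in range(iterations):
        # nearest target point for every source point (brute force)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # best-fit rotation between the centered point sets
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src
```

In practice one would initialise with a coarse alignment (as SAC-IA does) and use a spatial index for the nearest-neighbour search.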

  11. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. Firstly, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Secondly, cloud coverage is estimated by manual examination of the image. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would clearly increase the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. For the pre-processing analysis, cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis, increasing the efficiency of manual examination.
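    Otsu's method, one of the building blocks listed for the pre-processing step, picks the grey-level threshold that maximises the between-class variance of the histogram. A compact sketch (the bin count is illustrative):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of a grey-level histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]                           # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```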

  12. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications in which they can be used: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster, and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes into depth on door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  13. Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.; Chen, L. C.

    2012-07-01

    Airborne LIDAR point clouds, which provide considerable numbers of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed approaches to identify structure lines using two main strategies: data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain choices, such as thresholds, initial values, designed formulas, and predefined cues. With the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. The analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced in the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations show small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of a region growing approach. The results indicate that the proposed method can produce structure lines from dense point clouds.

  14. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is cropped patch-wise and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.

  15. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation and are evaluated for correctness of tie-point detection, computation time, and difficulty of implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  16. Classification of cloud fields based on textural characteristics

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1987-01-01

    The present study reexamines the applicability of texture-based features for automatic cloud classification using very high spatial resolution (57 m) Landsat multispectral scanner digital data. It is concluded that cloud classification can be accomplished using only a single visible channel.

  17. I-SCAD® standoff chemical agent detector overview

    NASA Astrophysics Data System (ADS)

    Popa, Mirela O.; Griffin, Matthew T.

    2012-06-01

    This paper presents a system-level description of the I-SCAD® Standoff Chemical Agent Detector, a passive Fourier Transform InfraRed (FTIR) remote sensing system for detecting chemical vapor threats. The passive infrared detection system automatically searches the 7 to 14 micron region of the surrounding atmosphere for agent vapor clouds. It is capable of operating on the move to accomplish reconnaissance, surveillance, and contamination avoidance missions. Additionally, the system is designed to meet the needs of air and sea as well as ground mobile and fixed-site platforms. The lightweight, passive, and fully automatic detection system scans the surrounding atmosphere for chemical warfare agent vapors. It provides on-the-move, 360-deg coverage from a variety of tactical and reconnaissance platforms at distances up to 5 km. The core of the system is a rugged Michelson interferometer with a flexure spring bearing mechanism and bi-directional data acquisition capability. The modular system design facilitates interfacing to many platforms. A Reduced Field of View (RFOV) variant includes novel modifications to the optical design of the scanner subcomponent assembly that extend detection range and detection probability without sacrificing existing radiometric sensitivity. This paper delivers an overview of the system.

  18. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. However, occlusion and shadow cause the loss of feature information, which greatly affects image quality. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. Nowadays, LiDAR can obtain the digital surface model (DSM) directly. Combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-buffer is applied for occlusion detection. Shadow detection can be regarded as the same problem as occlusion detection by considering the angle between the sun and the camera. However, the Z-buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing images is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU). We introduce this technology to speed up the Z-buffer algorithm and obtain a 7-fold speed-up compared with the CPU. The results of experiments demonstrate that the Z-buffer algorithm performs well in occlusion and shadow detection combined with high-density point clouds, and that the GPU can speed up the computation significantly.
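    The core of the Z-buffer test can be illustrated with a toy orthographic version: keep the nearest depth per image cell, then mark any point deeper than the stored depth as occluded. This sketch ignores perspective projection and footprint splatting, which a real true-orthophoto pipeline would need:

```python
import numpy as np

def zbuffer_occlusion(points, width, height):
    """Toy Z-buffer occlusion test with orthographic projection.

    points: (N, 3) array whose x, y are integer-ish image coordinates
    and whose z is depth (smaller z = closer to the camera). Returns a
    boolean mask marking points hidden behind a nearer point that
    projects into the same cell.
    """
    zbuf = np.full((height, width), np.inf)
    cols = points[:, 0].astype(int)
    rows = points[:, 1].astype(int)
    # first pass: keep the nearest depth per pixel
    for r, c, z in zip(rows, cols, points[:, 2]):
        if z < zbuf[r, c]:
            zbuf[r, c] = z
    # second pass: anything deeper than the stored depth is occluded
    return points[:, 2] > zbuf[rows, cols]
```

The per-point loop in the first pass is exactly the part that parallelises well on a GPU, which is where the reported speed-up comes from.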

  19. Rheticus: a cloud-based Geo-Information Service for the Detection and Monitoring of Geohazards and Infrastructural Instabilities

    NASA Astrophysics Data System (ADS)

    Chiaradia, M. T.; Samarelli, S.; Massimi, V.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2017-12-01

    Geospatial information is today essential for organizations and professionals working in several industries. More and more, huge volumes of information are collected from multiple data sources and are freely available to anyone as open data. Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and, where appropriate, minimal interaction with human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on the SOA (Service-Oriented Architecture) model. Due to its distributed architecture, where every functionality is defined and encapsulated in a standalone component, Rheticus is potentially highly scalable and distributable, allowing different configurations depending on user needs. This approach makes the system very flexible with respect to service implementation, ensuring the ability to rethink and redesign the whole process with little effort. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructure. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry (MTInSAR) techniques, the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of the ESA S1 Scientific Data Hub. S1 data are then processed by SPINUA (Stable Point Interferometry even in Un-urbanized Areas), a robust MTInSAR algorithm, which is responsible for producing displacement maps immediately usable to measure movements of point and distributed scatterers with sub-centimetric precision. 
We outline the automatic generation process of displacement maps and provide examples of the detection and monitoring of geohazards and infrastructure instabilities. ACK: Rheticus® is a registered trademark of Planetek Italia srl. Study carried out in the framework of the FAST4MAP project (ASI Contract n. 2015-020-R.0). Sentinel-1A products provided by ESA.

  20. Satellite-based overshooting top detection methods and an analysis of correlated weather conditions

    NASA Astrophysics Data System (ADS)

    Mikuš, Petra; Strelec Mahović, Nataša

    2013-04-01

    The paper addresses two topics: the possibilities of satellite-based automatic detection of overshooting convective cloud tops and the connection between the overshootings and the occurrence of severe weather on the ground. Because the use of visible images is restricted to daytime, four detection methods based on the Meteosat Second Generation SEVIRI 10.8 μm infra-red window channel and the absorption channels of water vapor (6.2 μm), ozone (9.7 μm) and carbon dioxide (13.4 μm) in the form of brightness temperature differences were used. The theoretical background of all four methods is explained, and the detection results are compared with daytime high-resolution visible (HRV) satellite images to validate each method. Of the four tested methods, the best performance is found for the combination of brightness temperature differences 6.2-10.8 and 9.7-10.8 μm, which are correlated to overshootings in HRV images in 80% of the cases. The second part of the research is focused on determining whether the appearance of the overshooting top, a manifestation of a very strong updraft in the cloud, can be connected to an abrupt change of certain weather elements on the ground. For all overshooting tops found by the above-mentioned combined method, automatic station data within the range of 0.1° and available hail observations within 0.2° were analyzed. The results show that the overshootings are connected to precipitation in 80% and to wind gusts in 70% of the cases; in contrast, a slightly lower correlation was found for temperature and humidity changes. Hail is observed in the vicinity of the overshooting in 38% of the cases.
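    The combined brightness-temperature-difference test can be sketched directly: a pixel is a candidate overshooting top when both BTD(6.2-10.8) and BTD(9.7-10.8) exceed thresholds. The zero thresholds below are placeholders, not the calibrated values used in the study:

```python
import numpy as np

def overshooting_top_mask(bt_wv62, bt_o97, bt_ir108,
                          wv_thresh=0.0, o3_thresh=0.0):
    """Combined brightness-temperature-difference test sketch.

    Inputs are brightness temperatures in kelvin for the 6.2 um water
    vapor, 9.7 um ozone, and 10.8 um infrared window channels. Over
    overshooting tops the absorption-channel temperatures exceed the
    window-channel temperature, so both differences become positive.
    """
    btd_wv = bt_wv62 - bt_ir108
    btd_o3 = bt_o97 - bt_ir108
    return (btd_wv > wv_thresh) & (btd_o3 > o3_thresh)
```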

  1. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Noise is then reduced by removing small plaque-like pixel groups from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature attribute filtering, is used to classify linear markings, arrow markings, and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area, the results show that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
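    The integral-image adaptive thresholding step can be sketched as follows: precompute a summed-area table so each local mean costs four lookups, then mark a pixel as a marking if its intensity exceeds the local mean by a margin. The window size and bias factor here are illustrative, not the paper's values:

```python
import numpy as np

def adaptive_threshold(intensity, window=15, bias=1.1):
    """Integral-image adaptive threshold sketch for bright markings.

    A pixel is flagged when its intensity exceeds `bias` times the
    local mean over a `window` x `window` neighbourhood (bias > 1).
    The padded cumulative sums form the integral image, so each
    window sum needs only four lookups.
    """
    h, w = intensity.shape
    ii = np.pad(intensity, ((1, 0), (1, 0)), mode="constant").cumsum(0).cumsum(1)
    half = window // 2
    out = np.zeros((h, w), dtype=bool)
    for r in range(h):
        r0, r1 = max(0, r - half), min(h, r + half + 1)
        for c in range(w):
            c0, c1 = max(0, c - half), min(w, c + half + 1)
            area = (r1 - r0) * (c1 - c0)
            s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
            out[r, c] = intensity[r, c] * area > bias * s
    return out
```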

  2. Day/night whole sky imagers for 24-h cloud and sky assessment: history and overview.

    PubMed

    Shields, Janet E; Karr, Monette E; Johnson, Richard W; Burden, Art R

    2013-03-10

    A family of fully automated digital whole sky imagers (WSIs) has been developed at the Marine Physical Laboratory over many years, for a variety of research and military applications. The most advanced of these, the day/night whole sky imagers (D/N WSIs), acquire digital imagery of the full sky down to the horizon under all conditions from full sunlight to starlight. Cloud algorithms process the imagery to automatically detect the locations of cloud for both day and night. The instruments can provide absolute radiance distribution over the full radiance range from starlight through daylight. The WSIs were fielded in 1984, followed by the D/N WSIs in 1992. These many years of experience and development have resulted in very capable instruments and algorithms that remain unique. This article discusses the history of the development of the D/N WSIs, system design, algorithms, and data products. The paper cites many reports with more detailed technical documentation. Further details of calibration, day and night algorithms, and cloud free line-of-sight results will be discussed in future articles.

  3. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic, and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different characteristics (such as point density, distribution, and complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, the algorithm is efficient and reliable for mobile LiDAR point clouds.

  4. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however, such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground-based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high-spatial-resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial-frequency-based analytical techniques. These features were used to train a weighted k-nearest-neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible-wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate: a Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
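    The two scores quoted above come from a contingency table of paired predictions and observations. In this sketch POFA is taken as the false-alarm rate FP/(FP+TN); the study may define it differently (e.g., as the false-alarm ratio FP/(FP+TP)):

```python
def pod_pofa(predicted, truth):
    """Probability of Detection and Probability of False Alarm from
    paired boolean predictions and observations.

    POD  = hits / (hits + misses)
    POFA = false alarms / (false alarms + correct negatives)
    """
    hits = sum(p and t for p, t in zip(predicted, truth))
    misses = sum((not p) and t for p, t in zip(predicted, truth))
    false_alarms = sum(p and (not t) for p, t in zip(predicted, truth))
    correct_neg = sum((not p) and (not t) for p, t in zip(predicted, truth))
    pod = hits / (hits + misses) if hits + misses else 0.0
    pofa = false_alarms / (false_alarms + correct_neg) if false_alarms + correct_neg else 0.0
    return pod, pofa
```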

  5. ASTER cloud coverage reassessment using MODIS cloud mask products

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli

    2010-10-01

    In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two algorithms are used for cloud assessment in Level-1 processing. The first, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only VNIR bands and for all nighttime scenes; the second, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well because it lacks some spectral bands sensitive to cloud detection, and both algorithms have been less accurate over snow/ice-covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed an ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and are used for ASTER product searches by users, and the cloud mask images are distributed to users through the Internet. Daily incoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases 5 to 7 days after each scene's observation date. Some validation studies of the new cloud coverage data and some mission-related analyses using those data are also presented in this paper.

  6. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientation from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against orientations measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and able to meet engineering needs.
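    Step (3), RANSAC plane fitting, can be sketched in a few lines: repeatedly fit a plane to three random points and keep the candidate with the most inliers. The iteration count and inlier tolerance below are illustrative:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, rng=None):
    """RANSAC plane-fitting sketch.

    Samples three points per iteration, forms the plane through them,
    counts points within `tol` of that plane, and returns the best
    candidate as (unit normal n, offset d) with n . x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best = (None, None, -1)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2]:
            best = (n, d, inliers.sum())
    return best[0], best[1]
```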

  7. Object Detection from MMS Imagery Using Deep Learning for Generation of Road Orthophotos

    NASA Astrophysics Data System (ADS)

    Li, Y.; Sakamoto, M.; Shinohara, T.; Satoh, T.

    2018-05-01

    In recent years, extensive research has been conducted to automatically generate high-accuracy and high-precision road orthophotos using images and laser point cloud data acquired from a mobile mapping system (MMS). However, it is necessary to mask out non-road objects such as vehicles, bicycles, pedestrians, and their shadows in MMS images in order to eliminate erroneous textures from the road orthophoto. Hence, we propose a novel vehicle-and-shadow detection model based on Faster R-CNN for automatically and accurately detecting the regions of vehicles and their shadows in MMS images. The experimental results show that the maximum recall of the proposed model was high, 0.963 at an intersection-over-union threshold of 0.7, and that the model could identify the regions of vehicles and their shadows accurately and robustly in MMS images, even when they contain varied vehicles, different shadow directions, and partial occlusions. Furthermore, it was confirmed that the quality of road orthophotos generated using vehicle-and-shadow masks was significantly improved compared to those generated using no masks or vehicle masks only.
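    The intersection-over-union criterion behind the 0.7 cut-off is the standard overlap measure for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```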

  8. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some error with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are taken as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to verify the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
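    Plain fuzzy c-means (with a fixed number of clusters c, unlike the paper's self-adaptive variant, and with the standard fuzzifier m = 2) alternates between centroid and membership updates:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, rng=None):
    """Plain fuzzy c-means sketch.

    X: (N, D) data; c: number of clusters; m > 1: fuzzifier.
    Alternates the standard updates: centroids are membership-weighted
    means, and memberships follow u_ik ∝ d_ik^(-2/(m-1)).
    Returns (centroids, memberships).
    """
    rng = np.random.default_rng(rng)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)       # rows sum to one
    for _ in range(n_iter):
        w = u ** m
        centroids = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # guard exact hits
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centroids, u
```

The self-adaptive part of the paper's method, estimating c itself, would sit on top of this inner loop.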

  9. Department of Defense Chemical, Biological, Defense Program, Annual Report to Congress, March 2005

    DTIC Science & Technology

    2005-03-01

    nerve agents (GA, GB, GD, and GF), V-type nerve agents, and H (mustard) type blister agents. M8 paper can identify agents through... The M21 RSCAAL is an automatic scanning, passive infrared sensor that detects nerve (GA, GB, and GD) and blister (H and L) agent vapor clouds... Chief of Staff for Programs; GA – tabun, a nerve agent; GAO – General Accounting Office; GB – sarin, a nerve agent; GD – soman, a nerve

  10. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover a 400 m2 large area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g.: tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides accurate geometrical basis < 3 mm. However, the situation is difficult in this multiple object scenario where more than 15,000 complete or fragmentary parts of an object with random orientation are found. 
The goal is to investigate whether the application of state-of-the-art 3D digitizing, data processing, and visualization technologies supports the interpretation of this paleontological site. The obtained 3D data (approx. 1 billion points over the respective area) are analyzed with respect to their 3D structure in order to derive geometrical information. The aim of this contribution is to segment the 3D point cloud of laser scanning data into meaningful regions representing particular objects. Geometric parameters (curvature, tangent plane orientation, local minimum and maximum, etc.) are derived for every 3D point of the point cloud. A set of features is computed at each point using different kernel sizes to define neighbourhoods of different size. This provides information on convexity (outer surface), concavity (inner surface) and locally flat areas, which shall be further utilized in fitting models of Crassostrea shells. In addition, digitizing is performed manually in order to obtain a representative set of reference data for the evaluation of the obtained results. For evaluating these results, the reference data (length and orientation of specimens) are then compared to the automatically derived segments of the point cloud. The study is supported by the Austrian Science Fund (FWF P 25883-N29).
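The per-point geometric features mentioned above (curvature, local flatness) are typically derived from the eigenvalues of the neighbourhood covariance matrix. A hedged sketch using Pauly-style surface variation, with power iteration standing in for a proper eigensolver (all function names are mine, not the project's):

```python
import math

def covariance3(points):
    """3x3 covariance matrix of a list of 3-D points."""
    n = len(points)
    mean = [sum(p[k] for p in points) / n for k in range(3)]
    cov = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[k] - mean[k] for k in range(3)]
        for i in range(3):
            for j in range(3):
                cov[i][j] += d[i] * d[j] / n
    return cov

def power_iteration(mat, iters=200):
    """Dominant eigenvalue/eigenvector of a symmetric 3x3 matrix."""
    v = [1.0, 0.7, 0.3]                      # arbitrary start direction
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm < 1e-12:                     # start vector hit the null space
            v = [0.2, -0.5, 1.0]             # restart from another direction
            continue
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(mat[i][j] * v[j] for j in range(3)) for i in range(3))
    return lam, v

def surface_variation(points):
    """Curvature proxy lambda_min / (l1 + l2 + l3):
    ~0 on locally flat neighbourhoods, larger on edges and corners."""
    cov = covariance3(points)
    l1, v1 = power_iteration(cov)
    # deflate the dominant component, then find the next eigenvalue
    defl = [[cov[i][j] - l1 * v1[i] * v1[j] for j in range(3)] for i in range(3)]
    l2, _ = power_iteration(defl)
    trace = cov[0][0] + cov[1][1] + cov[2][2]
    l3 = max(trace - l1 - l2, 0.0)           # smallest eigenvalue via the trace
    return l3 / trace if trace > 0 else 0.0
```

Evaluating this at several kernel sizes per point yields the kind of multi-scale convexity/flatness features the abstract describes.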

  11. a Cloud-Based Architecture for Smart Video Surveillance

    NASA Astrophysics Data System (ADS)

    Valentín, L.; Serrano, S. A.; Oves García, R.; Andrade, A.; Palacios-Alonso, M. A.; Sucar, L. Enrique

    2017-09-01

    Turning a city into a smart city has attracted considerable attention. A smart city can be seen as a city that uses digital technology not only to improve the quality of people's lives, but also to have a positive impact on the environment and, at the same time, to offer efficient and easy-to-use services. A fundamental aspect of a smart city is people's safety and welfare; a good security system therefore becomes a necessity, because it allows potential risk situations to be detected and identified so that appropriate decisions can be taken to help people or even prevent criminal acts. In this paper we present an architecture for automated video surveillance based on the cloud computing schema. It is capable of acquiring a video stream from a set of cameras connected to the network; processing that information; detecting, labelling and highlighting security-relevant events automatically; storing the information; and providing situational awareness in order to minimize the response time to take the appropriate action.

  12. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparse distribution of ships in the images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, there are still some false alarms such as waves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
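The LBP detail signature used above is built from per-pixel codes that compare each pixel with its 8 neighbours. A minimal sketch of the classic 8-neighbour LBP code and chip histogram (the bit ordering is a convention chosen here; the paper's CVLBP variant differs):

```python
def lbp_code(img, r, c):
    """Classic 8-neighbour local binary pattern code of pixel (r, c).

    img is a 2-D list of grey values; each neighbour >= centre sets its bit."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels of an image chip."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

A histogram like this, computed per chip, is the sort of texture feature vector an SVM can then classify as ship/non-ship.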

  13. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
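The continuous-space voting described above can be illustrated in one dimension: each vote contributes a Gaussian kernel, and the local maxima of the resulting density estimate are the detected parameter values. A simplified grid-evaluated sketch (fixed bandwidth, unlike the paper's data-driven selection):

```python
import math

def kde_modes(votes, bandwidth, grid_step=0.01):
    """Gaussian kernel density estimate of 1-D votes, evaluated on a grid;
    returns the grid locations of the local density maxima."""
    lo = min(votes) - 3 * bandwidth
    hi = max(votes) + 3 * bandwidth
    n = int((hi - lo) / grid_step) + 1
    xs = [lo + i * grid_step for i in range(n)]
    dens = [sum(math.exp(-0.5 * ((x - v) / bandwidth) ** 2) for v in votes)
            for x in xs]
    # interior points that dominate both neighbours are the modes
    return [xs[i] for i in range(1, n - 1)
            if dens[i] > dens[i - 1] and dens[i] >= dens[i + 1]]
```

With votes cast for, say, cylinder radius, each returned mode corresponds to one detected cylinder hypothesis.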

  14. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space and software services on demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious details and lets them focus on their business, which favours innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services and applications as a public utility, so that people can use computing resources just like water, electricity, gas and the telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and focuses on key technologies such as data storage, data management, virtualization and the programming model.

  15. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced and embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points into several levels with an octree. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of an icosahedron approximating a sphere. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds acquired in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. 
The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, like a lamp pole close to a tree.
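The "connected voxels are clustered to form candidate objects" step can be sketched as a voxel hash plus a breadth-first search over 26-connected neighbours (a generic implementation, not the SigVox code):

```python
from collections import defaultdict, deque

def cluster_voxels(points, voxel_size):
    """Voxelize 3-D points and group 26-connected occupied voxels into
    candidate objects."""
    occupied = defaultdict(list)
    for p in points:
        key = tuple(int(p[k] // voxel_size) for k in range(3))
        occupied[key].append(p)
    seen, clusters = set(), []
    for start in occupied:
        if start in seen:
            continue
        seen.add(start)
        queue, cluster = deque([start]), []
        while queue:
            v = queue.popleft()
            cluster.extend(occupied[v])
            # visit all 26 neighbouring voxels (plus self, already seen)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (v[0] + dx, v[1] + dy, v[2] + dz)
                        if nb in occupied and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
        clusters.append(cluster)
    return clusters
```

Each returned cluster is one candidate object, to be described and matched against the training objects in the later stages.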

  16. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. 
The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
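The linear temperature model of the correction can be illustrated with an ordinary least-squares fit of correction amplitude against internal temperature (a generic fit; the coefficients and variable names are not the paper's):

```python
def fit_linear(temps, corrections):
    """Ordinary least squares for c(T) = a*T + b."""
    n = len(temps)
    mt = sum(temps) / n
    mc = sum(corrections) / n
    a = sum((t - mt) * (c - mc) for t, c in zip(temps, corrections)) / \
        sum((t - mt) ** 2 for t in temps)
    b = mc - a * mt            # the line passes through the mean point
    return a, b
```

Once a and b are known, the correction to apply at any observed instrument temperature is simply a*T + b.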

  17. Ship detection in optical remote sensing images based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include the interference of cloud, wave, wake, and the high computational expenses. We propose a fast and robust ship detection algorithm to solve these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, the deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed by intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes in different complex backgrounds.

  18. Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)

    DTIC Science & Technology

    2016-04-01

    of Promises with Cloud • Cost efficiency • Unlimited storage • Backup and recovery • Automatic software integration • Easy access to information... activities that wrap the actual exercise itself (e.g., travel for exercise support, data collection, integration, etc.). Cloud-based simulation would... requiring quick delivery rather than fewer large messages requiring high bandwidth. Cloud environments tend to be better at providing high-bandwidth

  19. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

    Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it yields dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully-automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers and clutter while maintaining a high level of accuracy.

  20. Using airborne LiDAR in geoarchaeological contexts: Assessment of an automatic tool for the detection and the morphometric analysis of grazing archaeological structures (French Massif Central).

    NASA Astrophysics Data System (ADS)

    Roussel, Erwan; Toumazet, Jean-Pierre; Florez, Marta; Vautier, Franck; Dousteyssier, Bertrand

    2014-05-01

    Airborne laser scanning (ALS) of archaeological regions of interest is nowadays a widely used and established method for accurate topographic and microtopographic survey. The penetration of the vegetation cover by the laser beam allows the reconstruction of reliable digital terrain models (DTM) of forested areas where traditional prospection methods are inefficient, time-consuming and non-exhaustive. The ALS technology provides the opportunity to discover new archaeological features hidden by vegetation and provides a comprehensive survey of cultural heritage sites within their environmental context. However, the post-processing of LiDAR point clouds produces a huge quantity of data in which relevant archaeological features are not easily detectable with common visualizing and analysing tools. Undoubtedly, there is an urgent need for automation of structure detection and morphometric extraction techniques, especially for the "archaeological desert" in densely forested areas. This presentation deals with the development of automatic detection procedures applied to archaeological structures located in the French Massif Central, in the western forested part of the Puy-de-Dôme volcano between 950 and 1100 m a.s.l. These unknown archaeological sites were discovered by the March 2011 ALS mission and display a high density of subcircular depressions with corridor access. The spatial organization of these depressions varies from isolated to aggregated or aligned features. Functionally, they appear to be former grazing constructions built from the medieval to the modern period. Similar grazing structures are known in other locations of the French Massif Central (Sancy, Artense, Cézallier) where the ground is vegetation-free. In order to develop a reliable process of automatic detection and mapping of these archaeological structures, a learning zone has been delineated within the ALS surveyed area. 
The grazing features were mapped and typical morphometric attributes were calculated based on 2 methods: (i) The mapping of the archaeological structures by a human operator using common visualisation tools (DTM, multi-direction hillshading & local relief models) within a GIS environment; (ii) The automatic detection and mapping performed by a recognition algorithm based on a user defined geometric pattern of the grazing structures. The efficiency of the automatic tool has been assessed by comparing the number of structures detected and the morphometric attributes calculated by the two methods. Our results indicate that the algorithm is efficient for the detection and the location of grazing structures. Concerning the morphometric results, there is still a discrepancy between automatic and expert calculations, due to both the expert mapping choices and the algorithm calibration.

  1. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
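A toy decision rule conveys the kind of "timely allocation and release of resources" strategy discussed above. Everything here (names, the jobs-per-VM ratio) is a hypothetical sketch, not the OpenNebula policy used at the site:

```python
def scaling_decision(queued_jobs, running_vms, max_vms, jobs_per_vm=4):
    """Toy elastic-scaling policy: grow towards the queue demand, shrink
    when VMs would sit idle. Returns the VM delta: >0 start, <0 release."""
    wanted = -(-queued_jobs // jobs_per_vm)        # ceiling division
    target = min(max(wanted, 0), max_vms)          # clamp to the site's quota
    return target - running_vms
```

In a saturated regime the interesting part is the clamp: the controller can never request more than the quota the larger, non-elastic application leaves free.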

  2. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m2 of urban area in total.

  3. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height-variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm segments the uniform surfaces into building roofs, roads and ground, again based on topological relationships and height-variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification results for the building, vegetation and road classes.

  4. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    PubMed

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.

  5. Results of land cover change detection analysis in and around Cordillera Azul National Park, Peru

    USGS Publications Warehouse

    Sleeter, Benjamin M.; Halsing, David L.

    2005-01-01

    The first product of the Optimizing Design and Management of Protected Areas for Conservation Project is a land cover change detection analysis based on Landsat thematic mapper (TM) and enhanced thematic mapper plus (ETM+) imagery collected at intervals between 1989 and 2002. The goal of this analysis was to quantify and analyze patterns of forest clearing, land conversion, and other disturbances in and around the Cordillera Azul National Park in Peru. After removing clouds and cloud shadows from the imagery using a series of automatic and manual processes, a Tasseled Cap Transformation was used to detect pixels of high reflectance, which were classified as bare ground and areas of likely forest clearing. Results showed a slow but steady increase in cleared ground prior to 1999 and a rapid and increasing conversion rate after that time. The highest concentrations of clearings have spread upward from the western border of the study area on the Huallaga River. To date, most disturbances have taken place in the buffer zone around the park, not within it, but the data show dense clearings occurring closer to the park border each year.
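The high-reflectance detection step works on the Tasseled Cap brightness component, a weighted sum of the six reflective TM bands. A sketch using the commonly cited Crist & Cicone TM brightness weights; treat the exact coefficients as an assumption to be verified for the sensor and data product actually used, and the threshold as illustrative:

```python
# Commonly cited Landsat TM Tasseled Cap brightness weights (Crist & Cicone),
# for bands 1-5 and 7; verify against the data product before real use.
BRIGHTNESS_WEIGHTS = (0.3037, 0.2793, 0.4743, 0.5585, 0.5082, 0.1863)

def brightness(pixel):
    """Tasseled Cap brightness of a 6-band TM pixel."""
    return sum(w * b for w, b in zip(BRIGHTNESS_WEIGHTS, pixel))

def bare_ground_mask(pixels, threshold):
    """Flag high-brightness pixels as likely bare ground / forest clearing."""
    return [brightness(p) > threshold for p in pixels]
```

Applied after cloud and shadow removal, pixels passing the threshold are the candidate clearings whose extent is then tracked between image dates.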

  6. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have demonstrated limitations in defining the base and top, in setting the window size, and have neglected the in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.
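The linear-segmentation idea can be illustrated by fitting short line segments to the profile and flagging the first range bin where the local slope turns from flat to clearly positive. This is a much-simplified stand-in for the paper's scheme; the window size and rise threshold are arbitrary choices:

```python
def slope(ys):
    """Least-squares slope of ys against index 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    return sum((i - mx) * (y - my) for i, y in enumerate(ys)) / \
           sum((i - mx) ** 2 for i in range(n))

def detect_layer_base(profile, window=5, rise=0.5):
    """Return the first index where the fitted segment below is non-rising
    and the segment above rises clearly, i.e. a candidate layer base."""
    for i in range(window, len(profile) - window):
        below = slope(profile[i - window:i])
        above = slope(profile[i:i + window])
        if below <= 0.0 and above >= rise:
            return i
    return None
```

A real scheme would add the threshold-setting, boundary-selection and false-positive-removal logic the abstract describes; this sketch only shows the segment-slope test at its core.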

  7. Graz kHz SLR LIDAR: first results

    NASA Astrophysics Data System (ADS)

    Kirchner, Georg; Koidl, Franz; Kucharski, Daniel; Pachler, Walther; Seiss, Matthias; Leitgeb, Erich

    2009-05-01

    The Satellite Laser Ranging (SLR) Station Graz routinely measures distances to satellites with a 2 kHz laser, achieving an accuracy of 2-3 mm. Using this available equipment, we developed - and added as a byproduct - a kHz SLR LIDAR for the Graz station: photons of each transmitted laser pulse are backscattered from clouds, atmospheric layers, aircraft vapor trails etc. An additional 10 cm diameter telescope - installed on our main telescope mount - and a Single-Photon Counting Module (SPCM) detect these photons. Using an ISA-bus based FPGA card - developed in Graz for the kHz SLR operation - these detection times are stored with 100 ns resolution (15 m slots in distance). Event times of any number of laser shots can be accumulated in up to 4096 counters (corresponding to a distance of > 60 km). The LIDAR distances are stored together with epoch time and telescope pointing information; any reflection point is therefore determined with 3D coordinates, with 15 m resolution in distance, and with the angular precision of the laser telescope pointing. First test results on clouds in full daylight conditions - accumulating up to several hundred laser shots per measurement - yielded high LIDAR data rates (> 100 points per second) and excellent detection of clouds (up to 10 km distance at the moment). Our ultimate goal is to operate the LIDAR automatically and in parallel with the standard SLR measurements, during day and night, collecting LIDAR data as a byproduct and without any additional expense.
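The 100 ns accumulation scheme maps directly onto ~15 m range slots via the two-way travel time d = c*t/2. A small sketch of the binning; the slot width and counter count are taken from the description above, while the code itself is an illustration:

```python
C = 299792458.0  # speed of light, m/s

def accumulate_returns(event_times_ns, slot_ns=100, n_slots=4096):
    """Accumulate photon detection times (ns after the laser shot) into
    fixed 100 ns slots; repeated shots simply add into the same counters."""
    counts = [0] * n_slots
    for t in event_times_ns:
        slot = int(t // slot_ns)
        if 0 <= slot < n_slots:
            counts[slot] += 1
    return counts

def slot_to_range_m(slot, slot_ns=100):
    """One-way distance of a slot: d = c * t / 2 (two-way travel time)."""
    return C * (slot * slot_ns * 1e-9) / 2.0
```

One 100 ns slot indeed corresponds to roughly 15 m, and 4096 slots to a bit over 60 km, matching the figures in the abstract.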

  8. Sentinel-1 Archive and Processing in the Cloud using the Hybrid Pluggable Processing Pipeline (HyP3) at the ASF DAAC

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Hogenson, R.; Geiger, A.; Herrmann, J.; Buechler, B.; Hogenson, K.

    2016-12-01

    In the coming years there will be an unprecedented amount of SAR data available on a free and open basis to research and operational users around the globe. The Alaska Satellite Facility (ASF) DAAC hosts, through an international agreement, data from the Sentinel-1 spacecraft and will be hosting data from the upcoming NASA ISRO SAR (NISAR) mission. To more effectively manage and exploit these vast datasets, ASF DAAC has begun moving portions of the archive to the cloud and utilizing cloud services to provide higher-level processing on the data. The Hybrid Pluggable Processing Pipeline (HyP3) project is designed to support higher-level data processing in the cloud and extend the capabilities of researchers to larger scales. Built upon a set of core Amazon cloud services, the HyP3 system allows users to request data processing using a number of canned algorithms or their own algorithms once they have been uploaded to the cloud. The HyP3 system automatically accesses the ASF cloud-based archive through the DAAC RESTful application programming interface and processes the data on Amazon's Elastic Compute Cloud (EC2). Final products are distributed through Amazon's Simple Storage Service (S3) and are available for user download. This presentation will provide an overview of ASF DAAC's activities moving the Sentinel-1 archive into the cloud and developing the integrated HyP3 system, covering both the benefits and difficulties of working in the cloud. Additionally, we will focus on the utilization of HyP3 for higher-level processing of SAR data. Two example algorithms, for sea-ice tracking and change detection, will be discussed, as well as the mechanism for integrating new algorithms into the pipeline for community use.

  9. A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor.

    PubMed

    Madrigal, Carlos A; Branch, John W; Restrepo, Alejandro; Mery, Domingo

    2017-10-02

Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative differences between models of the same region a histogram is generated that represents the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud into primitives, reporting an accuracy of 95%, which is higher than that of other state-of-the-art descriptors. The rate of recognition of defects was close to 94%.
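A building block shared by PFH-style descriptors is the estimation of a surface normal from a point's local neighbourhood. The sketch below uses the standard covariance/eigenvector approach; it is generic point-cloud practice, not the authors' implementation of MPFH:

```python
import numpy as np

# Estimate a surface normal at a query point from its neighbourhood by
# fitting a plane: the eigenvector of the local covariance matrix with
# the smallest eigenvalue is the direction of least variance, i.e. the
# normal. This is a standard ingredient of PFH-like descriptors.
def estimate_normal(neighbours):
    """neighbours: (N, 3) array of points around the query point."""
    pts = np.asarray(neighbours, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, 0]                     # least-variance axis

# Points scattered on the z = 0 plane -> normal along +/- z.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (50, 2)), np.zeros(50)]
n = estimate_normal(patch)
print(abs(n[2]))  # close to 1
```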

  10. A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor

    PubMed Central

    Branch, John W.

    2017-01-01

Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative differences between models of the same region a histogram is generated that represents the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud into primitives, reporting an accuracy of 95%, which is higher than that of other state-of-the-art descriptors. The rate of recognition of defects was close to 94%. 
PMID:28974037

  11. Automatic Commercial Permit Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Paul

Final report for Folsom Labs’ Solar Permit Generator project. The project was completed successfully, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  12. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    NASA Astrophysics Data System (ADS)

    Theiler, P. W.; Schindler, K.

    2012-07-01

Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge number of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarsely register TLS point clouds without the need for artificial targets.
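The "virtual tie point" idea can be illustrated with the geometry alone: three non-parallel planes n_i · x = d_i intersect in a single point, found by solving a 3x3 linear system. In the paper the plane parameters come from RANSAC detection plus least-squares fitting; here they are simply given, as a minimal sketch:

```python
import numpy as np

# A virtual tie point as the intersection of three fitted planes.
# Each plane is n . x = d with unit normal n; stacking the normals
# row-wise gives N x = d, solvable when the planes are non-parallel.
def plane_intersection(normals, distances):
    N = np.asarray(normals, dtype=float)   # rows: plane normals
    d = np.asarray(distances, dtype=float)
    return np.linalg.solve(N, d)           # raises if N is singular

# Floor z = 1 and two walls x = 2, y = 3 meet at the point (2, 3, 1).
tie_point = plane_intersection(
    [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    [1, 2, 3])
print(tie_point)
```

Because each plane is fitted to thousands of scan points, the resulting intersection points are far more precise than any single measured point, which is what makes them good registration targets.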

  13. Hough transform as a tool supporting building roof detection. (Polish Title: Transformata Hough'a jako narzędzie wspomagające wykrywanie dachów budynków)

    NASA Astrophysics Data System (ADS)

    Borowiec, N.

    2013-12-01

Gathering information about the roof shapes of buildings remains a current issue. Airborne laser scanning is one of the many sources from which information about buildings can be obtained. However, automatically extracting information about building roofs from a point cloud is still a complex task. It can be performed with the help of additional information from other sources, or based on Lidar data alone. This article describes how to detect building roofs from a point cloud only. The shape of the roof is determined in three steps: the first finds the location of the building, the second precisely delineates its edges, and the third identifies the roof planes. The first step is based on grid analysis; the next two are based on the Hough transform. The Hough transform is a method for detecting collinear points, and is therefore a perfect match for determining the lines that describe a roof. Edges alone are not sufficient to properly determine the shape of a roof; the roof planes must also be indicated. Thus, in this study the Hough transform also served as a tool for detecting roof planes, the only difference being that in this case the tool used is three-dimensional.
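The core voting scheme of the 2-D Hough transform mentioned above can be shown in a few lines: each point votes for all discretised (theta, rho) line parameterisations passing through it, and collinear points pile up votes in one accumulator cell. The 3-D variant for roof planes works analogously with a plane parameterisation. This is a minimal textbook sketch, not the paper's implementation:

```python
import math
from collections import Counter

# Minimal Hough transform for lines: a point (x, y) lies on the line
# x*cos(theta) + y*sin(theta) = rho, so it votes for every discretised
# (theta, rho) pair satisfying that equation. The strongest cell in the
# accumulator identifies the dominant line through the data.
def hough_peak(points, n_theta=180, rho_step=0.1):
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    (t_best, rho_bin), votes = acc.most_common(1)[0]
    return math.pi * t_best / n_theta, rho_bin * rho_step, votes

# Ten points on the horizontal line y = 2 (theta = 90 deg, rho = 2).
theta, rho, votes = hough_peak([(x, 2.0) for x in range(10)])
print(round(math.degrees(theta)), rho, votes)
```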

  14. Edited Synoptic Cloud Reports from Ships and Land Stations Over the Globe, 1982-1991 (NDP-026B)

    DOE Data Explorer

    Hahn, Carole J. [University of Arizona; Warren, Stephen G. [University of Washington; London, Julius [University of Colorado

    1996-01-01

    Surface synoptic weather reports for the entire globe for the 10-year period from December 1981 through November 1991 have been processed, edited, and rewritten to provide a data set designed for use in cloud analyses. The information in these reports relating to clouds, including the present weather information, was extracted and put through a series of quality control checks. Reports not meeting certain quality control standards were rejected, as were reports from buoys and automatic weather stations. Correctable inconsistencies within reports were edited for consistency, so that the "edited cloud report" can be used for cloud analysis without further quality checking. Cases of "sky obscured" were interpreted by reference to the present weather code as to whether they indicated fog, rain or snow and were given appropriate cloud type designations. Nimbostratus clouds, which are not specifically coded for in the standard synoptic code, were also given a special designation. Changes made to an original report are indicated in the edited report so that the original report can be reconstructed if desired. While low cloud amount is normally given directly in the synoptic report, the edited cloud report also includes the amounts, either directly reported or inferred, of middle and high clouds, both the non-overlapped amounts and the "actual" amounts (which may be overlapped). Since illumination from the moon is important for the adequate detection of clouds at night, both the relative lunar illuminance and the solar altitude are given, as well as a parameter that indicates whether our recommended illuminance criterion was satisfied. This data set contains 124 million reports from land stations and 15 million reports from ships. Each report is 56 characters in length. The archive consists of 240 files, one file for each month of data for land and ocean separately. 
With this data set a user can develop a climatology for any particular cloud type or group of types, for any geographical region and any spatial and temporal resolution desired.

  15. CloudSat Reflectivity Data Visualization Inside Hurricanes

    NASA Technical Reports Server (NTRS)

    Suzuki, Shigeru; Wright, John R.; Falcon, Pedro C.

    2011-01-01

We have presented methods to rapidly produce visualization and outreach products from CloudSat data for science and the media. These methods combine data from several sources in the product generation process. In general, the process can be completely automatic, producing products and notifying potential users.

  16. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85%, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94% and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm2. Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
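The voxel-based segmentation step can be sketched as quantising points into a 3-D grid and merging occupied voxels that touch (6-connectivity) via flood fill. The voxel size and the toy data below are illustrative, not the parameters tuned for the wheat-ear study:

```python
from collections import defaultdict

# Voxel-based point-cloud segmentation sketch: quantise points into a
# voxel grid, then flood-fill occupied voxels through face-adjacent
# neighbours so each connected blob of voxels becomes one segment.
def voxel_segments(points, voxel=0.5):
    occ = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        occ[key].append(p)
    seen, segments = set(), []
    for start in occ:
        if start in seen:
            continue
        stack, segment = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            segment.extend(occ[v])
            for axis in range(3):            # 6-connected neighbours
                for step in (-1, 1):
                    nb = list(v); nb[axis] += step; nb = tuple(nb)
                    if nb in occ and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        segments.append(segment)
    return segments

# Two well-separated clusters -> two segments.
cloud = [(0, 0, 0), (0.2, 0.1, 0), (0.4, 0, 0.1),
         (5, 5, 5), (5.1, 5.2, 5)]
segs = voxel_segments(cloud)
print(len(segs))  # 2
```

A subsequent classification stage would then label each segment as ear or non-ear from shape features, as the abstract describes.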

  17. Deep Learning for Discovery of Atmospheric Mountain Waves in MODIS and GPS Data

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Li, J. D.; Rude, C. M.; Gowanlock, M.; Herring, T.

    2017-12-01

    Airflow over mountains can produce gravity waves, called lee waves, which can generate atmospheric turbulence. Since this turbulence poses dangers to aviation, it is critical to identify such regions reliably in an automated fashion. This work leverages two sources of data to go beyond an ad-hoc human visual approach for such identification: MODIS imagery containing cloud patterns formed by lee waves, and patterns in GPS signals resulting from the transmission through atmospheric turbulence due to lee waves. We demonstrate a novel machine learning approach that fuses these two data types to detect atmospheric turbulence associated with lee waves. A convolutional neural network is trained on MODIS tile images to automatically classify the lee wave cloud patterns with 96% correct classifications on a validation set of 20,000 MODIS 64x64 tiles over a test region in the Sierra Nevada Mountains. Signals from GPS stations of the Plate Boundary Observatory are used for feature extraction related to lee waves, in order to improve the confidence of a detection in the MODIS imagery at a given position. To our knowledge, this is the first technique to combine these images and time series data types to improve the spatial and temporal resolutions for large-scale measurements of lee wave formations. First results of this work show great potential for improving weather condition monitoring, hazard and cloud pattern detection, as well as GPS navigation uncertainties. We acknowledge support from NASA AISTNNX15AG84G (PI Pankratius), NASA NNX14AQ03G (PI Herring), and NSF ACI1442997 (PI Pankratius).

  18. Design of a Golf Swing Injury Detection and Evaluation open service platform with Ontology-oriented clustering case-based reasoning mechanism.

    PubMed

    Ku, Hao-Hsiang

    2015-01-01

Nowadays, people can easily use a smartphone to obtain information and request services. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user has a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user when his actions are erroneous. The Cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to deliver QoE services through QoE-based Ontology strategies, rules and events for this user. Furthermore, GoSIDE automatically triggers OCC4QoE and delivers popular rules for a new user. Experimental results illustrate that GoSIDE can provide appropriate detection for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Garzoglio, Gabriele; Ren, Shangping

FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware best-fit resource allocation algorithm can significantly improve VM launching time when a large number of VMs are launched simultaneously.
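The placement decision at the heart of an overhead-aware best-fit allocator can be sketched as: among hosts with enough free capacity, choose the one whose predicted launching overhead (from the tuned reference model) is smallest. The host data and the toy overhead predictor below are illustrative stand-ins, not FermiCloud's model:

```python
# Overhead-aware placement sketch: filter hosts by capacity, then pick
# the candidate whose predicted VM launching overhead is minimal. In
# the paper this predictor is the tuned launching-overhead reference
# model; here it is a toy lookup table.
def place_vm(hosts, vm_cores, predict_overhead):
    """hosts: dict name -> free cores; returns chosen host or None."""
    candidates = [h for h, free in hosts.items() if free >= vm_cores]
    if not candidates:
        return None
    return min(candidates, key=predict_overhead)

hosts = {"node-a": 2, "node-b": 8, "node-c": 4}
overhead_s = {"node-a": 95.0, "node-b": 12.0, "node-c": 30.0}
choice = place_vm(hosts, vm_cores=4, predict_overhead=overhead_s.get)
print(choice)  # node-b
```

Averaged over many simultaneous launches, steering VMs away from hosts with high predicted overhead is what reduces the mean launching time reported in the abstract.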

  20. New optical package and algorithms for accurate estimation and interactive recording of the cloud cover information over land and sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey

    2014-05-01

Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation, which are among the most important surface fluxes over sea and land. Massive estimates of total cloud cover, as well as cloud amount for different layers of clouds, are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. the structure of cloud cover under broken-cloud conditions, or parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images at high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special hardware and software extension of the package has been developed. These modules provide explicit detection of the optimal moment for shooting. For post-processing of the sky images we developed software implementing an algorithm that filters the sunburn effect under small and moderate cloud cover and broken-cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the colour mixture at each point and introducing a so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection included more than 3000 images for different cloud conditions, supplied with observations of standard parameters. 
The system is fully autonomous and includes a block for digital data collection to a hard disk. It has been tested for a wide range of open-ocean cloud conditions, and we will demonstrate pilot results of data processing and the physical interpretation of fractional cloud cover estimation.
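The per-pixel idea behind a "grayness rate" test can be illustrated simply: cloudy pixels are close to grey (R, G, B nearly equal) while clear sky is strongly blue. The exact index used by the authors is not reproduced here; the blue-to-red ratio threshold below is a hedged approximation for illustration only:

```python
# Toy grayness test for sky-camera pixels: clear sky is strongly blue
# (large blue-to-red ratio), cloud is near-grey (ratio close to 1).
# The threshold and the test itself are illustrative assumptions, not
# the paper's "grayness rate index".
def is_cloud(pixel, ratio_threshold=1.3):
    r, g, b = pixel
    return b / max(r, 1) < ratio_threshold   # grey-ish -> cloud

def cloud_fraction(pixels):
    flags = [is_cloud(p) for p in pixels]
    return sum(flags) / len(flags)

sky = [(60, 110, 220)] * 6      # saturated blue: clear sky
cloud = [(200, 205, 210)] * 4   # near-grey: cloud
frac = cloud_fraction(sky + cloud)
print(frac)  # 0.4
```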

  1. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    NASA Technical Reports Server (NTRS)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  2. A cloud shadow detection method combined with cloud height iteration and spectral analysis for Landsat 8 OLI data

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying

    2018-04-01

Landsat 8 OLI, though enhanced over prior Landsat instruments, achieves very high cloud detection precision but still faces great challenges in detecting cloud shadows. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods and has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors in the estimated cloud height can cause large-area cloud shadow detection errors. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two respects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but takes a possible dynamic range from 200 m to 12,000 m. In this case, the cloud shadow is not a specific location but a possible range. Further analysis is carried out within the possible range, based on the spectrum, to determine the cloud shadow location. This effectively avoids cloud shadow leakage caused by errors in determining the height of a cloud. (2) Object-based and pixel spectral analyses are combined to detect cloud shadows, realizing cloud shadow detection at both the target scale and the pixel scale. Based on an analysis of the spectral differences between cloud shadow and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against the results of artificial recognition. 
The results of these experiments indicated that this method can identify cloud shadows in different regions with correct accuracy exceeding 80%; approximately 5% of the areas were wrongly identified, and approximately 10% of the cloud shadow areas were missed. The accuracy of this method is clearly higher than that of Fmask, whose correct accuracy is lower than 60% with approximately 40% of shadows missed.
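The geometric core of such methods is simple: for an assumed cloud-top height h, a cloud pixel's shadow is displaced along the anti-solar direction by h·tan(solar zenith), decomposed with the solar azimuth. Sweeping h over a range (e.g. 200 m to 12,000 m, as in the paper) traces the band of candidate shadow locations. The sign conventions and pixel size below are illustrative assumptions:

```python
import math

# Shadow displacement of a cloud pixel for an assumed cloud height.
# Convention (assumed): image x = east (columns), y = north (rows up);
# the shadow falls away from the sun, i.e. along minus the horizontal
# sun-direction vector (sin(az), cos(az)).
def shadow_offset(height_m, sun_zenith_deg, sun_azimuth_deg, pixel_m=30.0):
    d = height_m * math.tan(math.radians(sun_zenith_deg))
    az = math.radians(sun_azimuth_deg)
    dx = -d * math.sin(az) / pixel_m   # eastward offset in pixels
    dy = -d * math.cos(az) / pixel_m   # northward offset in pixels
    return dx, dy

# Sun due south (azimuth 180 deg) at 45 deg zenith, cloud at 3000 m:
# the shadow lands 3000 m (= 100 Landsat pixels) to the north.
dx, dy = shadow_offset(3000, 45.0, 180.0)
print(round(dx), round(dy))  # 0 100
```

Spectral analysis within the swept band then pins down the actual shadow location, as described in point (1) of the abstract.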

  3. What is the role of laminar cirrus cloud on regulating the cross-tropopause water vapor transport?

    NASA Astrophysics Data System (ADS)

    Wu, D. L.; Gong, J.; Tsai, V.

    2016-12-01

Laminar cirrus is an extremely thin ice cloud that persistently inhabits the tropical and subtropical tropopause. Due to its sub-visible optical depth and high formation altitude, knowledge about the characteristics of this special type of cloud is very limited, and debates are ongoing about its role in regulating the cross-tropopause transport of water vapor. The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite has been continuously providing unprecedented details of laminar cirrus since its launch in 2006. In this research, we adapted Winker and Trepte (1998)'s eyeball detection method. A Java-based applet and graphical user interface (GUI) were developed to manually select the laminars; the tool then automatically records cloud properties such as spatial location, shape, thickness, tilt angle, and whether the cloud is isolated or directly above a deep convective cloud. Monthly statistics of the laminar cirrus are then analyzed separately according to orbit node, isolated/convective, banded/non-banded, etc. The monthly statistics support a diurnal difference in the occurrence frequency and formation height of laminar cirrus. Also, isolated and convective laminars show diverse behaviors (height, location, distribution, etc.), which strongly implies that their formation mechanisms and their roles in depleting upper-tropospheric water vapor are distinct. We further study the relationship between laminar characteristics and collocated, coincident water vapor gradient measurements from Aura Microwave Limb Sounder (MLS) observations below and above the laminars. The identified relationship provides a quantitative answer to the role laminar cirrus plays in regulating the water vapor entering the stratosphere.

  4. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete with TLS point clouds on geometric quality, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. This methodology will allow the needs of rock mass evaluation to be answered more clearly by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded into Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these parameters are comparable to the manually extracted ones, their quality is inferior to the parameters extracted from the TLS point cloud.
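Turning a fitted discontinuity-plane normal into the attitude values plotted on a Schmidt (equal-area) net reduces to two angles: dip (inclination from horizontal) and dip direction (azimuth of steepest descent). The sketch below assumes an east-north-up coordinate frame and an upward-pointing normal; it is standard structural-geology arithmetic, not the paper's workflow:

```python
import math

# Dip and dip direction from a plane normal, assuming x = east,
# y = north, z = up. For an upward normal, the dip equals the angle
# between the normal and vertical, and the dip direction equals the
# azimuth of the normal's horizontal projection.
def attitude(normal):
    nx, ny, nz = normal
    if nz < 0:                       # force the normal to point upward
        nx, ny, nz = -nx, -ny, -nz
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    dip = math.degrees(math.acos(nz / norm))
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360
    return dip, dip_dir

# A plane dipping 45 degrees toward the east (dip direction 090).
dip, ddir = attitude((math.sin(math.radians(45)), 0.0,
                      math.cos(math.radians(45))))
print(round(dip), round(ddir))  # 45 90
```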

  5. Performance Analysis of a Pole and Tree Trunk Detection Method for Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H.

    2011-09-01

    Dense point clouds can be collected efficiently from large areas using mobile laser scanning (MLS) technology. Accurate MLS data can be used for detailed 3D modelling of the road surface and objects around it. The 3D models can be utilised, for example, in street planning and maintenance and noise modelling. Utility poles, traffic signs, and lamp posts can be considered an important part of road infrastructure. Poles and trees stand out from the environment and should be included in realistic 3D models. Detection of narrow vertical objects, such as poles and tree trunks, from MLS data was studied. MLS produces huge amounts of data and, therefore, processing methods should be as automatic as possible and for the methods to be practical, the algorithms should run in an acceptable time. The automatic pole detection method tested in this study is based on first finding point clusters that are good candidates for poles and then separating poles and tree trunks from other clusters using features calculated from the clusters and by applying a mask that acts as a model of a pole. The method achieved detection rates of 77.7% and 69.7% in the field tests while 81.0% and 86.5% of the detected targets were correct. Pole-like targets that were surrounded by other objects, such as tree trunks that were inside branches, were the most difficult to detect. Most of the false detections came from wall structures, which could be corrected in further processing.

  6. Towards a social and context-aware multi-sensor fall detection and risk assessment platform.

    PubMed

    De Backere, F; Ongenae, F; Van den Abeele, F; Nelis, J; Bonte, P; Clement, E; Philpott, M; Hoebeke, J; Verstichel, S; Ackaert, A; De Turck, F

    2015-09-01

For elderly people, fall incidents are life-changing events that lead to degradation or even loss of autonomy. Current fall detection systems are not integrated and are often associated with undetected falls and/or false alarms. In this paper, a social- and context-aware multi-sensor platform is presented, which integrates information gathered by a plethora of fall detection systems and sensors at the home of the elderly, using a cloud-based solution built on an ontology. Within the ontology, both static and dynamic information is captured to model the situation of a specific patient and his/her (in)formal caregivers. This integrated contextual information makes it possible to automatically and continuously assess the fall risk of the elderly, to detect falls more accurately and identify false alarms, and to automatically notify the appropriate caregiver, e.g., based on location or current task. The main advantage of the proposed platform is that multiple fall detection systems and sensors can be integrated, as they can easily be plugged in according to the specific needs of the patient. The combination of several systems and sensors leads to a more reliable system with better accuracy. The proof of concept was tested with the visualizer, which enables better analysis of the data flow within the back-end, and with the portable testbed, which is equipped with several different sensors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
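A minimal version of the least-cost-path solver the paper adapts is Dijkstra's algorithm on a raster of per-pixel costs. In the real tool, a specially tailored cost function (e.g. low cost along dark fracture pixels) replaces the toy `grid` below; the 4-connectivity and convention of including the start cell's cost are illustrative choices:

```python
import heapq

# Dijkstra least-cost path on a 4-connected raster of per-cell costs.
# The accumulated cost includes the start cell; the path is recovered
# by walking back through the predecessor map.
def least_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # backtrack from goal to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Cheap corridor along the top row and right column (cost 1 cells).
grid = [[1, 1, 1],
        [9, 9, 1],
        [9, 9, 1]]
path, cost = least_cost_path(grid, (0, 0), (2, 2))
print(cost)  # 5
```

Interpolating a fracture trace between two manually clicked control points is exactly this computation, with the cost raster derived from the imagery or point cloud so that the cheap corridor follows the structure.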

  8. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with call software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
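    The voxelization step can be illustrated with a minimal sketch: points snapped to a regular grid yield occupied voxels, each of which could be exported as an eight-node solid element. This is an illustration of the general idea, not the authors' section-stacking procedure.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Snap points to a regular grid and return the set of occupied
    voxel indices (candidates for solid finite elements)."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    occupied = np.unique(idx, axis=0)
    return occupied, origin
```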

  10. Validation of satellite-based CI detection of convective storms via backward trajectories

    NASA Astrophysics Data System (ADS)

    Dietzsch, Felix; Senf, Fabian; Deneke, Hartwig

    2013-04-01

    Within this study, the rapid development and evolution of several severe convective events is investigated based on geostationary satellite images and related to previous findings on suitable detection thresholds for convective initiation. Nine severe events that occurred over Central Europe in summer 2012 have been selected and classified into the categories supercell, mesoscale convective system, frontal system and orographic convection. The cases are traced backward from the fully developed convective systems to their initial state using ECMWF data with 0.5 degree spatial resolution and 3 h temporal resolution. For every case the storm life cycle was quantified through the storm's infrared (IR) brightness temperatures obtained from Meteosat Second Generation SEVIRI with 5 min temporal resolution and 4.5 km spatial resolution. In addition, cloud products including cloud optical thickness, cloud phase and effective droplet radius have been taken into account. A semi-automatic adjustment of the tracks within a search box was necessary to improve the tracking accuracy and thus the quality of the derived life cycles. The combination of IR brightness temperatures, IR temperature time trends and satellite-based cloud products revealed different stages of storm development, such as updraft intensification and glaciation, well in most cases, confirming previously developed CI criteria from other studies. The vertical temperature gradient between 850 and 500 hPa, the Total-Totals Index and the storm-relative helicity have been derived from ECMWF data and used to characterize the synoptic environment of the storms. The results suggest that the storm-relative helicity also influences the lifetime of convective storms over Central Europe, confirming previous studies. Tracking accuracy has proven to be a crucial issue in our study, and a fully automated approach is required to enlarge the number of cases for significant statistics.

  11. Hybrid Automatic Building Interpretation System

    NASA Astrophysics Data System (ADS)

    Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.

    2011-09-01

    HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work automatically to a high degree. The hybrid method uses different data sources, intending to exploit the advantages of each: 3D point clouds usually provide good height and surface data, whereas high-spatial-resolution aerial images provide important information about edges and details of roof objects like dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS is a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. It continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is performed. In addition, some roof objects like dormers and chimneys are extracted based on aerial images and added to the models. In this paper, the methods used are described and some results are presented.

  12. Design of Provider-Provisioned Website Protection Scheme against Malware Distribution

    NASA Astrophysics Data System (ADS)

    Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka

    Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
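    The core filtering rule described above, blocking malware downloads served by websites that have already received exploit codes, can be sketched as a toy rule; the signature strings and site names below are purely illustrative, not the paper's actual signatures.

```python
# Toy sketch of the proposed scheme: a web honeypot records which
# sites have received exploit codes, and downloads from those sites
# are subsequently refused.  Signatures and names are illustrative.
exploit_signatures = ("/etc/passwd", "<script>", "UNION SELECT")

compromised_sites = set()

def observe_request(site, payload):
    """Honeypot-style bookkeeping: remember sites hit by exploit codes."""
    if any(sig in payload for sig in exploit_signatures):
        compromised_sites.add(site)

def allow_download(site):
    """Filter rule: refuse file downloads originating from hopping sites."""
    return site not in compromised_sites
```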

  13. A Cloud Boundary Detection Scheme Combined with ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.

    2018-04-01

    Cloud detection in optical remote sensing images is one of the most important problems in remote sensing data processing. To address the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to learn a multi-level feature-generation model of clouds from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability that each superpixel belongs to the cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected for cloud detection tests and compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection is increased by more than 5 %, and that both thin and thick clouds, as well as whole cloud boundaries, are detected well on different imaging platforms.
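    The final step, labelling whole superpixels from per-pixel probabilities, can be sketched with plain numpy: average the pixel probabilities within each superpixel label and threshold the result. In the paper the labels come from ASLIC and the probabilities from the trained CNN; both are stubbed here.

```python
import numpy as np

def superpixel_cloud_mask(pixel_prob, labels, threshold=0.5):
    """Average per-pixel cloud probabilities within each superpixel
    and label whole superpixels as cloud/non-cloud."""
    n = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=pixel_prob.ravel(), minlength=n)
    counts = np.bincount(labels.ravel(), minlength=n)
    sp_prob = sums / counts
    return sp_prob[labels] >= threshold, sp_prob
```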

  14. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent-continuum estimates of rock mass properties. Although several advanced methodologies have been developed in recent decades, a complete characterization of discontinuity geometry is still challenging in practice, due to the scale-dependent variability of fracture patterns and difficult access to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density or homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or an unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbour and Principal Component Analysis algorithms optimized on point cloud accuracy and a specified typical facet size. 
Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using simple custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a graphical user interface (GUI) has been developed to manage data processing and provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and on real case studies, validating the results against existing geomechanical datasets.
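    The first step, identifying coplanar facets via K-Nearest Neighbour and Principal Component Analysis, reduces to estimating a normal per point from the eigenvector of the local covariance with the smallest eigenvalue. A brute-force numpy sketch (quadratic in the number of points, so for illustration only, not the tool's optimized implementation):

```python
import numpy as np

def facet_normals(points, k=8):
    """Estimate a surface normal per point from the PCA of its k nearest
    neighbours: the eigenvector of the neighbourhood covariance matrix
    associated with the smallest eigenvalue."""
    # all-pairs squared distances (O(n^2) memory: illustration only)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]          # includes the point itself
    normals = np.empty_like(points)
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(0)
        cov = nbrs.T @ nbrs
        w, v = np.linalg.eigh(cov)               # eigenvalues ascending
        normals[i] = v[:, 0]                     # smallest-eigenvalue vector
    return normals
```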

  15. Clumpy filaments of the Chamaeleon I cloud: C18O mapping with the SEST

    NASA Astrophysics Data System (ADS)

    Haikala, L. K.; Harju, J.; Mattila, K.; Toriseva, M.

    2005-02-01

    The Chamaeleon I dark cloud (Cha I) has been mapped in C18O with an angular resolution of 1 arcmin using the SEST telescope. The large scale structures previously observed with lower spatial resolution in the cloud turn into a network of clumpy filaments. The automatic Clumpfind routine developed by Williams et al. (1994) is used to identify individual clumps in a consistent way. Altogether 71 clumps were found and the total mass of these clumps is 230 M⊙. The dense ``cores'' detected with the NANTEN telescope (Mizuno et al. 1999) and the very cold cores detected in the ISOPHOT serendipity survey (Tóth et al. 2000) form parts of these filaments but decompose into numerous ``clumps''. The filaments are preferentially oriented at right angles to the large-scale magnetic field in the region. We discuss the cloud structure, the physical characteristics of the clumps and the distribution of young stars. The observed clump mass spectrum is compared with the predictions of the turbulent fragmentation model of Padoan & Nordlund (2002). Agreement is found if fragmentation has been driven by very large-scale hypersonic turbulence, and if by now it has had time to dissipate into modestly supersonic turbulence in the interclump gas. According to numerical simulations, large-scale turbulence should have resulted in filamentary structures as seen in Cha I. The well-oriented magnetic field does not, however, support this picture, but suggests magnetically steered large-scale collapse. The origin of filaments and clumps in Cha I is thus controversial. A possible solution is that the characterization of the driving turbulence fails and that in fact different processes have been effective on small and large scales in this cloud. Based on observations collected at the European Southern Observatory, La Silla, Chile. FITS files are only available in electronic form at http://www.edpsciences.org
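    Clumpfind-style decomposition assigns every pixel above a threshold to a local intensity maximum. A toy 2-D steepest-ascent version (a simplification, not Williams et al.'s contour-based algorithm) looks like:

```python
import numpy as np

def assign_clumps(intensity, threshold):
    """Toy clump decomposition: every pixel above the threshold is
    assigned to the local maximum it reaches by steepest ascent over
    its 8 neighbours.  Returns an integer label map (0 = no clump)."""
    rows, cols = intensity.shape
    labels = np.zeros(intensity.shape, dtype=int)
    peaks = {}

    def ascend(r, c):
        # walk uphill until no neighbour is brighter
        while True:
            best, br, bc = intensity[r, c], r, c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and intensity[nr, nc] > best:
                        best, br, bc = intensity[nr, nc], nr, nc
            if (br, bc) == (r, c):
                return r, c
            r, c = br, bc

    for r in range(rows):
        for c in range(cols):
            if intensity[r, c] >= threshold:
                peak = ascend(r, c)
                labels[r, c] = peaks.setdefault(peak, len(peaks) + 1)
    return labels
```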

  16. VizieR Online Data Catalog: OGLE eclipsing binaries in LMC (Wyrzykowski+, 2003)

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2003-09-01

    We present the catalog of 2580 eclipsing binary stars detected in a 4.6 square degree area of the central parts of the Large Magellanic Cloud. The photometric data were collected during the second phase of the OGLE microlensing search from 1997 to 2000. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. Basic statistics of the eclipsing stars are presented. Also provided is a list of 36 candidate detached eclipsing binaries for spectroscopic study and for precise LMC distance determination. The full catalog is accessible from the OGLE Internet archive. (2 data files).

  17. Volcanic ash cloud detection from space: a preliminary comparison between RST approach and water vapour corrected BTD procedure

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Marchese, Francesco; Merucci, Luca; Pergola, Nicola; Corradini, Stefano; Tramutoli, Valerio

    2010-05-01

    Volcanic eruptions can inject large amounts (Tg) of gas and particles into the troposphere and, sometimes, into the stratosphere. Besides the main gases (H2O, CO2, SO2 and HCl), volcanic clouds contain a mix of silicate ash particles in the size range from 0.1 μm to several mm or larger. Interest in detecting the presence of ash is high, in particular because ash represents a serious hazard for air traffic. Particles with dimensions of several millimeters can damage the aircraft structure (windows, wings, ailerons), while particles smaller than 10 μm may be extremely dangerous for the jet engines and are undetectable by pilots at night or in low-visibility conditions. Satellite data are useful for measuring volcanic clouds because of the large vertical range of these emissions and their likely large horizontal spread. Moreover, since volcanoes are globally distributed and inherently dangerous, satellites offer a practical and safe platform from which to make observations. Two different techniques used to detect volcanic clouds from satellite data are considered here for a preliminary comparison, with possible implications for quantitative retrievals of plume parameters. In particular, the Robust Satellite Techniques (RST) approach and a water vapour corrected version of the Brightness Temperature Difference (BTD) procedure will be compared. The RST approach is based on the multi-temporal analysis of historical, long-term satellite records, devoted to a prior characterization of the measured signal, in terms of expected value and natural variability, and a subsequent recognition of signal anomalies by an automatic, unsupervised change detection step. The BTD method is based on the difference between the brightness temperatures measured in two channels centered at about 11 and 12 μm. 
To take into account the atmospheric water vapour differential absorption in the 11-12 μm spectral range that tends to reduce (and in some cases completely mask) the BTD signal, a water vapor correction procedure, based on measured or synthetic atmospheric profiles, has been applied. Results independently achieved by both methods during recent Mt. Etna eruptions are presented, compared and discussed also in terms of further implications for quantitative retrievals of plume parameters.
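    The BTD test itself is essentially a one-liner: silicate ash makes the 11 μm brightness temperature colder than the 12 μm one, so negative (water-vapour-corrected) differences flag ash, while water/ice clouds give positive differences. A minimal sketch, with the correction term supplied externally (e.g. from measured or synthetic atmospheric profiles):

```python
import numpy as np

def ash_mask(bt11, bt12, wv_correction=0.0, threshold=0.0):
    """Brightness-temperature-difference ash test: BTD = BT(11um) -
    BT(12um), plus an external water-vapour correction term; pixels
    with corrected BTD below the threshold are flagged as ash."""
    btd = bt11 - bt12 + wv_correction
    return btd < threshold, btd
```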

  18. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints using the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D correspondences onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probabilities of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and a lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
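    The conditional-sampling idea can be illustrated on a toy 2-D line-fitting problem: always fit the hypothesis to the points with the highest inlier probabilities, then down-weight the hypothesis points in proportion to the support of the resulting model. This is a simplified sketch of the BaySAC principle, not the paper's registration pipeline or its exact Bayes update.

```python
import numpy as np

def baysac_line(points, n_iter=20, tol=0.1):
    """Minimal BaySAC sketch for 2-D line fitting.  Each iteration fits
    a line through the 2 points with the highest inlier probability and
    rescales those probabilities by the model's inlier fraction
    (a crude stand-in for the Bayes update)."""
    prob = np.full(len(points), 0.5)
    best_line, best_count = None, -1
    for _ in range(n_iter):
        i, j = np.argsort(prob)[-2:]             # highest-probability pair
        (x1, y1), (x2, y2) = points[i], points[j]
        # line through the pair: a*x + b*y + c = 0
        a, b = y2 - y1, x1 - x2
        c = x2 * y1 - x1 * y2
        norm = np.hypot(a, b)
        if norm == 0:                            # coincident points
            prob[[i, j]] *= 0.5
            continue
        d = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = d < tol
        if inliers.sum() > best_count:
            best_count = int(inliers.sum())
            best_line = (a / norm, b / norm, c / norm)
        prob[[i, j]] *= inliers.mean()           # down-weight weak hypotheses
    return best_line, best_count
```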

  19. Evaluation of automatic cloud removal method for high elevation areas in Landsat 8 OLI images to improve environmental indexes computation

    NASA Astrophysics Data System (ADS)

    Alvarez, César I.; Teodoro, Ana; Tierra, Alfonso

    2017-10-01

    Thin clouds are frequent in optical remote sensing data and in most cases prevent retrieval of pure surface reflectance, which is required to compute indexes such as the Normalized Difference Vegetation Index (NDVI). This paper aims to evaluate the Automatic Cloud Removal Method (ACRM) algorithm over a high-elevation city, Quito (Ecuador), which lies at an altitude of 2800 meters above sea level and where clouds are present throughout the year. The ACRM algorithm fits a linear regression between each Landsat 8 OLI band and the cirrus band, and uses the slope obtained from that regression to remove cloud effects. The algorithm was applied without any reference image or mask. The original ACRM algorithm did not perform well over Quito; we therefore improved it by using a different slope value (Improved ACRM). The resulting NDVI was then compared with a reference NDVI MODIS product (MOD13Q1). The Improved ACRM algorithm gave successful results compared with the original ACRM algorithm. In the future, the Improved ACRM algorithm needs to be tested in different regions of the world, under different conditions, to evaluate whether it works successfully in all of them.
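    The regression-based correction and the NDVI computation can be sketched as follows. The slope here is simply refit per band with np.polyfit, an illustrative stand-in for the paper's original and improved slope values.

```python
import numpy as np

def remove_thin_cloud(band, cirrus):
    """ACRM-style correction sketch: regress the band against the cirrus
    band and subtract the cloud contribution predicted by the fitted
    slope (illustrative; not the authors' exact slope values)."""
    slope, _ = np.polyfit(cirrus.ravel(), band.ravel(), 1)
    return band - slope * cirrus

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)
```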

  20. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, e.g., obtained from low-cost global positioning system (GPS) and inertial measurement unit (IMU) sensors.

  1. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  2. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

    This paper explores methods for building a regional Internet of Things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing technology, reliable storage of massive numbers of small files, and the implementation of a quick search function.

  3. VizieR Online Data Catalog: OGLE II SMC eclipsing binaries (Wyrzykowski+, 2004)

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M. K.; Zebrun, K.; Soszinski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2009-03-01

    We present a new version of the OGLE-II catalog of eclipsing binary stars detected in the Small Magellanic Cloud, based on the Difference Image Analysis catalog of variable stars in the Magellanic Clouds containing data collected from 1997 to 2000. We found 1351 eclipsing binary stars in the central 2.4 square degree area of the SMC; 455 stars are newly discovered objects not found in the previous release of the catalog. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. The full catalog with individual photometry is accessible from the OGLE Internet archive at ftp://sirius.astrouw.edu.pl/ogle/ogle2/var_stars/smc/ecl . Regular observations of the SMC fields started on June 26, 1997 and covered about 2.4 square degrees of the central parts of the SMC. Reductions of the photometric data collected up to the end of May 2000 were performed with the Difference Image Analysis (DIA) package. (1 data file).

  4. Cloud Detection by Fusing Multi-Scale Convolutional Features

    NASA Astrophysics Data System (ADS)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow-covered areas and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the deep cloud detection convolutional network used for comparison. The effectiveness of MSCN makes it promising for practical application to many kinds of optical imagery.
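    The fusion idea, combining predictions made at several scales, can be mimicked in numpy by upsampling each coarse probability map to the finest grid and averaging. A real MSCN fuses convolutional feature maps inside the network; this stand-in only illustrates the multi-scale averaging and assumes the map shapes divide evenly.

```python
import numpy as np

def fuse_multiscale(prob_maps):
    """Fuse per-scale probability maps: nearest-neighbour upsample each
    coarse map to the finest resolution (via a Kronecker product) and
    average.  Assumes coarse shapes divide the finest shape evenly."""
    target = max(p.shape for p in prob_maps)
    fused = np.zeros(target)
    for p in prob_maps:
        fr, fc = target[0] // p.shape[0], target[1] // p.shape[1]
        fused += np.kron(p, np.ones((fr, fc)))   # block upsampling
    return fused / len(prob_maps)
```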

  5. Determination of land use in Minnesota by automatic interpretation of ERTS MSS data

    NASA Technical Reports Server (NTRS)

    Zirkle, R. E.; Pile, D. R.

    1973-01-01

    This program aims to determine the feasibility of identifying land use in Minnesota by automatic interpretation of ERTS-MSS data. Ultimate objectives include the establishment of land use delineation and quantification by computer processing with a minimum of human operator interaction. This implies not only that reflectivity as a function of calendar time can be catalogued effectively, but also that the effects of uncontrolled variables can be identified and compensated for. Clouds are the major uncontrollable data pollutant, so part of the initial effort is devoted to determining their effect and constructing a model to help correct, or justifiably ignore, affected data. Other short-range objectives are to identify and verify measurements giving results of importance to land managers. Lake counting is a prominent example. Open water is easily detected in band 7 data, with some support from either band 4 or band 5 to remove ambiguities. Land managers and conservationists periodically commission studies to measure water bodies and total water count within specified areas.
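    Lake counting from band 7 reduces to thresholding the near-IR reflectance (open water is dark in that band) and counting connected components of water pixels; a minimal sketch with an illustrative threshold value:

```python
import numpy as np
from collections import deque

def count_lakes(band7, water_threshold):
    """Count open-water bodies: low near-IR (band 7) reflectance marks
    water; 4-connected components of water pixels are counted as lakes."""
    water = band7 < water_threshold
    seen = np.zeros(water.shape, dtype=bool)
    rows, cols = water.shape
    lakes = 0
    for r in range(rows):
        for c in range(cols):
            if water[r, c] and not seen[r, c]:
                lakes += 1
                queue = deque([(r, c)])          # flood-fill this component
                seen[r, c] = True
                while queue:
                    cr, cc = queue.popleft()
                    for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                        if 0 <= nr < rows and 0 <= nc < cols and water[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            queue.append((nr, nc))
    return lakes
```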

  6. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, thresholding, non-cloudy pixel re-examination, and a cross-band filter method are applied in sequence to determine the cloud statistics. For post-processing analysis, a box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and their correctness across the spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments comparing clustering-based and spatial thresholding methods, including Otsu's method and the Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
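    Otsu's method, which the authors select as the thresholding step, picks the grey level that maximizes the between-class variance of the two resulting pixel classes; a compact numpy implementation:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: return the grey-level threshold that maximizes
    the between-class variance of the histogram split."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                   # class-0 weight up to each bin
    mu = np.cumsum(p * centers)         # cumulative mean
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    between[~np.isfinite(between)] = 0  # empty / degenerate splits
    return centers[np.argmax(between)]
```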

  7. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point clouds of roads and roadside objects. Point clouds of urban areas, residential areas, and arterial roads are useful for infrastructure maintenance, map creation, and automated driving. However, the data size of point clouds measured over large areas is enormous. A large storage capacity is required to store such point clouds, and heavy loads are placed on the network if point clouds are transferred through it. Therefore, it is desirable to reduce the data size of point clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. The images are then encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point clouds without deteriorating their quality.

  8. Characterization of AVHRR global cloud detection sensitivity based on CALIPSO-CALIOP cloud optical thickness information: demonstration of results based on the CM SAF CLARA-A2 climate data record

    NASA Astrophysics Data System (ADS)

    Karlsson, Karl-Göran; Håkansson, Nina

    2018-02-01

    The sensitivity in detecting thin clouds of the cloud screening method being used in the CM SAF cloud, albedo and surface radiation data set from AVHRR data (CLARA-A2) cloud climate data record (CDR) has been evaluated using cloud information from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite. The sensitivity, including its global variation, has been studied based on collocations of Advanced Very High Resolution Radiometer (AVHRR) and CALIOP measurements over a 10-year period (2006-2015). The cloud detection sensitivity has been defined as the minimum cloud optical thickness for which 50 % of clouds could be detected, with the global average sensitivity estimated to be 0.225. After using this value to reduce the CALIOP cloud mask (i.e. clouds with optical thickness below this threshold were interpreted as cloud-free cases), cloudiness results were found to be basically unbiased over most of the globe except over the polar regions where a considerable underestimation of cloudiness could be seen during the polar winter. The overall probability of detecting clouds in the polar winter could be as low as 50 % over the highest and coldest parts of Greenland and Antarctica, showing that a large fraction of optically thick clouds also remains undetected here. The study included an in-depth analysis of the probability of detecting a cloud as a function of the vertically integrated cloud optical thickness as well as of the cloud's geographical position. Best results were achieved over oceanic surfaces at mid- to high latitudes where at least 50 % of all clouds with an optical thickness down to a value of 0.075 were detected. Corresponding cloud detection sensitivities over land surfaces outside of the polar regions were generally larger than 0.2 with maximum values of approximately 0.5 over the Sahara and the Arabian Peninsula. 
For polar land surfaces the values were close to 1 or higher with maximum values of 4.5 for the parts with the highest altitudes over Greenland and Antarctica. It is suggested to quantify the detection performance of other CDRs in terms of a sensitivity threshold of cloud optical thickness, which can be estimated using active lidar observations. Validation results are proposed to be used in Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulation Package (COSP) simulators for cloud detection characterization of various cloud CDRs from passive imagery.
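The sensitivity definition used above (the minimum cloud optical thickness at which 50 % of clouds are detected) can be estimated from collocated imager/lidar matchups roughly as follows. This is a minimal sketch under assumed inputs, not the CLARA-A2 processing code; the function name, binning scheme and arguments are illustrative:

```python
import numpy as np

def detection_sensitivity(cot, detected, n_bins=20):
    """Estimate the cloud optical thickness (COT) at which half of all
    clouds are detected, from collocated matchups.

    cot      -- reference (e.g. CALIOP) cloud optical thickness per matchup
    detected -- boolean: did the imager cloud mask flag this case cloudy?
    """
    # Bin the matchups in log(COT) and compute the detected fraction per bin.
    edges = np.logspace(np.log10(cot.min()), np.log10(cot.max()), n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    frac = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (cot >= lo) & (cot < hi)
        frac.append(detected[in_bin].mean() if in_bin.any() else np.nan)
    frac = np.array(frac)
    # Interpolate the COT at which the detected fraction crosses 50 %.
    ok = ~np.isnan(frac)
    return float(np.interp(0.5, frac[ok], centers[ok]))
```

With synthetic matchups whose detection probability rises through 50 % at a COT of 0.225, the function recovers a threshold close to that value.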

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
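The core idea of turning a point cloud into solid voxel elements can be sketched in a few lines. This is a greatly simplified illustration of occupancy voxelization, not the authors' procedure; the function name and voxel size are assumptions:

```python
import numpy as np

def occupied_voxels(points, voxel=0.1):
    """Map 3D points to integer voxel indices; the set of occupied voxels
    is the raw material for a solid voxel-element finite element mesh."""
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    return np.unique(idx, axis=0)   # one row per occupied voxel
```

Each occupied voxel would then become one hexahedral element, with neighbouring indices sharing nodes.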

  10. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
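The bi-directional nearest-neighbour similarity check used for the keypoint correspondences can be sketched as a generic mutual-nearest-neighbour matcher on descriptor vectors; the brute-force distance computation and names are illustrative, not the authors' implementation:

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Bi-directional (mutual) nearest-neighbour matching of keypoint
    descriptors. Returns index pairs (i, j) where desc_a[i]'s nearest
    neighbour is desc_b[j] and vice versa."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)   # for each source descriptor, closest target
    nn_ba = d.argmin(axis=0)   # for each target descriptor, closest source
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

A RANSAC-style geometric check, as in the paper's threshold-free modified RANSAC, would then prune the surviving pairs.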

  11. Automatic determination of trunk diameter, crown base and height of Scots pine (Pinus sylvestris L.) based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish title: Automatyczne określanie średnicy pnia, podstawy korony oraz wysokości sosny zwyczajnej (Pinus sylvestris L.) na podstawie analiz chmur punktów 3D pochodzących z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    Rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, trunk shape) and of tree and lumber size (tree volume), is slowly becoming common practice. Besides measurement precision, the primary added value of TLS is the possibility to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed GNOM algorithms, tree trunks were located on the circular research plot and measured: the diameter at breast height (DBH, 1.3 m), further trunk diameters at different heights, the crown base, the trunk volume (the selection measurement method) and the tree crown. Field work was performed in the territory of the Niepolomice Forest, in an unmixed Scots pine (Pinus sylvestris L.) stand, on a circular plot with a radius of 18 m containing 16 pine trees (14 of which were later cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pines was determined fully automatically with the GNOM algorithm, with an accuracy of +2.1% compared to the reference measurement with a DBH caliper.
The mean absolute measurement error in the point cloud using the semi-automatic methods "PIXEL" (point-to-point) and "PIPE" (cylinder fitting) in FARO Scene 5.x was 3.5% and 5.0%, respectively. The reference height was taken as the tape measurement performed on the felled trees. The mean error of automatic tree height determination by the GNOM algorithm from the TLS point clouds amounted to 6.3%, slightly higher than with the manual method of measurement on profiles in TerraScan (Terrasolid; error of 5.6%). The relatively high error value is probably related mainly to the small number of TLS points in the upper parts of the crowns. The crown height measurement showed an error of +9.5%; the reference in this case was tape measurement performed on the trunks of the felled pines. Processing the point clouds with the GNOM algorithms for the 16 analysed trees took no longer than 10 min (37 s/tree). The paper demonstrates the innovation of TLS measurement and its high precision in acquiring biometric data in forestry, and at the same time the continued need to increase the degree of automation in processing 3D point clouds from terrestrial laser scanning.
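Automatic DBH measurement of the kind performed here usually reduces to fitting a circle to a thin horizontal slice of stem points. A minimal sketch using an algebraic (Kåsa) least-squares circle fit, not the GNOM algorithm itself; the function name is illustrative:

```python
import numpy as np

def stem_diameter(xy):
    """Least-squares (Kasa) circle fit to a 2D slice of stem points,
    e.g. extracted around 1.3 m height for DBH; returns the diameter."""
    x, y = xy[:, 0], xy[:, 1]
    # Linearized circle equation: x^2 + y^2 = 2*cx*x + 2*cy*y + c0
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c0 + cx ** 2 + cy ** 2)
    return 2.0 * radius
```

Real stem slices additionally need outlier rejection (branches, noise), which is where the robustness of dedicated tools lies.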

  12. An assessment of thin cloud detection by applying bidirectional reflectance distribution function model-based background surface reflectance using Geostationary Ocean Color Imager (GOCI): A case study for South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Hye-Won; Yeom, Jong-Min; Shin, Daegeun; Choi, Sungwon; Han, Kyung-Soo; Roujean, Jean-Louis

    2017-08-01

    In this study, a new assessment of thin cloud detection with the application of bidirectional reflectance distribution function (BRDF) model-based background surface reflectance was undertaken by interpreting surface spectra characterized using the Geostationary Ocean Color Imager (GOCI) over a land surface area. Unlike cloud detection over the ocean, the detection of cloud over land surfaces is difficult due to the complicated surface scattering characteristics, which vary among land surface types. Furthermore, in the case of thin clouds, in which the surface and cloud radiation are mixed, it is difficult to detect the clouds in both land and atmospheric fields. Therefore, to interpret background surface reflectance, especially underneath cloud, the semiempirical BRDF model was used to simulate surface reflectance by reflecting solar angle-dependent geostationary sensor geometry. For quantitative validation, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data were used to make a comparison with the proposed cloud masking result. The new cloud masking scheme achieved a high probability of detection (POD = 0.82) compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) (POD = 0.808) for all cloud cases. In particular, the agreement between the CALIPSO cloud product and new GOCI cloud mask was over 94% when detecting thin cloud (e.g., altostratus and cirrus) from January 2014 to June 2015. This result is relatively high in comparison with the result from the MODIS Collection 6 cloud mask product (MYD35).
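The probability of detection (POD) quoted above, and the basic comparison of observed against BRDF-modelled background reflectance, can be sketched as follows. This is a minimal illustration; the 0.05 reflectance margin and all names are assumptions, not values from the paper:

```python
import numpy as np

def thin_cloud_mask(observed, background, margin=0.05):
    """Flag pixels whose observed reflectance exceeds the BRDF-modelled
    clear-sky background reflectance by more than a margin."""
    return (observed - background) > margin

def probability_of_detection(predicted, reference):
    """POD = hits / (hits + misses) against a reference (e.g. CALIPSO) mask."""
    hits = np.sum(predicted & reference)
    misses = np.sum(~predicted & reference)
    return hits / (hits + misses)
```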

  13. RECOVER: An Automated Cloud-Based Decision Support System for Post-fire Rehabilitation Planning

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Carroll, Mark; Weber, K. T.; Brown, Molly E.; Gill, Roger L.; Wooten, Margaret; May, J.; Serr, K.; Smith, E.; Goldsby, R.; et al.

    2014-01-01

    RECOVER is a site-specific decision support system that automatically brings together in a single analysis environment the information necessary for post-fire rehabilitation decision-making. After a major wildfire, law requires that the federal land management agencies certify a comprehensive plan for public safety, burned area stabilization, resource protection, and site recovery. These burned area emergency response (BAER) plans are a crucial part of our national response to wildfire disasters and depend heavily on data acquired from a variety of sources. Final plans are due within 21 days of control of a major wildfire and become the guiding document for managing the activities and budgets for all subsequent remediation efforts. There are few instances in the federal government where plans of such wide-ranging scope and importance are assembled on such short notice and translated into action more quickly. RECOVER has been designed in close collaboration with our agency partners and directly addresses their high-priority decision-making requirements. In response to a fire detection event, RECOVER uses the rapid resource allocation capabilities of cloud computing to automatically collect Earth observational data, derived decision products, and historic biophysical data so that when the fire is contained, BAER teams will have a complete and ready-to-use RECOVER dataset and GIS analysis environment customized for the target wildfire. Initial studies suggest that RECOVER can transform this information-intensive process by reducing from days to a matter of minutes the time required to assemble and deliver crucial wildfire-related data.

  14. RECOVER: An Automated, Cloud-Based Decision Support System for Post-Fire Rehabilitation Planning

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Carroll, M. L.; Weber, K. T.; Brown, M. E.; Gill, R. L.; Wooten, M.; May, J.; Serr, K.; Smith, E.; Goldsby, R.; Newtoff, K.; Bradford, K.; Doyle, C.; Volker, E.; Weber, S.

    2014-11-01

    RECOVER is a site-specific decision support system that automatically brings together in a single analysis environment the information necessary for post-fire rehabilitation decision-making. After a major wildfire, law requires that the federal land management agencies certify a comprehensive plan for public safety, burned area stabilization, resource protection, and site recovery. These burned area emergency response (BAER) plans are a crucial part of our national response to wildfire disasters and depend heavily on data acquired from a variety of sources. Final plans are due within 21 days of control of a major wildfire and become the guiding document for managing the activities and budgets for all subsequent remediation efforts. There are few instances in the federal government where plans of such wide-ranging scope and importance are assembled on such short notice and translated into action more quickly. RECOVER has been designed in close collaboration with our agency partners and directly addresses their high-priority decision-making requirements. In response to a fire detection event, RECOVER uses the rapid resource allocation capabilities of cloud computing to automatically collect Earth observational data, derived decision products, and historic biophysical data so that when the fire is contained, BAER teams will have a complete and ready-to-use RECOVER dataset and GIS analysis environment customized for the target wildfire. Initial studies suggest that RECOVER can transform this information-intensive process by reducing from days to a matter of minutes the time required to assemble and deliver crucial wildfire-related data.

  15. Automatic registration of iPhone images to laser point clouds of urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two types of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image, applying a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement purposes.

  16. Validity of association rules extracted by healthcare-data-mining.

    PubMed

    Takeuchi, Hiroshi; Kodama, Naoki

    2014-01-01

    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning the user's lifestyle and health condition embedded in their personal big data, by using healthcare-data-mining. This study verified that the rules extracted from daily time-series data stored over a half-year by volunteer users of this system are valid.
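The validity measures typically checked for such extracted association rules are support and confidence. A minimal sketch over item-set transactions; the function and variable names are illustrative, not from the paper:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of an association rule A -> C over a list
    of item sets (e.g. daily lifestyle/health events)."""
    n = len(transactions)
    n_a = sum(antecedent <= t for t in transactions)                  # A occurs
    n_ac = sum((antecedent | consequent) <= t for t in transactions)  # A and C occur
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

Rules whose support and confidence exceed chosen minimums are the candidates worth validating against the users' actual behaviour.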

  17. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limiting in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  18. VizieR Online Data Catalog: Star clusters automatically detected in the LMC (Bitsakis+, 2017)

    NASA Astrophysics Data System (ADS)

    Bitsakis, T.; Bonfini, P.; Gonzalez-Lopezlira, R. A.; Ramirez-Siordia, V. H.; Bruzual, G.; Charlot, S.; Maravelias, G.; Zaritsky, D.

    2018-03-01

    The archival data used in this work were acquired from several diverse large surveys, which mapped the Magellanic Clouds at various bands. Simons+ (2014AdSpR..53..939S) composed a mosaic using archival data from the Galaxy Evolution Explorer (GALEX) at the near-ultraviolet (NUV) band (λeff=2275Å). The mosaic covers an area of 15deg2 on the LMC. The central ~3x1deg2 of the LMC (the bar-region) was later observed by the Swift Ultraviolet-Optical Telescope (UVOT) Magellanic Clouds Survey (SUMAC; Siegel+ 2014AJ....148..131S). The optical data used here are from the Magellanic Cloud Photometric Survey (MCPS; Zaritsky+ 2004, J/AJ/128/1606). These authors observed the central 64deg2 of the LMC with 3.8-5.2 minute exposures in the Johnson U, B, V, and Gunn i filters of the Las Campanas Swope Telescope. Meixner+ (2006, J/AJ/132/2268) performed a uniform and unbiased imaging survey of the LMC (called Surveying the Agents of a Galaxy's Evolution, or SAGE), covering the central 7deg2 with both the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on-board the Spitzer Space Telescope. (1 data file).

  19. Towards a comprehensive knowledge of the star cluster population in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Piatti, A. E.

    2018-07-01

    The Small Magellanic Cloud (SMC) has recently been found to harbour an increase of more than 200 per cent in its known cluster population. Here, we provide solid evidence that this unprecedented number of clusters could be greatly overestimated. On the one hand, the fully automatic procedure used to identify such an enormous cluster candidate sample did not recover ˜50 per cent, on average, of the known relatively bright clusters located in the SMC main body. On the other hand, the number of new cluster candidates per time unit as a function of time is noticeably different from the intrinsic SMC cluster frequency (CF), which should not be the case if these new detections were genuine physical systems. We found additionally that the SMC CF varies spatially, in such a way that it resembles an outside-in process coupled with the effects of a relatively recent interaction with the Large Magellanic Cloud. By assuming that clusters and field stars share the same formation history, we showed for the first time that the cluster dissolution rate also depends on position in the galaxy. The cluster dissolution becomes higher as the concentration of galaxy mass increases or if external tidal forces are present.

  20. Comparison of Cloud and Aerosol Detection between CERES Edition 3 Cloud Mask and CALIPSO Version 2 Data Products

    NASA Astrophysics Data System (ADS)

    Trepte, Qing; Minnis, Patrick; Sun-Mack, Sunny; Trepte, Charles

    Clouds and aerosol play important roles in the global climate system. Accurately detecting their presence, altitude, and properties using satellite radiance measurements is a crucial first step in determining their influence on surface and top-of-atmosphere radiative fluxes. This paper presents a comparison analysis of a new version of the Clouds and Earth's Radiant Energy System (CERES) Edition 3 cloud detection algorithms using Aqua MODIS data with the recently released Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Version 2 Vertical Feature Mask (VFM). Improvements in CERES Edition 3 cloud mask include dust detection, thin cirrus tests, enhanced low cloud detection at night, and a smoother transition from mid-latitude to polar regions. For the CALIPSO Version 2 data set, changes to the lidar calibration can result in significant improvements to its identification of optically thick aerosol layers. The Aqua and CALIPSO satellites, part of the A-train satellite constellation, provide a unique opportunity for validating passive sensor cloud and aerosol detection using an active sensor. In this paper, individual comparison cases will be discussed for different types of clouds and aerosols over various surfaces, for daytime and nighttime conditions, and for regions ranging from the tropics to the poles. Examples will include an assessment of the CERES detection algorithm for optically thin cirrus, marine stratus, and polar night clouds as well as its ability to characterize Saharan dust plumes off the African coast. With the CALIPSO lidar's unique ability to probe the vertical structure of clouds and aerosol layers, it provides an excellent validation data set for cloud detection algorithms, especially for polar nighttime clouds.

  1. Remote sensing-based detection and quantification of roadway debris following natural disasters

    NASA Astrophysics Data System (ADS)

    Axel, Colin; van Aardt, Jan A. N.; Aros-Vera, Felipe; Holguín-Veras, José

    2016-05-01

    Rapid knowledge of road network conditions is vital to formulate an efficient emergency response plan following any major disaster. Fallen buildings, immobile vehicles, and other forms of debris often render roads impassable to responders. The status of roadways is generally determined through time and resource heavy methods, such as field surveys and manual interpretation of remotely sensed imagery. Airborne lidar systems provide an alternative, cost-effective option for performing network assessments. The 3D data can be collected quickly over a wide area and provide valuable insight about the geometry and structure of the scene. This paper presents a method for automatically detecting and characterizing debris in roadways using airborne lidar data. Points falling within the road extent are extracted from the point cloud and clustered into individual objects using region growing. Objects are classified as debris or non-debris using surface properties and contextual cues. Debris piles are reconstructed as surfaces using alpha shapes, from which an estimate of debris volume can be computed. Results using real lidar data collected after a natural disaster are presented. Initial results indicate that accurate debris maps can be automatically generated using the proposed method. These debris maps would be an invaluable asset to disaster management and emergency response teams attempting to reach survivors despite a crippled transportation network.
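A rough stand-in for the alpha-shape volume estimate described above is to rasterize a classified debris cluster to a height grid and integrate it. A minimal sketch; the function name and cell size are assumptions, not the authors' method:

```python
import numpy as np

def debris_volume(points, cell=0.25):
    """Approximate the volume of a debris point cluster: grid the XY
    plane and sum (max height in cell) * cell area over occupied cells."""
    xy, z = points[:, :2], points[:, 2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    volume = 0.0
    for i, j in set(map(tuple, ij)):
        in_cell = (ij[:, 0] == i) & (ij[:, 1] == j)
        volume += z[in_cell].max() * cell * cell
    return volume
```

Unlike an alpha-shape surface, this ignores overhangs and voids, which is why the paper reconstructs explicit surfaces for its estimates.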

  2. Permanent 3D laser scanning system for an active landslide in Gresten (Austria)

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Höfle, Bernhard; Hämmerle, Martin; Thiebes, Benni; Glade, Thomas

    2015-04-01

    Terrestrial laser scanners (TLS) have widely been used for high spatial resolution data acquisition of topographic features and geomorphic analyses. Existing applications encompass different landslides including rockfall, translational or rotational landslides and debris flow, but also coastal cliff erosion, braided river evolution and river bank erosion. The main advantages of TLS are (a) the high spatial sampling density of XYZ measurements (e.g. 1 point every 2-3 mm at 10 m distance), particularly in comparison with low-data-density monitoring techniques such as GNSS or total stations, (b) the millimeter-level accuracy and precision of the range measurement, yielding centimeter accuracy in the final DEM, and (c) the highly dense, area-wide scanning that makes it possible to look through vegetation and measure the bare ground. One of its main constraints is the temporal resolution of the acquired data due to labor costs and time requirements for field campaigns. Thus, repeat measurements are generally performed only episodically. However, for an increased scientific understanding of the processes as well as for early warning purposes, we present a novel permanent 3D monitoring setup that increases the temporal resolution of TLS measurements. This accounts for different potential monitoring deliverables such as volumetric calculations, spatio-temporal movement patterns, predictions and even alerting. The system was installed at the active Salcher landslide in Gresten (Austria), situated in the transition zone of the Gresten Klippenbelt (Helvetic) and the Flyschzone (Penninic). The characteristic lithofacies are the Gresten Beds of Early Jurassic age, covered by a sequence of marly and silty beds with intercalated sandy limestones. Permanent data acquisition can be implemented into our workflow with any long-range TLS system offering fully automated capturing. We utilize an Optech ILRIS-3D scanner.
The time interval between two scans is currently set to 24 hours, but can be set as short as the duration of a full scan. The field of view (FoV) from the fixed scanner position covers most of the active landslide surface (with a maximum distance of 300 m). To initiate the scan acquisition, command line tools are run automatically on an attached notebook computer at the given time interval. The acquired 3D point cloud (including signal intensity recordings) is then sent to a server via automatic internet transfer. Each new point cloud is automatically compared with an initial 'zero' survey. Furthermore, highly detailed reference surveys are performed several times per year with the most recent Riegl VZ-6000 scanner from multiple scan positions in order to provide high-quality independent ground truth. The change detection is carried out by fully automatic batch processing without the need for manual interaction. One of the applied change detection approaches is the M3C2 algorithm (Multiscale Model to Model Cloud Comparison), which is available as open source software. The field site in Gresten also contains several other monitoring systems, such as inclinometers and piezometers, that complement the interpretation of the obtained TLS data. Future analysis will include the combination of surface movement with subsurface hydrology as well as with climatic data obtained from an on-site climate station.
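The per-epoch comparison against the 'zero' survey can be illustrated with a much simpler cloud-to-cloud distance check; this is not the M3C2 algorithm used at the site, just a brute-force sketch with assumed names and threshold:

```python
import numpy as np

def changed_points(reference, current, threshold=0.05):
    """Flag points of the current scan whose distance to the nearest
    point of the reference ('zero') scan exceeds a threshold (metres)."""
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1) > threshold
```

For real scans, a spatial index (k-d tree) and normal-oriented distances, as in M3C2, replace the brute-force distance matrix, which scales quadratically with the number of points.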

  3. Analysis of the VIIRS cloud mask, comparison with the NAVOCEANO cloud mask, and how they complement each other

    NASA Astrophysics Data System (ADS)

    Cayula, Jean-François P.; May, Douglas A.; McKenzie, Bruce D.

    2014-05-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Mask (VCM) Intermediate Product (IP) has been developed for use with Suomi National Polar-orbiting Partnership (NPP) VIIRS Environmental Data Record (EDR) products. In particular, the VIIRS Sea Surface Temperature (SST) EDR relies on VCM to identify cloud contaminated observations. Unfortunately, VCM does not appear to perform as well as cloud detection algorithms for SST. This may be due to similar but different goals of the two algorithms. VCM is concerned with detecting clouds while SST is interested in identifying clear observations. The result is that in undetermined cases VCM defaults to "clear," while the SST cloud detection defaults to "cloud." This problem is further compounded because classic SST cloud detection often flags as "cloud" all types of corrupted data, thus making a comparison with VCM difficult. The Naval Oceanographic Office (NAVOCEANO), which operationally produces a VIIRS SST product, relies on cloud detection from the NAVOCEANO Cloud Mask (NCM), adapted from cloud detection schemes designed for SST processing. To analyze VCM, the NAVOCEANO SST process was modified to attach the VCM flags to all SST retrievals. Global statistics are computed for both day and night data. The cases where NCM and/or VCM tag data as cloud-contaminated or clear can then be investigated. By analyzing the VCM individual test flags in conjunction with the status of NCM, areas where VCM can complement NCM are identified.

  4. A cloud computing based platform for sleep behavior and chronic diseases collaborative research.

    PubMed

    Kuo, Mu-Hsing; Borycki, Elizabeth; Kushniruk, Andre; Huang, Yueh-Min; Hung, Shu-Hui

    2014-01-01

    The objective of this study is to propose a cloud-computing-based platform for sleep behavior and chronic disease collaborative research. The platform consists of two main components: (1) a sensing bed sheet with textile sensors to automatically record the patient's sleep behaviors and vital signs, and (2) a service-oriented cloud computing architecture (SOCCA) that provides a data repository and allows for sharing and analysis of collected data. We also describe our systematic approach to implementing the SOCCA. We believe that the new cloud-based platform can provide nurse and other health professional researchers located in differing geographic locations with a cost-effective, flexible, secure and privacy-preserving research environment.

  5. Overview of MPLNET Version 3 Cloud Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip

    2016-01-01

    The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occurs for high clouds (above 5 km, above MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.
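The first of the two methods (vertical signal derivatives for low-level clouds) can be caricatured in a few lines. The threshold value and profile shape are invented for illustration; the real algorithm works on normalized level 1 signals with multitemporal averaging and uncertainty handling:

```python
import numpy as np

def lowest_cloud_base(profile, heights, grad_threshold=0.05):
    """Return the lowest height where the vertical derivative of the
    lidar signal exceeds a positive threshold (a cloud-base candidate),
    or None if no level qualifies."""
    grad = np.diff(profile) / np.diff(heights)
    hits = np.nonzero(grad > grad_threshold)[0]
    return float(heights[hits[0] + 1]) if hits.size else None
```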

  6. Grammar-Supported 3D Indoor Reconstruction from Point Clouds for As-Built BIM

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms as it is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. By this, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries to building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  7. Environmental Support to Space Launch

    DTIC Science & Technology

    2006-05-31

in the interest of scientific and technical information exchange, and its publication does not constitute the Government’s approval or disapproval of...in this study as there were no occurrences. Tornado/Waterspout ... Winds GTE 60 Knots (Convective) ...and Merceret (2004) developed an automatic process to determine cloud boundaries using cloud physics and ground-based radar data. It performs an

  8. GOES Cloud Detection at the Global Hydrology and Climate Center

    NASA Technical Reports Server (NTRS)

    Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)

    2002-01-01

The bi-spectral threshold (BTH) method for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method, with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud top pressure assignment. The method capitalizes on cloud and surface emissivity differences between the GOES 3.9 and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection, and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally-run regional forecast model. This paper and supplemental poster present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud top pressure is evaluated by comparison with the NESDIS operational CO2 slicing approach.
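    The core bi-spectral test can be sketched in a few lines: flag a pixel cloudy when its 3.9 minus 10.7 micrometer brightness-temperature difference departs from a clear-sky composite by more than a day- or night-specific threshold. The channel values, composite, and thresholds below are invented for illustration and are not the operational GHCC settings.

```python
import numpy as np

def bth_cloud_mask(bt39, bt107, composite_diff, day,
                   day_thresh=2.0, night_thresh=1.0):
    """Illustrative bi-spectral threshold (BTH) test: a pixel is cloudy
    when BT(3.9um) - BT(10.7um) departs from the clear-sky composite
    difference by more than a threshold. Thresholds are placeholders."""
    diff = bt39 - bt107
    thresh = day_thresh if day else night_thresh
    return np.abs(diff - composite_diff) > thresh

bt39  = np.array([295.0, 280.0])   # K, two example pixels
bt107 = np.array([294.5, 270.0])
clear = np.array([0.4, 0.4])       # 20-day minimum composite difference
print(bth_cloud_mask(bt39, bt107, clear, day=True))
```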

  9. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
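    The PCA-driven local dimensionality analysis mentioned above is commonly built from the sorted eigenvalues of a point neighborhood's covariance matrix. A minimal sketch, using the standard linearity/planarity/scattering feature construction (not necessarily the exact IQmulus implementation):

```python
import numpy as np

def dimensionality(neighborhood):
    """Classify a local point neighborhood from covariance eigenvalues
    l1 >= l2 >= l3: one dominant direction -> linear, two -> planar,
    three comparable directions -> scatter (e.g. tree foliage).
    The argmax decision rule here is a simplification."""
    cov = np.cov(neighborhood.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    linearity  = (l1 - l2) / l1
    planarity  = (l2 - l3) / l1
    scattering = l3 / l1
    return ("linear", "planar", "scatter")[
        int(np.argmax([linearity, planarity, scattering]))]

rng = np.random.default_rng(0)
foliage = rng.normal(size=(200, 3))   # points scattering in 3 directions
print(dimensionality(foliage))
```

    Points classified as "scatter" would feed the tree class in the workflow above, before being separated into individual trees.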

  10. Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Babacan, K.; Chen, L.; Sohn, G.

    2017-11-01

As Building Information Modelling (BIM) thrives, geometry is no longer sufficient; an ever-increasing variety of semantic information is needed to express an indoor model adequately. On the other hand, for existing buildings, automatically generating semantically enriched BIM from point cloud data is in its infancy. Previous research to enhance the semantic content relies on frameworks in which specific rules and/or features are hand-coded by specialists. These methods inherently lack generalization and easily break in different circumstances. On this account, a generalized framework is urgently needed to automatically and accurately generate semantic information. Therefore we propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the high number of training samples needed to initiate a convolutional neural network architecture. Feedforward propagation is used to perform classification at the voxel level, achieving semantic segmentation. The method is tested both on a mobile laser scanner point cloud and on larger-scale synthetically generated data. We also demonstrate a case study in which our method can be effectively used to leverage the extraction of planar surfaces in challenging cluttered indoor environments.
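    The volumetric representation that feeds such a network starts with voxelization of the raw points. A simplified stand-in (grid extents, resolution, and the binary occupancy encoding are illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    """Map points to integer voxel indices and build a binary occupancy
    grid; a CNN would consume fixed-size crops of such a grid."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx.T)] = True       # mark occupied voxels
    return grid

pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.95, 0.95, 0.95]])
g = voxelize(pts, voxel_size=0.1)
print(g.shape, int(g.sum()))   # two points fall in the same voxel
```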

  11. Testing continuous earthquake detection and location in Alentejo (South Portugal) by waveform coherency analysis

    NASA Astrophysics Data System (ADS)

    Matos, Catarina; Grigoli, Francesco; Cesca, Simone; Custódio, Susana

    2015-04-01

In the last decade, a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, has covered Portugal. This extraordinary network coverage now enables the computation of a high-resolution image of the seismicity of Portugal, which in turn will shed light on its seismotectonics. The large data volumes available cannot be analyzed by traditional, time-consuming manual location procedures. In this presentation we show first results on the automatic detection and location of earthquakes that occurred in a selected region in the south of Portugal. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets, while at the same time detecting low-magnitude earthquakes (i.e., lowering the detection threshold). We present a modified version of the automatic seismic event location by waveform coherency analysis developed by Grigoli et al. (2013, 2014), designed to perform earthquake detection and location on continuous data. Event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For P phases we use a CF based on the vertical energy trace, while for S phases we use a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale, 1991). Seismic event detection and location is obtained by performing waveform coherence analysis scanning different hypocentral coordinates. We apply this technique to earthquakes in the Alentejo region (South Portugal), taking advantage of a small-aperture seismic network installed in the south of Portugal for two years (2010-2011) during the DOCTAR experiment. In addition to the good network coverage, the Alentejo region was chosen for its simple tectonic setting and also because the relationship between seismicity, tectonics and local lithospheric structure is intriguing and still poorly understood. 
Inside the target area the seismicity clusters mainly within two clouds, oriented SE-NW and SW-NE. Should these clusters be seen as the expression of local active faults? Are they associated with lithological transitions? Or do the locations obtained from the previously sparse permanent network have large errors that generate spurious clusters? We present preliminary results from this study and compare them with manual locations. This work is supported by project QuakeLoc, reference PTDC/GEO-FIQ/3522/2012.
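    The STA/LTA detection step described above can be sketched generically. The characteristic function below is a synthetic energy trace, not the vertical-energy or covariance-eigenvalue CFs of the paper, and the window lengths are arbitrary sample counts.

```python
import numpy as np

def sta_lta(cf, sta_len, lta_len):
    """Classic STA/LTA ratio over a characteristic function (CF).
    Simple loop version for clarity; production codes use recursive or
    vectorized forms."""
    ratio = np.zeros_like(cf)
    for i in range(lta_len, len(cf)):
        sta = cf[i - sta_len:i].mean()   # short-term average
        lta = cf[i - lta_len:i].mean()   # long-term average
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

rng = np.random.default_rng(1)
cf = rng.random(500) * 0.1        # noise-level energy CF
cf[300:320] += 5.0                # transient (event) energy
r = sta_lta(cf, sta_len=10, lta_len=100)
print(r.argmax())                 # peak ratio near the event onset
```

    A detection is declared when the ratio exceeds a trigger threshold; the coherency-analysis stage then scans candidate hypocentral coordinates.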

  12. Using SPOT–5 HRG Data in Panchromatic Mode for Operational Detection of Small Ships in Tropical Area

    PubMed Central

    Corbane, Christina; Marre, Fabrice; Petit, Michel

    2008-01-01

Nowadays, there is a growing interest in applications of space remote sensing systems for maritime surveillance, which includes, among others, traffic surveillance, maritime security, illegal fisheries survey, oil discharge and sea pollution monitoring. Within the framework of several French and European projects, an algorithm for automatic ship detection from SPOT–5 HRG data was developed to complement existing fishery control measures, in particular the Vessel Monitoring System. The algorithm focused on feature–based analysis of satellite imagery. Genetic algorithms and neural networks were used to deal with the feature–borne information. Based on the described approach, a first prototype was designed to classify small targets such as shrimp boats and was tested on a panchromatic SPOT–5, 5–m resolution product, taking into account the environmental and fishing context. The ability to detect shrimp boats with satisfactory detection rates is an indicator of the robustness of the algorithm. Still, the benchmark revealed problems related to increased false alarm rates on particular types of images with a high percentage of cloud cover and a sea-cluttered background. PMID:27879859

  13. Data and knowledge in medical distributed applications.

    PubMed

    Serban, Alexandru; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2014-01-01

Building a clinical decision support system (CDSS) capable of automatically collecting, processing and diagnosing patient data, based on information, symptoms and investigations, is one of the current challenges for researchers and medical science. The purpose of the current study is to design a cloud-based CDSS to improve patient safety, quality of care and organizational efficiency. It presents the design of a cloud-based application system using a medically based approach, which covers the diagnosis of different diseases, differentiated by the most important pathologies. Using online questionnaires, traditional and new data will be collected from patients. After data input, the application will formulate a presumptive diagnosis and will direct patients to the corresponding department. A questionnaire will dynamically ask questions about the interface and functionality improvements. Based on the answers, the functionality of the system and the user interface will be improved considering the real needs expressed by the end-users. The cloud-based CDSS, as a useful tool for patients, physicians and healthcare providers, involves computer support in the diagnosis of different pathologies and an accurate automatic differential diagnostic system.

  14. Modeling the Virtual Machine Launching Overhead under Fermicloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping

FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
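    As a rough illustration of what a launch-overhead reference model might look like, one could fit a low-order polynomial to (utilization, overhead) samples and query it when scheduling. The numbers below are invented, not FermiCloud operational data, and the paper's actual model may take a different form.

```python
import numpy as np

# Hypothetical samples: host CPU utilization (fraction) vs observed VM
# launch overhead (seconds); overhead grows non-linearly with load.
utilization = np.array([0.1, 0.2, 0.35, 0.5, 0.65, 0.8, 0.9])
overhead_s  = np.array([20., 22., 26., 33., 45., 70., 110.])

# Least-squares quadratic as a simple reference model
coeffs = np.polyfit(utilization, overhead_s, deg=2)
predict = np.poly1d(coeffs)
print(round(float(predict(0.75)), 1))   # estimated overhead at 75% load
```

    A scheduler could compare such predictions across candidate hosts before deciding where to burst.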

  15. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    NASA Astrophysics Data System (ADS)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by a region growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
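    The region growing step can be sketched as a breadth-first flood fill over occupied voxels, starting from the trajectory-derived seeds. This is an illustrative fragment only; the full pipeline's handling of doors, furniture, and dynamic objects is omitted.

```python
from collections import deque

def grow_region(occupied, seeds):
    """Collect all 6-connected occupied voxels reachable from the seed
    voxels (which the paper derives from the scanner trajectory)."""
    region, queue = set(), deque(s for s in seeds if s in occupied)
    while queue:
        v = queue.popleft()
        if v in region:
            continue
        region.add(v)
        x, y, z = v
        for n in ((x+1,y,z), (x-1,y,z), (x,y+1,z),
                  (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            if n in occupied and n not in region:
                queue.append(n)
    return region

floor = {(x, y, 0) for x in range(5) for y in range(3)}   # one floor slab
island = {(10, 10, 0)}                                    # disconnected voxel
print(len(grow_region(floor | island, seeds=[(0, 0, 0)])))  # 15; island not reached
```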

  16. Volcanic ash and meteorological clouds detection by neural networks

    NASA Astrophysics Data System (ADS)

Picchiani, Matteo; Del Frate, Fabio; Corradini, Stefano; Piscini, Alessandro; Merucci, Luca; Chini, Marco

    2014-05-01

The recent eruptions of the Icelandic Eyjafjallajokull and Grímsvötn volcanoes, which occurred in 2010 and 2011 respectively, have highlighted the necessity of increasing the accuracy of ash detection and retrieval. Following the evolution of the ash plume is crucial for aviation security; indeed, the safety of passengers may depend on the accuracy of the algorithms applied to identify the presence of ash. The difference between the brightness temperatures (BTD) of thermal infrared channels centered around 11 µm and 12 µm is suitable for distinguishing the ash plume from meteorological clouds [Prata, 1989] in satellite images. However, in some conditions an accurate interpretation is essential to avoid false alarms. In particular, Corradini et al. (2008) developed a correction procedure to remove the atmospheric water vapour effect that tends to mask, or cancel out, the ash plume signature in the BTD. Another relevant issue is the height of the meteorological clouds, since their brightness temperatures are affected by this parameter. Moreover, the overlapping of ash plume and meteorological clouds may affect the retrieval result, since the latter depends on the physical temperature of the surface below the ash cloud. For this reason the correct identification of such conditions, which can require proper interpretation by an analyst, is crucial to properly address the inversion of ash parameters. In this work a fast and automatic procedure based on multispectral data from MODIS and a neural network algorithm is applied to the recent eruptions of the Eyjafjallajokull and Grímsvötn volcanoes. A similar approach was already tested, with encouraging results, in a previous work [Picchiani et al., 2011]. The algorithm is now improved in order to distinguish meteorological clouds from the ash plume, dividing the latter into ash above sea and ash overlapping meteorological clouds. 
The results have been compared to the BTD ones, properly interpreted considering the information from the visible and infrared channels. The comparison shows that the proposed methodology achieves very promising performance: an overall accuracy greater than 87% can be obtained iteratively, classifying new images without human interaction. References: Corradini, S., Spinetti, C., Carboni, E., Tirelli, C., Buongiorno, M. F., Pugnaghi, S., and Gangale, G.; "Mt. Etna tropospheric ash retrieval and sensitivity analysis using Moderate Resolution Imaging Spectroradiometer measurements", J. Appl. Remote Sens., 2, 023550, doi:10.1117/12.823215, 2008. Prata, A. J., "Infrared radiative transfer calculations for volcanic ash clouds", Geophys. Res. Lett., Vol. 16, No. 11, pp. 1293-1296, 1989. Picchiani, M., Chini, M., Corradini, S., Merucci, L., Sellitto, P., Del Frate, F., and Stramondo, S., "Volcanic ash detection and retrievals from MODIS data by means of neural networks", Atmos. Meas. Tech., 4, 2619-2631, doi:10.5194/amt-4-2619-2011, 2011.
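    The classic split-window rule underlying the BTD reference [Prata, 1989] can be stated in a few lines: silicate ash reverses the sign of the 11 minus 12 µm brightness-temperature difference relative to water/ice clouds. The threshold below is illustrative, and the paper's contribution is precisely to replace this simple rule with a neural network for the ambiguous cases it describes.

```python
def classify_pixel(bt11, bt12, btd_ash_threshold=-0.5):
    """Split-window rule: ash tends to make BT(11um) - BT(12um) negative,
    meteorological clouds positive. Threshold is a placeholder; water
    vapour and cloud-height effects (see abstract) break this in practice."""
    btd = bt11 - bt12
    if btd < btd_ash_threshold:
        return "ash"
    return "meteorological/clear"

print(classify_pixel(265.0, 267.0))   # negative BTD, ash-like
print(classify_pixel(240.0, 238.5))   # positive BTD, cloud-like
```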

  17. Automatically Determining Scale Within Unstructured Point Clouds

    NASA Astrophysics Data System (ADS)

    Kadamen, Jayren; Sithole, George

    2016-06-01

Three dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in these models, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created using imagery. The investigation aimed to show two things: (a) that the size of objects, especially the height off the ground, is consistent within an environment, and (b) that based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created using imagery. The most notable finding was that the detection of objects can be more readily done by studying the ratio between the dimensions of objects that have their dimensions defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the difference between generalised and actual dimensions of objects was assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
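    Once an object of known physical size is identified, the scale recovery itself is a one-line ratio. The numbers below are illustrative assumptions (a generic desk height), not measurements from the paper's test room.

```python
def scale_from_reference(model_height, real_height_m):
    """Scale factor for an arbitrary-scale reconstruction, derived from
    one object whose physical height is assumed known."""
    return real_height_m / model_height

# Desk top detected at 1.85 model units; assumed generic desk height 0.74 m
s = scale_from_reference(1.85, 0.74)
print(round(s, 3))            # model units -> metres
print(round(1.85 * s, 2))     # the desk maps back to 0.74 m
```

    In practice, averaging over several identified objects (chairs, desks, door handles) would reduce the sensitivity to any single misidentification.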

  18. Using Information From Prior Satellite Scans to Improve Cloud Detection Near the Day-Night Terminator

    NASA Technical Reports Server (NTRS)

Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Douglas A.

    2012-01-01

With geostationary satellite data it is possible to have a continuous record of diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to model accurately for high solar zenith angles, where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically more accurately modeled outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.
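    One way to use prior-scan information, in the spirit of the algorithm above, is a nearest-reference test: label a terminator pixel cloudy when its IR brightness temperature is closer to the cloudy reference remembered from earlier scans than to the clear reference. This is a deliberate simplification; the actual algorithm also carries forward cloud fraction, and all values below are invented.

```python
import numpy as np

def terminator_cloud_test(bt, prior_clear_bt, prior_cloudy_bt):
    """Flag a pixel cloudy when its IR brightness temperature is closer
    to the cloudy reference from previous scans than to the clear one."""
    return np.abs(bt - prior_cloudy_bt) < np.abs(bt - prior_clear_bt)

bt         = np.array([288.0, 252.0])   # two terminator pixels (K)
clear_ref  = np.array([290.0, 290.0])   # clear BT from previous scans
cloudy_ref = np.array([250.0, 250.0])   # cloudy BT from previous scans
print(terminator_cloud_test(bt, clear_ref, cloudy_ref))
```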

  19. Global cloud top height retrieval using SCIAMACHY limb spectra: model studies and first results

    NASA Astrophysics Data System (ADS)

    Eichmann, Kai-Uwe; Lelli, Luca; von Savigny, Christian; Sembhi, Harjinder; Burrows, John P.

    2016-03-01

    Cloud top heights (CTHs) are retrieved for the period 1 January 2003 to 7 April 2012 using height-resolved limb spectra measured with the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) on board ENVISAT (ENVIronmental SATellite). In this study, we present the retrieval code SCODA (SCIAMACHY cloud detection algorithm) based on a colour index method and test the accuracy of the retrieved CTHs in comparison to other methods. Sensitivity studies using the radiative transfer model SCIATRAN show that the method is capable of detecting cloud tops down to about 5 km and very thin cirrus clouds up to the tropopause. Volcanic particles can be detected that occasionally reach the lower stratosphere. Upper tropospheric ice clouds are observable for a nadir cloud optical thickness (COT) ≥ 0.01, which is in the subvisual range. This detection sensitivity decreases towards the lowermost troposphere. The COT detection limit for a water cloud top height of 5 km is roughly 0.1. This value is much lower than thresholds reported for passive cloud detection methods in nadir-viewing direction. Low clouds at 2 to 3 km can only be retrieved under very clean atmospheric conditions, as light scattering of aerosol particles interferes with the cloud particle scattering. We compare co-located SCIAMACHY limb and nadir cloud parameters that are retrieved with the Semi-Analytical CloUd Retrieval Algorithm (SACURA). Only opaque clouds (τN,c > 5) are detected with the nadir passive retrieval technique in the UV-visible and infrared wavelength ranges. Thus, due to the frequent occurrence of thin clouds and subvisual cirrus clouds in the tropics, larger CTH deviations are detected between both viewing geometries. Zonal mean CTH differences can be as high as 4 km in the tropics. The agreement in global cloud fields is sufficiently good. However, the land-sea contrast, as seen in nadir cloud occurrence frequency distributions, is not observed in limb geometry. 
Co-located cloud top height measurements from the limb-viewing Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on ENVISAT are compared for the period from January 2008 to March 2012. Global CTH agreement of about 1 km is observed, which is smaller than the vertical field of view of both instruments. Lower stratospheric aerosols from volcanic eruptions occasionally interfere with the cloud retrieval and inhibit the detection of tropospheric clouds. The aerosol impact on cloud retrievals was studied for the volcanoes Kasatochi (August 2008), Sarychev Peak (June 2009), and Nabro (June 2011). Long-lasting aerosol scattering is detected after these events in the Northern Hemisphere for heights above 12.5 km in tropical and polar latitudes. Aerosol top heights up to about 22 km are found in 2009, and the enhanced lower stratospheric aerosol layer persisted for about 7 months. In August 2009 about 82 % of the lower stratosphere between 30 and 70° N was filled with scattering particles, and nearly 50 % in October 2008.

  20. Computational analysis of PET by AIBL (CapAIBL): a cloud-based processing pipeline for the quantification of PET images

    NASA Astrophysics Data System (ADS)

    Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier

    2015-03-01

    With the advances of PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with surface projection of the β-Amyloid deposition is automatically generated at the end of the pipeline and emailed to the user.
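    The validation metric quoted above, a mean absolute percentage error under 5%, is straightforward to compute. The values below are synthetic quantification scores, not CapAIBL results.

```python
import numpy as np

def mean_abs_percentage_error(pet_only, mr_based):
    """Mean absolute percentage error of PET-only quantification against
    the MR-based reference values."""
    pet_only, mr_based = np.asarray(pet_only), np.asarray(mr_based)
    return float(np.mean(np.abs(pet_only - mr_based) / np.abs(mr_based)) * 100.0)

# Three hypothetical global quantification values per method
print(round(mean_abs_percentage_error([1.02, 1.48, 2.10],
                                      [1.00, 1.50, 2.00]), 2))  # 2.78
```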

  1. Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization

    NASA Astrophysics Data System (ADS)

    Repola, L.; Memmolo, R.; Signoretti, D.

    2015-04-01

In the research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system aimed at the survey of submerged archaeological sites, integrable with standard systems for geomorphological detection of the coast, has been developed. The project involves the construction of hardware, consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and software for the production of point clouds and the initial processing of data. The software has features for stereoscopic vision system calibration, reduction of noise and of distortion in underwater captured images, searching for corresponding points in stereoscopic images using stereo-matching algorithms (dense and sparse), and point cloud generation and filtering. Only after various calibration and survey tests, carried out during the excavations envisaged in the project, was mastery of the methods for efficient acquisition of data achieved. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of partial models is under development as a useful aid for the study and musealization of sites.

  2. Building damage assessment using airborne lidar

    NASA Astrophysics Data System (ADS)

    Axel, Colin; van Aardt, Jan

    2017-10-01

The assessment of building damage following a natural disaster is a crucial step in determining the impact of the event itself and gauging reconstruction needs. Automatic methods for deriving damage maps from remotely sensed data are preferred, since they are regarded as being rapid and objective. We propose an algorithm for performing unsupervised building segmentation and damage assessment using airborne light detection and ranging (lidar) data. Local surface properties, including normal vectors and curvature, were used along with region growing to segment individual buildings in lidar point clouds. Damaged building candidates were identified based on rooftop inclination angle, and then damage was assessed using planarity and point height metrics. Validation of the building segmentation and damage assessment techniques was performed using airborne lidar data collected after the Haiti earthquake of 2010. Building segmentation and damage assessment accuracies of 93.8% and 78.9%, respectively, were obtained using lidar point clouds and expert damage assessments of 1953 buildings in heavily damaged regions. We believe this research presents an indication of the utility of airborne lidar remote sensing for increasing the efficiency and speed at which emergency response operations are performed.
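    The planarity metric for a rooftop can be sketched as the RMS residual of a least-squares plane fit: an intact roof plane fits tightly, while rubble does not. This shows only the planarity part; the paper's damage decision also uses point-height metrics and thresholds not reproduced here.

```python
import numpy as np

def rooftop_planarity(points):
    """RMS distance of rooftop points to their best-fit plane
    z = a*x + b*y + c (linear least squares). Large residuals suggest a
    collapsed or damaged roof."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(100, 2))
intact = np.c_[xy, 0.3 * xy[:, 0] + 2.0]            # flat pitched roof
rubble = np.c_[xy, rng.uniform(0, 2, size=100)]     # collapsed roof
print(rooftop_planarity(intact) < rooftop_planarity(rubble))
```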

  3. Through thick and thin: quantitative classification of photometric observing conditions on Paranal

    NASA Astrophysics Data System (ADS)

    Kerber, Florian; Querel, Richard R.; Neureiter, Bianca; Hanuschik, Reinhard

    2016-07-01

    A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO's Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.
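    A toy stand-in for the time-series diagnostic above: photometric conditions show a stable IR sky brightness temperature, while even thin cirrus raises its short-term variability. The window length, limit, and synthetic series below are illustrative assumptions, not the LHATPRO operational analysis.

```python
import numpy as np

def photometric_flag(sky_bt, window=20, std_limit=0.2):
    """Flag a sky-brightness-temperature series as photometric when no
    sliding window exceeds a variability limit (values are placeholders)."""
    stds = np.array([sky_bt[i:i + window].std()
                     for i in range(len(sky_bt) - window)])
    return "photometric" if stds.max() < std_limit else "clouds present"

rng = np.random.default_rng(3)
clear = 220.0 + 0.05 * rng.standard_normal(200)        # stable clear sky
cirrus = clear.copy()
cirrus[100:140] += 2.0 * rng.random(40)                # passing thin cirrus
print(photometric_flag(clear), "|", photometric_flag(cirrus))
```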

  4. Cloud Overlapping Detection Algorithm Using Solar and IR Wavelengths With GOES Data Over ARM/SGP Site

    NASA Technical Reports Server (NTRS)

    Kawamoto, Kazuaki; Minnis, Patrick; Smith, William L., Jr.

    2001-01-01

One of the most perplexing problems in satellite cloud remote sensing is the overlapping of cloud layers. Although most techniques assume a 1-layer cloud system in a given retrieval of cloud properties, many observations are affected by radiation from more than one cloud layer. As such, cloud overlap can cause errors in the retrieval of many properties including cloud height, optical depth, phase, and particle size. A variety of methods have been developed to identify overlapped clouds in a given satellite imager pixel. Baum et al. (1995) used CO2 slicing and a spatial coherence method to demonstrate a possible analysis method for nighttime detection of multilayered clouds. Jin and Rossow (1997) also used a multispectral CO2 slicing technique for a global analysis of overlapped cloud amount. Lin et al. (1999) used a combination of infrared, visible, and microwave data to detect overlapped clouds over water. Recently, Baum and Spinhirne (2000) proposed a 1.6 and 11 micron bispectral threshold method. While all of these methods have made progress in solving this stubborn problem, none have yet proven satisfactory for continuous and consistent monitoring of multilayer cloud systems. It is clear that detection of overlapping clouds from passive instruments such as satellite radiometers is in an immature stage of development and requires additional research. Overlapped cloud systems also affect the retrievals of cloud properties over the ARM domains (e.g., Minnis et al. 1998) and hence should be identified as accurately as possible. To reach this goal, it is necessary to determine which information can be exploited for detecting multilayered clouds from operational meteorological satellite data used by ARM. 
This paper examines the potential information available in spectral data from the Geostationary Operational Environmental Satellite (GOES) imager and the NOAA Advanced Very High Resolution Radiometer (AVHRR) used over the ARM SGP and NSA sites to study the capability of detecting overlapping clouds.

  5. Cloud Overlapping Detection Algorithm Using Solar and IR Wavelengths with GOES Data Over ARM/SGP Site

    NASA Technical Reports Server (NTRS)

    Kawamoto, K.; Minnis, P.; Smith, W. L., Jr.

    2001-01-01

    One of the most perplexing problems in satellite cloud remote sensing is the overlapping of cloud layers. Although most techniques assume a one-layer cloud system in a given retrieval of cloud properties, many observations are affected by radiation from more than one cloud layer. As such, cloud overlap can cause errors in the retrieval of many properties including cloud height, optical depth, phase, and particle size. A variety of methods have been developed to identify overlapped clouds in a given satellite imager pixel. Baum et al. used CO2 slicing and a spatial coherence method to demonstrate a possible analysis method for nighttime detection of multilayered clouds. Jin and Rossow also used a multispectral CO2 slicing technique for a global analysis of overlapped cloud amount. Lin et al. used a combination of infrared (IR), visible (VIS), and microwave data to detect overlapped clouds over water. Recently, Baum and Spinhirne proposed a 1.6 and 11 micron bispectral threshold method. While all of these methods have made progress in solving this stubborn problem, none have yet proven satisfactory for continuous and consistent monitoring of multilayer cloud systems. It is clear that detection of overlapping clouds from passive instruments such as satellite radiometers is in an immature stage of development and requires additional research. Overlapped cloud systems also affect the retrievals of cloud properties over the Atmospheric Radiation Measurement (ARM) domains and hence should be identified as accurately as possible. To reach this goal, it is necessary to determine which information can be exploited for detecting multilayered clouds from operational meteorological satellite data used by ARM.
This paper examines the potential information available in the spectral data from the Geostationary Operational Environmental Satellite (GOES) imager and the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) used over the ARM Program's Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites to study the capability of detecting overlapping clouds.

  6. 46 CFR 161.002-9 - Automatic fire detecting system, power supply.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Automatic fire detecting system, power supply. 161.002-9 Section 161.002-9 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) EQUIPMENT...-9 Automatic fire detecting system, power supply. The power supply for an automatic fire detecting...

  7. 46 CFR 161.002-9 - Automatic fire detecting system, power supply.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Automatic fire detecting system, power supply. 161.002-9 Section 161.002-9 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) EQUIPMENT...-9 Automatic fire detecting system, power supply. The power supply for an automatic fire detecting...

  8. A Stabilizing Feedback Between Cloud Radiative Effects and Greenland Surface Melt: Verification From Multi-year Automatic Weather Station Measurements

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Wang, W.; van As, D.

    2017-12-01

    Clouds have strong impacts on Greenland's surface melt through their interaction with the dry atmosphere and reflective surfaces. However, their effects are uncertain due to the lack of in situ observations. To better quantify cloud radiative effects (CRE) in Greenland, we analyze and interpret multi-year radiation measurements from 30 automatic weather stations encompassing a broad range of climatological and topographical conditions. During the melt season, clouds warm the surface over most of Greenland, meaning the longwave greenhouse effect outweighs the shortwave shading effect; on the other hand, the spatial variability of net (longwave and shortwave) CRE is dominated by shortwave CRE and in turn by surface albedo, which controls the potential absorption of solar radiation when clouds are absent. The net warming effect decreases with shortwave CRE from high to low altitudes and from north to south (Fig. 1). The spatial correlation between albedo and net CRE is strong (r=0.93, p<<0.01). In the accumulation zone, the net CRE seasonal trend is controlled by longwave CRE associated with cloud fraction and liquid water content. It becomes stronger from May to July and stays constant in August. In the ablation zone, albedo determines the net CRE seasonal trend, which decreases from May to July and increases afterwards. On an hourly timescale, we find two distinct radiative states in Greenland (Fig. 2). The clear state is characterized by clear-sky conditions or thin clouds, when albedo and solar zenith angle (SZA) correlate only weakly with CRE. The cloudy state is characterized by opaque clouds, when the combination of albedo and SZA strongly correlates with CRE (r=0.85, p<0.01). 
Although cloud properties intrinsically affect CRE, the large melt-season variability of these two non-cloud factors, albedo and solar zenith angle, explains the majority of the CRE variation: in its spatial distribution, in its seasonal trend in the ablation zone, and in its hourly variability in the cloudy radiative state. Clouds warm the brighter and colder surfaces of Greenland, enhance snow melt, and tend to lower the albedo. Clouds cool the darker and warmer surfaces, inhibiting snow melt, which increases albedo and thus stabilizes surface melt. This stabilizing mechanism may also occur over sea ice, helping to forestall surface melt as the Arctic becomes dimmer.

  9. A Cloud Mask for AIRS

    NASA Technical Reports Server (NTRS)

    Brubaker, N.; Jedlovec, G. J.

    2004-01-01

    With the preliminary release of AIRS Level 1 and 2 data to the scientific community, there is a growing need for an accurate AIRS cloud mask for data assimilation studies and for producing products derived from cloud-free radiances. The current cloud information provided with the AIRS data is limited or based on simplified threshold tests. A multispectral cloud detection approach has been developed for AIRS that utilizes its hyperspectral capabilities to detect clouds based on specific cloud signatures across the shortwave and longwave infrared window regions. This new AIRS cloud mask has been validated against the existing AIRS Level 2 cloud product and cloud information derived from MODIS. Preliminary results for both day and night applications over the continental U.S. are encouraging. Details of the cloud detection approach and validation results will be presented at the conference.

  10. Ice Sheet Temperature Records - Satellite and In Situ Data from Antarctica and Greenland

    NASA Astrophysics Data System (ADS)

    Shuman, C. A.; Comiso, J. C.

    2001-12-01

    Recently completed decadal-length surface temperature records from Antarctica and Greenland are providing insights into the challenge of detecting climate change. Ice and snow cover at high latitudes influence the global climate system by reflecting much of the incoming solar energy back to space. An expected consequence of global warming is a decrease in the area covered by snow and ice and an increase in Earth's absorption of solar radiation. Models have predicted that the effects of climate warming may be amplified at high latitudes; thinning of the Greenland ice sheet margins and the breakup of Antarctic Peninsula ice shelves suggest this process may have begun. Satellite data provide an excellent means of observing climate parameters across both long temporal and remote spatial domains, but calibration and validation of their data remain a challenge. Infrared sensors can provide excellent temperature information, but cloud cover and calibration remain problems. Passive-microwave sensors can obtain data during the long polar night and through clouds but have calibration issues and a much lower spatial resolution. Automatic weather stations are generally spatially and temporally restricted and may have long gaps due to equipment failure. Stable isotopes of oxygen and hydrogen from ice sheet locations provide another means of determining temperature variations with time but are challenging to calibrate to observed temperatures and also represent restricted areas. This presentation will discuss these issues and elaborate on the development and limitations of composite satellite, automatic weather station, and proxy temperature data from selected sites in Antarctica and Greenland.

  11. Examining Dynamical Processes of Tropical Mountain Hydroclimate, Particularly During the Wet Season, Through Integration of Autonomous Sensor Observations and Climate Modeling

    NASA Astrophysics Data System (ADS)

    Hellstrom, R. A.; Fernandez, A.; Mark, B. G.; Covert, J. M.

    2016-12-01

    Peru is facing imminent water resource issues as glaciers retreat and demand increases, yet limited observations and model resolution hamper understanding of hydrometeorological processes on local to regional scales. Many current global and regional climate studies neglect the meteorological forcing of lapse rates (LRs) and of valley and slope wind dynamics on critical components of the Peruvian Andes' water cycle; herein we emphasize the wet season. In 2004 and 2005 we installed an autonomous sensor network (ASN) within the glacierized Llanganuco Valley, Cordillera Blanca (9°S), consisting of discrete, cost-effective, automatic temperature loggers located along the valley axis and anchored by two automatic weather stations. Comparisons of these embedded hydrometeorological measurements from the ASN and climate modeling by dynamical downscaling using the Weather Research and Forecasting model (WRF) elucidate distinct diurnal and seasonal characteristics of the mountain wind regime and LRs. Wind, temperature, humidity, and cloud simulations suggest that thermally driven up-valley and slope winds converging with easterly flow aloft enhance late-afternoon and evening cloud development, which helps explain the nocturnal wet-season precipitation maxima measured by the ASN. Furthermore, the extreme diurnal variability of the along-valley-axis LR and valley wind, detected from ground observations and confirmed by dynamical downscaling, demonstrates the importance of realistic scale parameterizations of the atmospheric boundary layer to improve regional climate model projections in mountainous regions. We are currently considering the use of intermediate-complexity climate models such as ICAR to reduce computing cost, and we continue to maintain the ASN in the Cordillera Blanca.

  12. Improvements in Night-Time Low Cloud Detection and MODIS-Style Cloud Optical Properties from MSG SEVIRI

    NASA Technical Reports Server (NTRS)

    Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome

    2011-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using the available channels on MSG SEVIRI. Available MODIS-style retrievals include IR-window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from the VIS/NIR bands, cloud effective radius from the 1.6 and 3.7 µm bands, and cloud ice/water path. We also provide a pixel-level uncertainty estimate for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fractions for marine stratocumulus clouds. A correction algorithm that improves detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.

  13. Optical and geometrical properties of cirrus clouds in Amazonia derived from 1 year of ground-based lidar measurements

    NASA Astrophysics Data System (ADS)

    Gouveia, Diego A.; Barja, Boris; Barbosa, Henrique M. J.; Seifert, Patric; Baars, Holger; Pauliquevis, Theotonio; Artaxo, Paulo

    2017-03-01

    Cirrus clouds cover a large fraction of tropical latitudes and play an important role in Earth's radiation budget. Their optical properties, altitude, and vertical and horizontal coverage control their radiative forcing, and hence detailed cirrus measurements at different geographical locations are of utmost importance. Studies reporting cirrus properties over tropical rain forests like the Amazon, however, are scarce. Studies with satellite profilers do not give information on the diurnal cycle, and satellite imagers do not report on the cloud vertical structure. At the same time, ground-based lidar studies are restricted to a few case studies. In this paper, we derive the first comprehensive statistics of optical and geometrical properties of upper-tropospheric cirrus clouds in Amazonia. We used 1 year (July 2011 to June 2012) of ground-based lidar atmospheric observations north of Manaus, Brazil. This dataset was processed by an automatic cloud detection and optical properties retrieval algorithm. Upper-tropospheric cirrus clouds were observed more frequently than previously reported for tropical regions. The frequency of occurrence was found to be as high as 88 % during the wet season and not lower than 50 % during the dry season. The diurnal cycle shows a minimum around local noon and a maximum during late afternoon, associated with the diurnal cycle of precipitation. The mean values of cirrus cloud top and base heights, cloud thickness, and cloud optical depth were 14.3 ± 1.9 (SD) km, 12.9 ± 2.2 km, 1.4 ± 1.1 km, and 0.25 ± 0.46, respectively. Cirrus clouds were found at temperatures down to -90 °C. Cirrus were frequently observed within the tropical tropopause layer (TTL); these are likely associated with slow mesoscale uplifting or with the remnants of overshooting convection. The vertical distribution was not uniform, and thin and subvisible cirrus occurred more frequently closer to the tropopause. The mean lidar ratio was 23.3 ± 8.0 sr. 
However, for subvisible cirrus clouds a bimodal distribution with a secondary peak at about 44 sr was found, suggesting a mixed composition. A dependence of the lidar ratio on cloud temperature (altitude) was not found, indicating that the clouds are vertically well mixed. The frequency of occurrence of cirrus clouds classified as subvisible (τ < 0.03) was 41.6 %, whilst 37.8 % were thin cirrus (0.03 < τ < 0.3) and 20.5 % opaque cirrus (τ > 0.3). Hence, central Amazonia has not only a high frequency of cirrus clouds but also a large fraction of subvisible cirrus clouds. This high frequency of subvisible cirrus clouds may contaminate aerosol optical depth measured by sun photometers and satellite sensors to an unknown extent.
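The optical-depth classification used in this record (subvisible, thin, opaque cirrus) reduces to simple threshold binning. A minimal sketch, using made-up τ values and placing the boundary values with the thicker class (the abstract's strict inequalities leave the boundaries themselves ambiguous):

```python
import numpy as np

# Hypothetical per-profile cirrus optical depths (tau) from a lidar retrieval.
tau = np.array([0.01, 0.02, 0.15, 0.5, 0.025, 0.3, 0.04, 1.2])

# Thresholds from the abstract: subvisible tau < 0.03, thin 0.03-0.3, opaque > 0.3.
subvisible = np.mean(tau < 0.03)
thin = np.mean((tau >= 0.03) & (tau < 0.3))
opaque = np.mean(tau >= 0.3)

print(f"subvisible: {subvisible:.1%}, thin: {thin:.1%}, opaque: {opaque:.1%}")
```

The three fractions always sum to one, so the split can serve as a quick consistency check on a retrieval's τ distribution.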

  14. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insight into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of the grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data points, limited only by the number of grains in the point-cloud dataset; and 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. 
This new 3D granulometric method is then applied to river terraces in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure-from-motion photogrammetry.
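The ellipsoidal fitting step described above can be sketched as a linear least-squares problem, assuming an axis-aligned ellipsoid and an already-segmented grain sub-cloud; the study's actual fitting algorithm (and its handling of arbitrary grain orientation) may differ:

```python
import numpy as np

def fit_ellipsoid_axes(points):
    """Least-squares fit of an axis-aligned ellipsoid
    x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 to a 3D point set;
    returns the semi-axes (a, b, c)."""
    p = points - points.mean(axis=0)                      # centre on the grain centroid
    coef, *_ = np.linalg.lstsq(p**2, np.ones(len(p)), rcond=None)
    return 1.0 / np.sqrt(coef)                            # coef = (1/a^2, 1/b^2, 1/c^2)

# Synthetic "grain": a grid of points on an ellipsoid with semi-axes (3, 2, 1).
u = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
v = np.linspace(0.1, np.pi - 0.1, 20)
uu, vv = np.meshgrid(u, v)
pts = np.column_stack([(3 * np.sin(vv) * np.cos(uu)).ravel(),
                       (2 * np.sin(vv) * np.sin(uu)).ravel(),
                       (1 * np.cos(vv)).ravel()])
axes = fit_ellipsoid_axes(pts)
print(axes)   # recovers roughly [3, 2, 1]
```

The recovered semi-axes then map directly onto grain-size metrics such as the b-axis used in classical granulometry; a goodness-of-fit residual on the same system could drive the results-checking phase the abstract mentions.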

  15. Orographic enhancement of rainfalls in the Rio San Francisco valley in southern Ecuador

    NASA Astrophysics Data System (ADS)

    Trachte, K.; Rollenbeck, R.; Bendix, J.

    2012-04-01

    In a tropical mountain rain forest in southern Ecuador, the diurnal dynamics of cloud development and precipitation behavior are investigated in the framework of the DFG research unit 816. With automatic climate stations and a rain radar, rainfalls in the Rio San Francisco valley are recorded. The observations showed the typical tropical late-afternoon convective precipitation as well as local events such as mountain-valley breezes and windward-lee effects. Additionally, the data revealed an unusual early-morning peak that could be recognized as convective rainfall. On the basis of GOES-E satellite imagery, these rainfalls could be traced back to nocturnal convective clouds at the eastern Andes Mountains. There are several explanations for the occurrence of these clouds: one already-examined mechanism is a katabatically induced cold front at the foothills of the Andes in the Peruvian Amazon basin. In this region the mountains form a quasi-concave configuration that contributes to a convergence of cold-air drainage with subsequent convective activity. Another explanation for the events is orographic enhancement by a local seeder-feeder mechanism. Mesoscale convective systems from the Amazon basin are transported to the west by the trade winds. At the Andes Mountains the complex and massive orography acts like a barrier to the clouds. The result is a disconnection of the upper part of the cloud from the lower part. The latter rains out at the eastern slopes while the upper cloud is transported further to the west. There it acts as a seeder to lower-level clouds, i.e., the feeder. With the numerical model ARPS (Advanced Regional Prediction System) this process is investigated on the basis of two case studies. The events are detected and selected through the analysis of GOES-E brightness temperatures, which are also used to compare and validate the results of the model. Finally, the orographic enhancement of the clouds is examined. 
By using a vertically pointing radar, the development of the resulting precipitation is analyzed and discussed in the context of a seeder-feeder mechanism.

  16. A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.

    2014-12-01

    A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of noise, which increases with distance; then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method detects clouds and aerosols with high accuracy, although the classification of aerosols versus clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) site and the Taihu site in China. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows a bimodal vertical distribution with maximum frequencies at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.
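The final threshold-based layer detection step can be sketched as finding contiguous runs of a lidar profile above an empirical threshold. The fixed threshold here stands in for the abstract's SDP/VDE preprocessing, whose exact formulation is not given, and the profile values are synthetic:

```python
import numpy as np

def detect_layers(profile, z, threshold):
    """Return (base, top) heights of contiguous runs where the
    (range-corrected) lidar signal exceeds an empirical threshold."""
    mask = profile > threshold
    edges = np.diff(mask.astype(int))          # +1 at layer base, -1 at layer top
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0]
    if mask[0]:                                # layer touching the first bin
        starts = np.r_[0, starts]
    if mask[-1]:                               # layer touching the last bin
        ends = np.r_[ends, len(mask) - 1]
    return [(z[s], z[e]) for s, e in zip(starts, ends)]

z = np.arange(0.0, 15.0, 0.03)                 # 30 m range bins, heights in km
signal = np.full_like(z, 0.05)                 # background noise level
signal[(z > 1.0) & (z < 2.0)] = 1.5            # aerosol-like layer
signal[(z > 8.0) & (z < 9.5)] = 3.0            # cloud-like layer
layers = detect_layers(signal, z, threshold=0.5)
print(layers)                                  # two layers: ~1-2 km and ~8-9.5 km
```

Separating aerosol from cloud would then need a second threshold on layer intensity or gradient, which is where the method's reported sensitivity to threshold choice enters.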

  17. Detection of Single Tree Stems in Forested Areas from High Density ALS Point Clouds Using 3d Shape Descriptors

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Polewski, P.; Yao, W.; Krzystek, P.; Skidmore, A. K.

    2017-09-01

    Airborne Laser Scanning (ALS) is a widespread method for forest mapping and management purposes. While common ALS techniques provide valuable information about the forest canopy and intermediate layers, the point density near the ground may be poor due to dense overstory conditions. The current study highlights a new method for detecting stems of single trees in 3D point clouds obtained from high-density ALS with a density of 300 points/m². Compared to standard ALS data, this elevated point density, a consequence of the lower flight height (150-200 m), leads to more laser reflections from tree stems. In this work, we propose a three-tiered method which works on the point, segment and object levels. First, for each point we calculate the likelihood that it belongs to a tree stem, derived from the radiometric and geometric features of its neighboring points. In the next step, we construct short stem segments based on high-probability stem points, and classify the segments by considering the distribution of points around them as well as their spatial orientation, which encodes the prior knowledge that trees are mainly vertically aligned due to gravity. Finally, we apply hierarchical clustering on the positively classified segments to obtain point sets corresponding to single stems, and perform ℓ1-based orthogonal distance regression to robustly fit lines through each stem point set. The ℓ1-based method is less sensitive to outliers than least-squares approaches. From the fitted lines, the planimetric tree positions can then be derived. Experiments were performed on two plots in the Hochficht forest in the Oberösterreich region of Austria. We marked a total of 196 reference stems in the point clouds of both plots by visual interpretation. The evaluation of the automatically detected stems showed a classification precision of 0.86 and 0.85 for Plots 1 and 2, respectively, with recall values of 0.7 and 0.67.
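The final line-fitting step can be sketched with ordinary (least-squares) orthogonal distance regression via SVD; note this is a deliberate simplification of the paper's more outlier-robust ℓ1 variant, and the stem points are synthetic:

```python
import numpy as np

def fit_stem_line(points):
    """Orthogonal distance line fit through a 3D stem point set:
    the line passes through the centroid along the first principal
    direction. Least-squares (PCA) variant; the paper's l1 version
    is more robust to outliers but yields the same centre+direction
    parameterization."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    direction = vt[0]                  # direction of maximum variance
    if direction[2] < 0:               # orient upward: stems are near-vertical
        direction = -direction
    return centroid, direction

# Synthetic near-vertical stem: 10 m tall, small horizontal scatter.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
pts = np.column_stack([0.02 * rng.standard_normal(200) + 5.0,
                       0.02 * rng.standard_normal(200) + 3.0,
                       t])
c, d = fit_stem_line(pts)
print(c, d)        # centroid near (5, 3, 5); direction near (0, 0, 1)
```

The planimetric tree position mentioned in the abstract would follow by intersecting the fitted line with the terrain height, i.e. evaluating the line at the local ground elevation.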

  18. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    NASA Astrophysics Data System (ADS)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. This method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen differences in textural features, and texture detection is then performed via a gray-level co-occurrence matrix based on the acquired texture detail image. Finally, candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and we further adopt an adaptive morphological dilation to refine them for thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
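The first and last steps (adaptive intensity threshold, morphological dilation) can be sketched as follows. The threshold form (mean + k·std) and the 3x3 dilation are assumptions, since the abstract does not give the exact formulas, and the guided-filter/GLCM texture stage is omitted:

```python
import numpy as np

def dilate(mask):
    """One 3x3 binary dilation pass using shifts (pure NumPy; the
    edge wrap-around introduced by np.roll is ignored in this demo)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def coarse_cloud_mask(img, k=1.0):
    """Coarse cloud mask for a panchromatic image: adaptive intensity
    threshold (mean + k*std, an assumed form), then one dilation pass
    standing in for the adaptive morphological refinement step."""
    thresh = img.mean() + k * img.std()
    return dilate(img > thresh)

# Synthetic scene: dark ground with one bright cloud patch.
img = np.full((100, 100), 40.0)
img[30:50, 30:60] = 220.0                 # cloud pixels
mask = coarse_cloud_mask(img)
print(mask[40, 45], mask[5, 5])           # True False
```

In the full method this mask would be intersected with the texture-based mask before the dilation, which is what suppresses bright non-cloud surfaces such as snow or bare rock.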

  19. Comparison of Cloud Detection Using the CERES-MODIS Ed4 and LaRC AVHRR Cloud Masks and CALIPSO Vertical Feature Mask

    NASA Astrophysics Data System (ADS)

    Trepte, Q. Z.; Minnis, P.; Palikonda, R.; Bedka, K. M.; Sun-Mack, S.

    2011-12-01

    Accurate detection of cloud amount and distribution using satellite observations is crucial in determining cloud radiative forcing and the earth's energy budget. The CERES-MODIS (CM) Edition 4 cloud mask is a global cloud detection algorithm for application to Terra and Aqua MODIS data with the aid of other ancillary data sets. It is used operationally for NASA's Clouds and the Earth's Radiant Energy System (CERES) project. The LaRC AVHRR cloud mask, which uses only five spectral channels, is based on a subset of the CM cloud mask, which employs twelve MODIS channels. The LaRC mask is applied to AVHRR data for the NOAA Climate Data Record Program. Comparisons among the CM Ed4 and LaRC AVHRR cloud masks and the CALIPSO Vertical Feature Mask (VFM) constitute a powerful means for validating and improving cloud detection globally. They also help us understand the strengths and limitations of the various cloud retrievals which use either active or passive satellite sensors. In this paper, individual comparisons will be presented for different types of clouds over various surfaces, including daytime and nighttime, and polar and non-polar regions. Additionally, statistics of the global, regional, and zonal cloud occurrence and amount from the CERES Ed4 and AVHRR cloud masks and the CALIPSO VFM will be discussed.

  20. SAR processing in the cloud for oil detection in the Arctic

    NASA Astrophysics Data System (ADS)

    Garron, J.; Stoner, C.; Meyer, F. J.

    2016-12-01

    A new world of opportunity is being thawed from the ice of the Arctic, driven by decreased persistent Arctic sea-ice cover and increases in shipping, tourism, and natural resource development. Tools that can automatically monitor key sea ice characteristics and potential oil spills are essential for safe passage in these changing waters. Synthetic aperture radar (SAR) data can be used to discriminate sea ice types and oil on the ocean surface, and also for feature tracking. Additionally, SAR can image the earth through the night and in most weather conditions. SAR data are volumetrically large and require significant computing power to manipulate. Algorithms designed to identify key environmental features, like oil spills, in SAR imagery require secondary processing and are computationally intensive, which can functionally limit their application in a real-time setting. Cloud processing is designed to manage big data and big data processing jobs by means of small cycles of off-site computation, eliminating up-front hardware costs. Pairing SAR data with cloud processing has allowed us to create and solidify a processing pipeline for SAR data products in the cloud to compare operational algorithms' efficiency and effectiveness when run using an Alaska Satellite Facility (ASF) defined Amazon Machine Image (AMI). The products created from this secondary processing were compared to determine which algorithm was most accurate in Arctic feature identification, and what operational conditions were required to produce the results on the ASF-defined AMI. Results will be used to inform a series of recommendations to oil-spill response data managers and SAR users interested in expanding their analytical computing power.

  1. Real-Time Estimation of Volcanic ASH/SO2 Cloud Height from Combined Uv/ir Satellite Observations and Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Vicente, Gilberto A.

    An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near-real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height, and directional information to find the best match between the simulated and the observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Centers (VAACs) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track, and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis, and the SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from the AVHRR instruments on NOAA POES-16, 17, 18, and 19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial-condition inputs such as the location and time of the volcanic eruption, followed by the automatic real-time tracking of all available satellite data, subsequent activation of the iterative approach, and the data/product delivery process in numerical and graphical format for operational applications.

  2. Cloud-Based CT Dose Monitoring using the DICOM-Structured Report: Fully Automated Analysis in Regard to National Diagnostic Reference Levels.

    PubMed

    Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P

    2016-03-01

    To implement automated CT dose data monitoring using the DICOM Structured Report (DICOM-SR) in order to monitor dose-related CT data with regard to national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data were automatically analyzed according to body region, patient age, and the corresponding DRLs for the volumetric computed tomography dose index (CTDIvol) and dose-length product (DLP). Data from 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% of the corresponding national DRLs for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098), and 37.8% and 44.0% for chest CT (n=10,387), respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols of the same body region, radiation exposure varied by up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking with regard to national DRLs. Overall, the local dose exposure from CT reached approximately 50% of these DRLs, indicating that an update of the DRLs as well as protocol-specific DRLs are desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments. 
• The newly developed software based on the DICOM-Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking in regard to national DRLs • The local radiation exposure from CT reached approximately 50 % of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.
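The benchmarking logic described above amounts to expressing each examination's CTDIvol and DLP as fractions of the region-specific reference level and flagging exceedances. A minimal sketch with illustrative (not official) DRL values and a hypothetical examination record:

```python
# Illustrative DRLs per body region (NOT the official national values).
DRLS = {
    "abdomen": {"ctdi_vol": 20.0, "dlp": 900.0},
    "head":    {"ctdi_vol": 60.0, "dlp": 950.0},
    "chest":   {"ctdi_vol": 12.0, "dlp": 400.0},
}

def benchmark(exam):
    """Express one CT examination's dose indices as fractions of the
    DRL for its body region, flagging any DRL exceedance."""
    drl = DRLS[exam["region"]]
    ctdi_frac = exam["ctdi_vol"] / drl["ctdi_vol"]
    dlp_frac = exam["dlp"] / drl["dlp"]
    return {"ctdi_frac": ctdi_frac,
            "dlp_frac": dlp_frac,
            "exceeds": ctdi_frac > 1.0 or dlp_frac > 1.0}

# Hypothetical chest examination parsed from a DICOM-SR dose report.
exam = {"region": "chest", "ctdi_vol": 4.5, "dlp": 176.0}
print(benchmark(exam))   # about 37.5% and 44% of the chest DRLs, no exceedance
```

In a cloud deployment this function would run per incoming anonymized DICOM-SR, with the fractions aggregated per protocol to produce the mean-percentage statistics reported in the abstract.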

  3. Ground-based Nighttime Cloud Detection Using a Commercial Digital Camera: Observations at Manila Observatory (14.64N, 121.07E)

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.

    2014-12-01

Cloud detection during nighttime poses a real problem to researchers because of a lack of optimum sensors that can specifically detect clouds at this time of day. Hence, lidars and satellites are currently among the instruments used to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they serve as barriers to thermal radiation from the Earth, reflecting this radiation back to the surface and thereby lowering the rate at which atmospheric temperature decreases at night. The objective of this study is to detect cloud occurrence at nighttime in order to study patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon PowerShot A2300) is operated continuously to capture nighttime clouds. The camera is housed in a weather-proof box with a glass cover and placed on the rooftop of the Manila Observatory building; it gathers pictures of the sky every 5 min to observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds: in grayscale, cloudy pixels have higher values than cloud-free ones. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern cloudy from cloud-free pixels. Figs. 1a and 1b are sample unprocessed pictures of a cloudless night (May 22-23, 2014) and a cloudy sky (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence, while Fig. 1d shows higher cloud occurrence than Fig. 1c. These graphs demonstrate the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is currently implemented. In regions where there is a dearth of scientific data, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
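The grayscale-thresholding and occurrence-counting steps described in this abstract can be sketched as follows; the function names, the 100x100 synthetic image, and its pixel values are illustrative, not from the paper:

```python
import numpy as np

def nighttime_cloud_mask(gray, frac=0.34):
    # A pixel is flagged as cloud when its grayscale value exceeds
    # a fixed fraction (0.34 in the paper) of the image maximum.
    return gray > frac * gray.max()

def cloud_occurrence(masks):
    # Per-pixel cloud occurrence: number of cloudy observations
    # divided by the total number of observations.
    return np.stack(masks).mean(axis=0)

# Synthetic night-sky image: dark background (20) with a bright patch (200)
img = np.full((100, 100), 20, dtype=np.uint8)
img[40:60, 40:60] = 200
mask = nighttime_cloud_mask(img)
print(mask.sum())  # 400 pixels flagged as cloud
```

Averaging many such masks over a night yields the per-pixel occurrence maps shown in Figs. 1c and 1d.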

  4. Spatially Varying Spectral Thresholds for MODIS Cloud Detection

    NASA Technical Reports Server (NTRS)

    Haines, S. L.; Jedlovec, G. J.; Lafontaine, F.

    2004-01-01

    The EOS science team has developed an elaborate global MODIS cloud detection procedure, and the resulting MODIS product (MOD35) is used in the retrieval process of several geophysical parameters to mask out clouds. While the global application of the cloud detection approach appears quite robust, the product has some shortcomings on the regional scale, often over-determining clouds in a variety of settings, particularly at night. This over-determination of clouds can cause a reduction in the spatial coverage of MODIS-derived clear-sky products. To minimize this problem, a new regional cloud detection method for use with MODIS data has been developed at NASA's Global Hydrology and Climate Center (GHCC). The approach is similar to that used by the GHCC for GOES data over the continental United States. Several spatially varying thresholds are applied to MODIS spectral data to produce a set of tests for detecting clouds. The thresholds are valid for each MODIS orbital pass, and are derived from 20-day composites of GOES channels with similar wavelengths to MODIS. This paper and accompanying poster will introduce the GHCC MODIS cloud mask, provide some examples, and present some preliminary validation.
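A spatially varying threshold test of the kind described here can be sketched as a pixel-wise comparison against a clear-sky composite; the 4 K offset and the brightness-temperature values below are illustrative assumptions, not the GHCC values:

```python
import numpy as np

def composite_bt_test(bt11, clear_composite, offset=4.0):
    # Cloud test with a spatially varying threshold: a pixel is flagged
    # cloudy when its 11-um brightness temperature falls more than
    # `offset` K below the clear-sky composite for that location.
    return bt11 < clear_composite - offset

# Observed 11-um brightness temperatures (K) and a 20-day composite
bt11 = np.array([[285.0, 270.0], [288.0, 250.0]])
composite = np.array([[290.0, 289.0], [290.0, 288.0]])
print(composite_bt_test(bt11, composite))
```

Because the composite varies per pixel, the effective threshold adapts to local surface conditions rather than using one global cutoff.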

  5. Multilayer Cloud Detection with the MODIS Near-Infrared Water Vapor Absorption Band

    NASA Technical Reports Server (NTRS)

    Wind, Galina; Platnick, Steven; King, Michael D.; Hubanks, Paul A.; Pavolonis, Michael J.; Heidinger, Andrew K.; Yang, Ping; Baum, Bryan A.

    2009-01-01

    Data Collection 5 processing for the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the NASA Earth Observing System EOS Terra and Aqua spacecraft includes an algorithm for detecting multilayered clouds in daytime. The main objective of this algorithm is to detect multilayered cloud scenes, specifically optically thin ice cloud overlying a lower-level water cloud, that present difficulties for retrieving cloud effective radius using single-layer plane-parallel cloud models. The algorithm uses the MODIS 0.94 micron water vapor band along with CO2 bands to obtain two above-cloud precipitable water retrievals, the difference of which, in conjunction with additional tests, provides a map of where multilayered clouds might potentially exist. The presence of a multilayered cloud results in a large difference in retrievals of above-cloud properties between the CO2 and the 0.94 micron methods. In this paper the MODIS multilayered cloud algorithm is described, results of using the algorithm over example scenes are shown, and global statistics for multilayered clouds as observed by MODIS are discussed. A theoretical study of the algorithm behavior for simulated multilayered clouds is also given. Results are compared to two other comparable passive imager methods. A set of standard cloudy atmospheric profiles developed during the course of this investigation is also presented. The results lead to the conclusion that the MODIS multilayer cloud detection algorithm has some skill in identifying multilayered clouds with different thermodynamic phases.
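The core idea, that two independent above-cloud precipitable-water retrievals agree for single-layer clouds and diverge for multilayer scenes, can be sketched as below. The function name, the 0.3 cm threshold, and the sample values are illustrative; the operational algorithm also applies additional tests:

```python
import numpy as np

def multilayer_candidate(pw_co2, pw_094, threshold=0.3):
    # CO2-band and 0.94-um above-cloud precipitable-water retrievals
    # (cm) agree for single-layer clouds; a large difference marks a
    # pixel as a potential multilayer scene.
    return np.abs(pw_co2 - pw_094) > threshold

pw_co2 = np.array([0.50, 0.60, 1.40])
pw_094 = np.array([0.45, 0.65, 0.50])
print(multilayer_candidate(pw_co2, pw_094))  # only the last pixel is flagged
```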

  6. Automated cloud and shadow detection and filling using two-date Landsat imagery in the United States

    USGS Publications Warehouse

    Jin, Suming; Homer, Collin G.; Yang, Limin; Xian, George; Fry, Joyce; Danielson, Patrick; Townsend, Philip A.

    2013-01-01

    A simple, efficient, and practical approach for detecting cloud and shadow areas in satellite imagery and restoring them with clean pixel values has been developed. Cloud and shadow areas are detected using spectral information from the blue, shortwave infrared, and thermal infrared bands of Landsat Thematic Mapper or Enhanced Thematic Mapper Plus imagery from two dates (a target image and a reference image). These detected cloud and shadow areas are further refined using an integration process and a false-shadow removal process based on the geometric relationship between cloud and shadow. Cloud and shadow filling is based on the concept of the Spectral Similarity Group (SSG), which uses the reference image to find similar alternative pixels in the target image to serve as replacement values for restored areas. Pixels are considered to belong to one SSG if their values from Landsat bands 3, 4, and 5 in the reference image fall within the same spectral ranges. This new approach was applied to five Landsat path/rows across different landscapes and seasons with various types of cloud patterns. Results show that almost all of the clouds were captured with minimal commission errors, and shadows were detected reasonably well. Among the five test scenes, the lowest producer's accuracy of cloud detection was 93.9% and the lowest user's accuracy was 89%. The overall cloud and shadow detection accuracy ranged from 83.6% to 99.3%. The pixel-filling approach resulted in a new cloud-free image that appears seamless and spatially continuous despite differences in phenology between the target and reference images. Our methods offer a straightforward and robust approach for preparing images for the new 2011 National Land Cover Database production.
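The SSG-based filling step can be sketched as grouping pixels by binned reference-image spectra and replacing contaminated target pixels with clean same-group values. A single-band target and the bin width are simplifying assumptions for illustration; the paper groups on bands 3, 4, and 5:

```python
import numpy as np
from collections import defaultdict

def fill_with_ssg(target, reference, contaminated, bin_width=10):
    # Pixels belong to one Spectral Similarity Group (SSG) when their
    # reference-image band values fall into the same spectral bins;
    # contaminated target pixels are replaced by the mean of clean
    # target pixels from the same SSG.
    keys = [tuple(g) for g in
            (reference // bin_width).reshape(-1, reference.shape[-1])]
    flat_t = target.reshape(-1).astype(float)
    flat_c = contaminated.reshape(-1)
    sums, counts = defaultdict(float), defaultdict(int)
    for k, v, c in zip(keys, flat_t, flat_c):
        if not c:
            sums[k] += v
            counts[k] += 1
    filled = flat_t.copy()
    for i, (k, c) in enumerate(zip(keys, flat_c)):
        if c and counts[k]:
            filled[i] = sums[k] / counts[k]
    return filled.reshape(target.shape)

# 2x2 scene, one SSG (identical reference spectra), pixel (1,0) cloudy
reference = np.zeros((2, 2, 3), dtype=int)
target = np.array([[10.0, 12.0], [100.0, 14.0]])
cloudy = np.array([[False, False], [True, False]])
print(fill_with_ssg(target, reference, cloudy))  # cloudy pixel -> 12.0
```

Because replacements come from the same date's target image, the filled result stays radiometrically consistent despite phenology differences with the reference date.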

  7. Cloud-ECG for real time ECG monitoring and analysis.

    PubMed

    Xia, Henian; Asif, Irfan; Zhao, Xiaopeng

    2013-06-01

    Recent advances in mobile technology and cloud computing have inspired numerous designs of cloud-based health care services and devices. Within the cloud system, medical data can be collected and transmitted automatically to medical professionals from anywhere, and feedback can be returned to patients through the network. In this article, we developed a cloud-based system for clients with mobile devices or web browsers. Specifically, we aim to address the issues regarding the usefulness of ECG data collected by patients themselves. Algorithms for ECG enhancement, ECG quality evaluation and ECG parameter extraction were implemented in the system. The system was demonstrated by a use case, in which ECG data was uploaded to the web server from a mobile phone at a certain frequency and analysis was performed in real time on the server. The system has been proven to be functional, accurate and efficient. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  9. 46 CFR 78.47-13 - Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and smoke detecting alarm bells. 78.47-13 Section 78.47-13 Shipping COAST GUARD, DEPARTMENT OF.... § 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells. (a) The fire detecting and manual alarm automatic sprinklers, and smoke detecting alarm bells in the...

  10. 46 CFR 78.47-13 - Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and smoke detecting alarm bells. 78.47-13 Section 78.47-13 Shipping COAST GUARD, DEPARTMENT OF.... § 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells. (a) The fire detecting and manual alarm automatic sprinklers, and smoke detecting alarm bells in the...

  11. 46 CFR 78.47-13 - Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and smoke detecting alarm bells. 78.47-13 Section 78.47-13 Shipping COAST GUARD, DEPARTMENT OF.... § 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells. (a) The fire detecting and manual alarm automatic sprinklers, and smoke detecting alarm bells in the...

  12. 46 CFR 78.47-13 - Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and smoke detecting alarm bells. 78.47-13 Section 78.47-13 Shipping COAST GUARD, DEPARTMENT OF.... § 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells. (a) The fire detecting and manual alarm automatic sprinklers, and smoke detecting alarm bells in the...

  13. 46 CFR 78.47-13 - Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and smoke detecting alarm bells. 78.47-13 Section 78.47-13 Shipping COAST GUARD, DEPARTMENT OF.... § 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells. (a) The fire detecting and manual alarm automatic sprinklers, and smoke detecting alarm bells in the...

  14. Cloud Environment Automation: from infrastructure deployment to application monitoring

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.

    2017-10-01

    The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for infrastructure maintainers. This paper presents the research activity, carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new open, interoperable, on-demand technological solutions in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.

  15. Detecting Super-Thin Clouds With Polarized Light

    NASA Technical Reports Server (NTRS)

    Sun, Wenbo; Videen, Gorden; Mishchenko, Michael I.

    2014-01-01

    We report a novel method for detecting cloud particles in the atmosphere. Solar radiation backscattered from clouds is studied with both satellite data and a radiative transfer model. A distinct feature is found in the angle of linear polarization of solar radiation that is backscattered from clouds. The dominant backscattered electric field from the clear-sky Earth-atmosphere system is nearly parallel to the Earth surface. However, when clouds are present, this electric field can rotate significantly away from the parallel direction. Model results demonstrate that this polarization feature can be used to detect super-thin cirrus clouds having an optical depth of only 0.06 and super-thin liquid water clouds having an optical depth of only 0.01. Such clouds are too thin to be sensed using any current passive satellite instruments.

  16. Detecting Super-Thin Clouds with Polarized Sunlight

    NASA Technical Reports Server (NTRS)

    Sun, Wenbo; Videen, Gorden; Mishchenko, Michael I.

    2014-01-01

    We report a novel method for detecting cloud particles in the atmosphere. Solar radiation backscattered from clouds is studied with both satellite data and a radiative transfer model. A distinct feature is found in the angle of linear polarization of solar radiation that is backscattered from clouds. The dominant backscattered electric field from the clear-sky Earth-atmosphere system is nearly parallel to the Earth surface. However, when clouds are present, this electric field can rotate significantly away from the parallel direction. Model results demonstrate that this polarization feature can be used to detect super-thin cirrus clouds having an optical depth of only 0.06 and super-thin liquid water clouds having an optical depth of only 0.01. Such clouds are too thin to be sensed using any current passive satellite instruments.

  17. Automated cloud screening of AVHRR imagery using split-and-merge clustering

    NASA Technical Reports Server (NTRS)

    Gallaudet, Timothy C.; Simpson, James J.

    1991-01-01

    Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.

  18. Comparison of monthly nighttime cloud fraction products from MODIS and AIRS and ground-based camera over Manila Observatory (14.64N, 121.07E)

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2017-12-01

    Cloud detection nowadays is primarily achieved with various sensors aboard satellites, including MODIS Aqua, MODIS Terra, and AIRS, whose products include nighttime cloud fraction. Ground-based instruments are, however, only secondary to these satellites when it comes to cloud detection. Nonetheless, these ground-based instruments (e.g., lidars, ceilometers, and sky cameras) offer significant datasets on a particular region's cloud cover. For nighttime cloud detection, satellite-based instruments are used more reliably and prominently than ground-based ones; a ground-based instrument operated at night therefore ought to produce reliable scientific datasets. The objective of this study is to compare the results of a nighttime ground-based instrument (a sky camera) with those of MODIS Aqua and MODIS Terra. A Canon PowerShot A2300 is placed on top of Manila Observatory (14.64N, 121.07E) and configured to take images of the night sky at 5-min intervals. To detect pixels with clouds, the pictures are converted to grayscale. A thresholding technique is used to separate cloudy from cloud-free pixels: a pixel with a value greater than 17 is classified as cloud, and otherwise as non-cloud (Gacal et al., 2016). This algorithm is applied to the data gathered from Oct 2015 to Oct 2016. A scatter plot between satellite cloud fraction over the area bounded by 14.2877N, 120.9869E and 14.7711N, 121.4539E and ground-measured cloud cover is used to find the monthly correlation. During the wet season (June-November), satellite nighttime cloud fraction versus ground-measured cloud cover produces acceptable R2 values (Aqua = 0.74, Terra = 0.71, AIRS = 0.76). However, during the dry season, poor R2 values are obtained (AIRS = 0.39, Aqua & Terra = 0.01). The high correlation during the wet season can be attributed to a high probability that the camera and satellite see the same clouds. During the dry season, however, the satellite sees high-altitude clouds that the camera cannot detect from the ground, as it relies on city lights reflected from low-level clouds. With this acknowledged disparity, the ground-based camera has the advantage of detecting haze and thin clouds near the ground that are hardly or not at all detected by the satellites.
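The monthly comparison reported here amounts to computing a coefficient of determination between the two cloud-cover series; the sketch below uses hypothetical monthly means, not the study's data:

```python
import numpy as np

def monthly_r2(camera_cover, satellite_fraction):
    # Coefficient of determination (R^2) between monthly ground-camera
    # cloud cover and satellite nighttime cloud fraction.
    r = np.corrcoef(camera_cover, satellite_fraction)[0, 1]
    return r ** 2

# Hypothetical wet-season monthly means (fraction of sky covered)
cam = np.array([0.60, 0.72, 0.81, 0.65, 0.70, 0.78])
sat = np.array([0.58, 0.75, 0.85, 0.61, 0.73, 0.80])
print(round(monthly_r2(cam, sat), 2))
```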

  19. Morphological diagnostics of star formation in molecular clouds

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher Norris

    Molecular clouds are the birth sites of all star formation in the present-day universe. They represent the initial conditions of star formation, and are the primary medium by which stars transfer energy and momentum back to parsec scales. Yet, the physical evolution of molecular clouds remains poorly understood. This is not due to a lack of observational data, nor is it due to an inability to simulate the conditions inside molecular clouds. Instead, the physics and structure of the interstellar medium are sufficiently complex that interpreting molecular cloud data is very difficult. This dissertation mitigates this problem by developing more sophisticated ways to interpret morphological information in molecular cloud observations and simulations. In particular, I have focused on leveraging machine learning techniques to identify physically meaningful substructures in the interstellar medium, as well as techniques to inter-compare molecular cloud simulations and observations. These contributions make it easier to understand the interplay between molecular clouds and star formation. Specific contributions include: new insight about the sheet-like geometry of molecular clouds based on observations of stellar bubbles; a new algorithm to disambiguate overlapping yet morphologically distinct cloud structures; a new perspective on the relationship between molecular cloud column density distributions and the sizes of cloud substructures; a quantitative analysis of how projection effects affect measurements of cloud properties; and an automatically generated, statistically calibrated catalog of bubbles identified from their infrared morphologies.

  20. Multi-temporal thermal analyses for submarine groundwater discharge (SGD) detection over large spatial scales in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hennig, Hanna; Mallast, Ulf; Merz, Ralf

    2015-04-01

    Submarine groundwater discharge (SGD) sites act as important pathways for nutrients and contaminants that deteriorate marine ecosystems. In the Mediterranean, an estimated 75% of freshwater input is contributed by karst aquifers. Thermal remote sensing can be used for pre-screening of potential SGD sites in order to optimize field surveys. Although different platforms (ground-, air- and spaceborne) may serve for thermal remote sensing, the most cost-effective are spaceborne platforms (satellites), which also cover the largest spatial scale (>100 km per image). Therefore, an automated and objective approach using thermal satellite images from Landsat 7 and Landsat 8 was applied to localize potential SGD sites on a large spatial scale. The method of Mallast et al. (2014), which uses descriptive statistical parameters, specifically the range and standard deviation, was adapted to the Mediterranean Sea. Since the method was developed for the Dead Sea, where cloud-covered satellite images are rare and no sea-level change occurs through tidal cycles, it was essential to adapt it to a region where tidal cycles occur and cloud cover is more frequent. These adaptations include: (1) automatic and adaptive coastline detection; (2) inclusion and processing of cloud-covered scenes to enlarge the data basis; (3) implementation of tidal data in order to analyze low-tide images, since SGD is enhanced during these phases; and (4) a test of the applicability of Landsat 8 images, which will provide data in the future once Landsat 7 stops working. As previously shown, the range method gives more accurate results than the standard deviation. However, its result depends exclusively on two scenes (minimum and maximum) and is largely influenced by outliers. To counteract this drawback, we developed a new approach. Since it is assumed that sea surface temperature (SST) is stabilized by groundwater at SGD sites, the slope of a bootstrapped linear model fitted to the sorted SST per pixel should be less steep than the slope of the surrounding area, resulting in less influence from outliers and an equal weighting of all integrated scenes. Both methods can detect SGD sites in the Mediterranean regardless of discharge characteristics (diffuse or focused); exceptions are sites with deep emergences. Results were better in bays than at more exposed sites. Since the range of the SST is mostly influenced by the maximum and minimum of the scenes, the slope approach can be seen as a more representative method that uses all scenes. References: Mallast, U., Gloaguen, R., Friesen, J., Rödiger, T., Geyer, S., Merz, R., Siebert, C., 2014. How to identify groundwater-caused thermal anomalies in lakes based on multi-temporal satellite data in semi-arid regions. Hydrol. Earth Syst. Sci. 18 (7), 2773-2787.
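The slope approach can be sketched as a per-pixel least-squares fit to the sorted SST stack; the plain least-squares fit below stands in for the bootstrapped fit mentioned in the abstract, and the sample stack is synthetic:

```python
import numpy as np

def sorted_sst_slope(sst_stack):
    # Fit a straight line to the *sorted* SST time series of each pixel
    # (simple least squares here, in place of the abstract's
    # bootstrapped fit); SGD-stabilized pixels yield flatter slopes
    # than their surroundings.
    n = sst_stack.shape[0]
    y = np.sort(sst_stack, axis=0)          # sort scenes per pixel
    xc = np.arange(n, dtype=float) - (n - 1) / 2.0
    num = (xc[:, None, None] * (y - y.mean(axis=0))).sum(axis=0)
    return num / (xc ** 2).sum()

# Pixel 0: SGD-stabilized (constant 20 C); pixel 1: variable SST
stack = np.array([[[20.0, 22.0]],
                  [[20.0, 18.0]],
                  [[20.0, 20.0]],
                  [[20.0, 21.0]],
                  [[20.0, 19.0]]])
print(sorted_sst_slope(stack))  # flat slope marks the SGD pixel
```

Unlike the range, which uses only the two extreme scenes, the slope weights every scene equally and so is less sensitive to outliers.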

  1. Molecular clouds without detectable CO

    NASA Technical Reports Server (NTRS)

    Blitz, Leo; Bazell, David; Desert, F. Xavier

    1990-01-01

    The clouds identified by Desert, Bazell, and Boulanger (DBB clouds) in their search for high-latitude molecular clouds were observed in the CO (J = 1-0) line, but only 13 percent of the sample was detected. The remaining 87 percent are diffuse molecular clouds with CO abundances of about 10 to the -6th, a typical value for diffuse clouds. This hypothesis is shown to be consistent with Copernicus data. The DBB clouds are shown to be an essentially complete catalog of diffuse molecular clouds in the solar vicinity. The total molecular surface density in the vicinity of the sun is then only about 20 percent greater than the 1.3 solar masses/sq pc determined by Dame et al. (1987). Analysis of the CO detections indicates that there is a sharp threshold in extinction of 0.25 mag before CO is detectable, derived from the IRAS I(100) micron threshold of 4 MJy/sr. This threshold is presumably where the CO abundance exhibits a sharp increase.

  2. Molecular clouds without detectable CO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blitz, L.; Bazell, D.; Desert, F.X.

    1990-03-01

    The clouds identified by Desert, Bazell, and Boulanger (DBB clouds) in their search for high-latitude molecular clouds were observed in the CO (J = 1-0) line, but only 13 percent of the sample was detected. The remaining 87 percent are diffuse molecular clouds with CO abundances of about 10 to the -6th, a typical value for diffuse clouds. This hypothesis is shown to be consistent with Copernicus data. The DBB clouds are shown to be an essentially complete catalog of diffuse molecular clouds in the solar vicinity. The total molecular surface density in the vicinity of the sun is then only about 20 percent greater than the 1.3 solar masses/sq pc determined by Dame et al. (1987). Analysis of the CO detections indicates that there is a sharp threshold in extinction of 0.25 mag before CO is detectable, derived from the IRAS I(100) micron threshold of 4 MJy/sr. This threshold is presumably where the CO abundance exhibits a sharp increase. 18 refs.

  3. The Dependence of Cloud Property Trend Detection on Absolute Calibration Accuracy of Passive Satellite Sensors

    NASA Astrophysics Data System (ADS)

    Shea, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Zelinka, M. D.

    2016-12-01

    Detecting trends in climate variables on global, decadal scales requires highly accurate, stable measurements and retrieval algorithms. Trend uncertainty depends on its magnitude, natural variability, and instrument and retrieval algorithm accuracy and stability. We applied a climate accuracy framework to quantify the impact of absolute calibration on cloud property trend uncertainty. The cloud properties studied were cloud fraction, effective temperature, optical thickness, and effective radius retrieved using the Clouds and the Earth's Radiant Energy System (CERES) Cloud Property Retrieval System, which uses Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Modeling experiments from the fifth phase of the Climate Model Intercomparison Project (CMIP5) agree that net cloud feedback is likely positive but disagree regarding its magnitude, mainly due to uncertainty in shortwave cloud feedback. With the climate accuracy framework we determined the time to detect trends for instruments with various calibration accuracies. We estimated a relationship between cloud property trend uncertainty, cloud feedback, and Equilibrium Climate Sensitivity, and also between effective radius trend uncertainty and aerosol indirect effect trends. The direct relationship between instrument accuracy requirements and climate model output provides the level of instrument absolute accuracy needed to reduce climate model projection uncertainty. Different cloud types have varied radiative impacts on the climate system depending on several attributes, such as their thermodynamic phase, altitude, and optical thickness. Therefore, we also conducted these studies by cloud type for a clearer understanding of the instrument accuracy requirements needed to detect changes in their cloud properties.
Combining this information with the radiative impact of different cloud types helps to prioritize among requirements for future satellite sensors and understanding the climate detection capabilities of existing sensors.
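The "time to detect trends" calculation can be sketched with a commonly used approximation from Weatherhead et al. (1998); this stands in for, and is not necessarily identical to, the climate accuracy framework used in the abstract, and the input values are illustrative:

```python
import numpy as np

def years_to_detect(trend_per_year, noise_std, lag1_autocorr):
    # Approximate record length (years) needed to detect a linear trend
    # at the 95% level with ~90% power (Weatherhead et al., 1998);
    # larger natural variability or autocorrelation lengthens detection.
    factor = (3.3 * noise_std / abs(trend_per_year)) * np.sqrt(
        (1 + lag1_autocorr) / (1 - lag1_autocorr))
    return factor ** (2.0 / 3.0)

# Adding calibration uncertainty in quadrature to natural variability
# lengthens the detection time, the effect the abstract quantifies.
accurate = years_to_detect(0.2, 1.0, 0.2)
degraded = years_to_detect(0.2, np.hypot(1.0, 1.0), 0.2)
print(accurate, degraded)
```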

  4. Comparison of cloud optical depth and cloud mask applying BRDF model-based background surface reflectance

    NASA Astrophysics Data System (ADS)

    Kim, H. W.; Yeom, J. M.; Woo, S. H.

    2017-12-01

    Over thin cloud regions, a satellite simultaneously detects the reflectance from the thin cloud and from the land surface. Since this mixed reflectance is not the exact cloud information, the background surface reflectance should be eliminated to accurately distinguish thin clouds such as cirrus. In previous research, Kim et al. (2017) developed a cloud-masking algorithm for the Geostationary Ocean Color Imager (GOCI), one of the instruments on the Communication, Ocean and Meteorological Satellite (COMS). Although GOCI has only 8 spectral channels covering visible and near-infrared ranges, the cloud masking yields quantitatively reasonable results when compared with the MODIS cloud mask (Collection 6 MYD35). In particular, validation with Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data showed that the algorithm is especially effective for thin-cloud detection, because the method concentrates on eliminating background surface effects from the top-of-atmosphere (TOA) reflectance. By applying the difference between the TOA reflectance and the bi-directional reflectance distribution function (BRDF) model-based background surface reflectance, both thick and thin clouds can be discriminated without the infrared channels usually used for cloud detection. Moreover, when the cloud-mask result is used as input for the BRDF model simulation and the optimized BRDF model-based surface reflectance is in turn used for cloud masking, the probability of detection (POD) is higher than that of the original cloud mask. In this study, we examine the correlation between cloud optical depth (COD) and the cloud-mask result. Cloud optical depth depends mostly on cloud thickness and on the nature and size of the cloud contents; COD ranges from less than 0.1 for thin clouds to over 1000 for large cumulus, due to scattering by droplets. With the cloud optical depth from CALIPSO, the cloud-masking result can be further improved, since cloud depth can then be taken into account. To validate the cloud mask and the correlation result, an atmospheric retrieval will be computed to compare the difference between the TOA reflectance and the simulated surface reflectance.
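The masking principle, thresholding the excess of observed TOA reflectance over the BRDF-modelled clear-sky background, can be sketched as follows; both threshold values and the sample reflectances are illustrative, not the paper's:

```python
import numpy as np

def cloud_mask_from_brdf(toa, brdf_background, thin_thr=0.03, thick_thr=0.15):
    # Classify pixels by how much the observed TOA reflectance exceeds
    # the BRDF model-based clear-sky background reflectance.
    diff = toa - brdf_background
    mask = np.zeros(diff.shape, dtype=np.uint8)   # 0 = clear
    mask[diff > thin_thr] = 1                     # 1 = thin cloud
    mask[diff > thick_thr] = 2                    # 2 = thick cloud
    return mask

toa = np.array([0.10, 0.15, 0.40])
background = np.array([0.09, 0.10, 0.10])
print(cloud_mask_from_brdf(toa, background))  # [0 1 2]
```

Because the background term carries the surface's angular reflectance behavior, the residual difference isolates the cloud signal without infrared channels.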

  5. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    DOE PAGES

    Ahn, M. H.; Han, D.; Won, H. Y.; ...

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the presence of cloud in the measured data. Here, we introduce a simple and fast cloud detection algorithm that uses the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated cloud-free only when both the spectral and temporal tests indicate cloud-free conditions. Overall, most thick and uniform clouds are successfully detected by the spectral test, while broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, comparable with or better than those of previous similar approaches. Two thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different probabilities of detection for different cloud-base altitudes: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
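    As a rough sketch (not the authors' implementation), the two-step test could look like the following; the empirical coefficients and both thresholds are hypothetical placeholders, and the paper's temporal threshold is dynamic rather than fixed:

```python
import statistics

def clear_sky_tb(t_air_k, vapor_pressure_hpa, a=0.7, b=1.5, c=-60.0):
    # Hypothetical linear fit; the real coefficients come from a
    # regression against clear-sky observations.
    return a * t_air_k + b * vapor_pressure_hpa + c

def is_cloud_free(tb_series, t_air_k, vapor_pressure_hpa,
                  spectral_margin=10.0, var_threshold=0.5):
    """Spectral test: mean Tb close to the predicted clear-sky Tb.
    Temporal test: one-minute Tb variability below a clear-sky threshold.
    Designated cloud-free only when both tests pass."""
    spectral_ok = abs(statistics.mean(tb_series)
                      - clear_sky_tb(t_air_k, vapor_pressure_hpa)) < spectral_margin
    temporal_ok = statistics.pstdev(tb_series) < var_threshold
    return spectral_ok and temporal_ok

# Steady Tb near the clear-sky prediction -> cloud-free;
# warm, fluctuating Tb -> cloudy.
clear = is_cloud_free([151.0, 151.2, 150.9], 280.0, 10.0)
cloudy = is_cloud_free([270.0, 268.0, 272.0], 280.0, 10.0)
```

    The split mirrors the abstract: the spectral test catches thick, uniform clouds (large mean Tb departure), while the temporal test catches broken, fast-varying clouds (large one-minute variability).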

  6. Comparison of the MODIS Multilayer Cloud Detection and Thermodynamic Phase Products with CALIPSO and CloudSat

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Gala; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.

    2008-01-01

    CALIPSO and CloudSat, launched in April 2006, provide global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort of the MODIS Atmosphere Team, referred to as the "Collection 5" stream, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information derived from a combination of solar and thermal emission bands to discriminate layers of different phases, as well as true layer-separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of the 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at 1 km resolution. Using pixel-level collocations of MODIS Aqua, CALIOP, and CloudSat radar measurements, we investigate the global performance of the thermodynamic phase and multilayer cloud detection algorithms.

  7. A Chain of Modeling Tools For Gas and Aqueous Phase Chemistry

    NASA Astrophysics Data System (ADS)

    Audiffren, N.; Djouad, R.; Sportisse, B.

    Atmospheric chemistry is characterized by the use of large sets of chemical species and reactions. Handling the data required for the definition of such a model is a quite difficult task. We present in this short article a preprocessor for diphasic models (gas phase and aqueous phase in cloud droplets) named SPACK. The main interest of SPACK is the automatic generation of lumped species related to fast equilibria. We also developed a tangent linear model using the automatic differentiation tool ODYSSEE in order to perform a sensitivity analysis of an atmospheric multiphase mechanism based on the RADM2 kinetic scheme. Local sensitivity coefficients are computed for two different scenarios. We focus in this study on the sensitivity of the ozone/NOx/HOx system with respect to some aqueous-phase reactions, and we investigate the influence of the reduction in photolysis rates in the area below the cloud region.

  8. Cafe: A Generic Configurable Customizable Composite Cloud Application Framework

    NASA Astrophysics Data System (ADS)

    Mietzner, Ralph; Unger, Tobias; Leymann, Frank

    In this paper we present Cafe (Composite Application Framework) an approach to describe configurable composite service-oriented applications and to automatically provision them across different providers. Cafe enables independent software vendors to describe their composite service-oriented applications and the components that are used to assemble them. Components can be internal to the application or external and can be deployed in any of the delivery models present in the cloud. The components are annotated with requirements for the infrastructure they later need to be run on. Providers on the other hand advertise their infrastructure services by describing them as infrastructure capabilities. The separation of software vendors and providers enables end users and providers to follow a best-of-breed strategy by combining arbitrary applications with arbitrary providers. We show how such applications can be automatically provisioned and present an architecture and a prototype that implements the concepts.

  9. MR-based detection of individual histotripsy bubble clouds formed in tissues and phantoms.

    PubMed

    Allen, Steven P; Hernandez-Garcia, Luis; Cain, Charles A; Hall, Timothy L

    2016-11-01

    To demonstrate that MR sequences can detect individual histotripsy bubble clouds formed inside intact tissues. A line-scan and an EPI sequence were sensitized to histotripsy by inserting a bipolar gradient whose lobes bracketed the lifespan of a histotripsy bubble cloud. Using a 7 Tesla, small-bore scanner, these sequences monitored histotripsy clouds formed in an agar phantom and in vitro porcine liver and brain. The bipolar gradients were adjusted to apply phase with k-space frequencies of 10, 300, or 400 cm^-1. Acoustic pressure amplitude was also varied. Cavitation was simultaneously monitored using a passive cavitation detection system. Each image captured local signal loss specific to an individual bubble cloud. In the agar phantom, this signal loss appeared only when the transducer output exceeded the cavitation threshold pressure. In tissues, bubble clouds were immediately detected when the gradients created phase with k-space frequencies of 300 and 400 cm^-1. When the gradients created phase with a k-space frequency of 10 cm^-1, individual bubble clouds were not detectable until many acoustic pulses had been applied to the tissue. Cavitation-sensitive MR sequences can detect single histotripsy bubble clouds formed in biologic tissue. Detection is influenced by the sensitizing gradients and treatment history. Magn Reson Med 76:1486-1493, 2016. © 2015 International Society for Magnetic Resonance in Medicine.

  10. a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless-car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped into one category in the first classification step because of their mutual similarity relative to trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first step, is then implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
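    A toy sketch of the two-step idea; the nearest-centroid classifier and the 3-bin "histograms" below are illustrative stand-ins for the paper's trained bag-of-features pipeline over point feature histograms:

```python
import numpy as np

def nearest_centroid(feature, centroids):
    """Label a feature by its closest class centroid (a stand-in for a
    trained classifier over bag-of-features histograms)."""
    labels = list(centroids)
    dists = [np.linalg.norm(feature - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

def two_step_classify(feature, coarse_centroids, fine_centroids):
    # Step 1: coarse classes, with the similar pole-like objects merged.
    label = nearest_centroid(feature, coarse_centroids)
    if label != "pole-like":
        return label
    # Step 2: a finer model separates only the pole-like objects.
    return nearest_centroid(feature, fine_centroids)

# Hypothetical 3-bin feature histograms for each class centroid.
coarse = {"pole-like": np.array([0.8, 0.1, 0.1]),
          "tree":      np.array([0.1, 0.8, 0.1]),
          "vehicle":   np.array([0.1, 0.1, 0.8])}
fine = {"lamp post":    np.array([0.9, 0.05, 0.05]),
        "street light": np.array([0.7, 0.2, 0.1]),
        "traffic sign": np.array([0.75, 0.05, 0.2])}

label = two_step_classify(np.array([0.85, 0.1, 0.05]), coarse, fine)
```

    The design point is that the second classifier only ever sees pole-like objects, so it can specialize in the fine distinctions the coarse classifier would blur.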

  11. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. First, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM); the initial EoPs of the panoramic images are obtained at the same time. Second, vehicles in the panoramic images are extracted by Faster R-CNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. The method was assessed on two challenging urban scenes; the final registration errors of both scenes were less than three pixels, demonstrating a high level of automation, robustness and accuracy.

  12. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm, SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow), to identify and classify clouds and cloud shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison with Fmask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.

  13. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of the laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, which each contain the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, and later image processing tools adapted to voxel structures allows the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  14. AATSR Based Volcanic Ash Plume Top Height Estimation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Sundstrom, Anu-Maija; Rodriguez, Edith; de Leeuw, Gerrit

    2015-11-01

    The AATSR Correlation Method (ACM) height estimation algorithm is presented. The algorithm uses Advanced Along-Track Scanning Radiometer (AATSR) satellite data to detect volcanic ash plumes and to estimate the plume top height. The height estimate is based on the stereo-viewing capability of the AATSR instrument, which allows determination of the parallax between the satellite's nadir and 55° forward views, and thus the corresponding height. AATSR provides an advantage over other stereo-view satellite instruments: with AATSR it is possible to detect ash plumes using the brightness temperature difference between thermal infrared (TIR) channels centered at 11 and 12 μm. The automatic ash detection makes the algorithm efficient in processing large quantities of data: the height estimate is calculated only for ash-flagged pixels. Besides ash plumes, the algorithm can be applied to any elevated feature with sufficient contrast to the background, such as smoke and dust plumes and clouds. The ACM algorithm can also be applied to the Sea and Land Surface Temperature Radiometer (SLSTR), scheduled for launch at the end of 2015.
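    A first-order sketch of the parallax-to-height conversion, under simplified flat-Earth geometry; this is an illustration of the stereo principle, not ACM's actual formulation:

```python
import math

def plume_height_from_parallax(parallax_m, forward_view_angle_deg=55.0):
    """A feature at height h appears displaced along-track by roughly
    h * tan(theta) between the nadir and forward views, so
    h ~= parallax / tan(theta).  Ignores Earth curvature and the
    variation of view angle across the swath (first-order sketch only)."""
    return parallax_m / math.tan(math.radians(forward_view_angle_deg))

# A 14 km apparent along-track displacement implies a plume top
# near 10 km for the 55-degree forward view.
height_m = plume_height_from_parallax(14000.0)
```

    In practice the parallax itself must first be measured by correlating the nadir and forward images, which is where the "Correlation" in ACM comes from.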

  15. Evaluation of Passive Multilayer Cloud Detection Using Preliminary CloudSat and CALIPSO Cloud Profiles

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.

    2006-12-01

    During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors, such as cloud radars and lidars, that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) team has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.

  16. Evaluation of Passive Multilayer Cloud Detection Using Preliminary CloudSat and CALIPSO Cloud Profiles

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.

    2005-05-01

    During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors, such as cloud radars and lidars, that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) team has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.

  17. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
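    The in/out state change subject to a hysteresis constraint might be sketched as follows; the hysteresis length is a hypothetical sample count, not the paper's value:

```python
def cloud_edges(in_cloud_flags, hysteresis=3):
    """Return indices where the in/out-of-cloud state changes, ignoring
    runs shorter than `hysteresis` samples (transient cloud puffs or
    data dropouts)."""
    edges, state, run_start = [], in_cloud_flags[0], 0
    for i in range(1, len(in_cloud_flags) + 1):
        # close the current run of identical flags at a change or at the end
        if i == len(in_cloud_flags) or in_cloud_flags[i] != in_cloud_flags[i - 1]:
            run_flag = in_cloud_flags[run_start]
            if i - run_start >= hysteresis and run_flag != state:
                state = run_flag
                edges.append(run_start)  # this state change survives hysteresis
            run_start = i
    return edges

# The 1-sample puff at index 3 and the 2-sample gap at indices 4-5 are
# ignored; genuine edges are found at indices 6 (entry) and 10 (exit).
edges = cloud_edges([0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0])
```

    Without the hysteresis constraint, every isolated puff or dropout would be reported as a pair of spurious cloud boundaries.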

  18. Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Wilson, Michael J.; Oreopoulos, Lazarous

    2011-01-01

    The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are made fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments of the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation passes through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities compared to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with the more limited arsenal of channels. Since there are no Landsat data yet that include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme outperforms the earlier one, which was not very good at thin cloud detection.

  19. 46 CFR 161.002-2 - Types of fire-protective systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...

  20. 46 CFR 161.002-2 - Types of fire-protective systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...

  1. Efficient hybrid monocular-stereo approach to on-board video-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo

    2012-01-01

    In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at single camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.

  2. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse-coupled neural networks. The presented approach eliminates the need to set the non-adaptive WMM filter parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse-coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  3. Introduction to SNPP/VIIRS Flood Mapping Software Version 1.0

    NASA Astrophysics Data System (ADS)

    Li, S.; Sun, D.; Goldberg, M.; Sjoberg, W.; Santek, D.; Hoffman, J.

    2017-12-01

    Near real-time satellite-derived flood maps are invaluable to river forecasters and decision-makers for disaster monitoring and relief efforts. With support from the JPSS (Joint Polar Satellite System) Proving Ground and Risk Reduction (PGRR) Program, flood detection software has been developed using Suomi-NPP/VIIRS (Suomi National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite) imagery to automatically generate near real-time flood maps for National Weather Service (NWS) River Forecast Centers (RFC) in the USA. The software, which is called VIIRS NOAA GMU Flood Version 1.0 (hereafter referred to as VNG Flood V1.0), consists of a series of algorithms that include water detection, cloud shadow removal, terrain shadow removal, minor flood detection, water fraction retrieval, and floodwater determination. The software is designed for flood detection in any land region between 80°S and 80°N, and it has been running routinely with direct broadcast SNPP/VIIRS data at the Space Science and Engineering Center at the University of Wisconsin-Madison (UW/SSEC) and the Geographic Information Network of Alaska at the University of Alaska-Fairbanks (UAF/GINA) since 2014. Near real-time flood maps are distributed via the Unidata Local Data Manager (LDM), reviewed by river forecasters in AWIPS-II (the second generation of the Advanced Weather Interactive Processing System) and applied in flood operations. Initial feedback from operational forecasters on the product accuracy and performance has been largely positive. The software capability has also been extended to areas outside of the USA via a case-driven mode to detect major floods all over the world. Offline validation efforts include the visual inspection of over 10,000 VIIRS false-color composite images, an inter-comparison with MODIS automatic flood products and a quantitative evaluation using Landsat imagery. The steady performance from the 3-year routine process and the promising validation results indicate that VNG Flood V1.0 has a high feasibility for flood detection at the product level.

  4. Automatic detection of a hand-held needle in ultrasound via phase-based analysis of the tremor motion

    NASA Astrophysics Data System (ADS)

    Beigi, Parmida; Salcudean, Septimiu E.; Rohling, Robert; Ng, Gary C.

    2016-03-01

    This paper presents an automatic localization method for a standard hand-held needle in ultrasound based on temporal motion analysis of spatially decomposed data. Subtle displacement arising from tremor motion has a periodic pattern which is usually imperceptible in the intensity image but may convey information in the phase image. Our method aims to detect such periodic motion of a hand-held needle and to distinguish it from intrinsic tissue motion, using a technique inspired by video magnification. Complex steerable pyramids allow specific design of the wavelets' orientations according to the insertion angle, as well as measurement of the local phase. We therefore use steerable pairs of even and odd Gabor wavelets to decompose the ultrasound B-mode sequence into various spatial frequency bands. Variations of the local phase measurements in the spatially decomposed input data are then temporally analyzed using a finite impulse response bandpass filter to detect regions with a tremor motion pattern. Results obtained from different pyramid levels are then combined and thresholded to generate the binary mask input for the Hough transform, which estimates the direction angle and discards some of the outliers. Polynomial fitting is used at the final stage to remove any remaining outliers and improve the trajectory detection. The detected needle is finally added back to the input sequence as an overlay of a cloud of points. We demonstrate the efficiency of our approach in detecting the needle from subtle tremor motion in an agar phantom and in in-vivo porcine cases where intrinsic motion is also present. The localization accuracy, calculated by comparison with expert manual segmentation, is presented as (mean, standard deviation, root-mean-square error): (0.93°, 1.26°, 0.87°) for the trajectory and (1.53 mm, 1.02 mm, 1.82 mm) for the tip.

  5. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.

  6. Improvements to GOES Twilight Cloud Detection over the ARM SGP

    NASA Technical Reports Server (NTRS)

    Yost, c. R.; Trepte, Q.; Khaiyer, M. M.; Palikonda, R.; Nguyen, L.

    2007-01-01

    The current ARM satellite cloud products derived from Geostationary Operational Environmental Satellite (GOES) data provide continuous coverage of many cloud properties over the ARM Southern Great Plains (SGP) domain. However, discontinuities occur during daylight near the terminator, a time period referred to here as twilight. This poster presentation demonstrates the improvements in cloud detection provided by an improved cloud mask algorithm, as well as validation of retrieved cloud properties using surface observations from the ARM SGP site.

  7. Point Clouds to Indoor/outdoor Accessibility Diagnosis

    NASA Astrophysics Data System (ADS)

    Balado, J.; Díaz-Vilariño, L.; Arias, P.; Garrido, I.

    2017-09-01

    This work presents an approach to automatically detect structural floor elements, such as steps or ramps, in the immediate environment of buildings; such elements may affect accessibility to buildings. The methodology is based on Mobile Laser Scanner (MLS) point cloud and trajectory information. First, the street is segmented into stretches along the trajectory of the MLS to work in regular spaces. Next, the lower region of each stretch (the ground zone) is selected as the region of interest (ROI), and the normal, curvature and tilt are calculated for each point. With this information, points in the ROI are classified as horizontal, inclined or vertical. Points are refined and grouped into structural elements using raster processing and connected components, in separate phases for each type of previously classified point. Finally, the trajectory data is used to distinguish between road and sidewalks, and adjacency information is used to classify structural elements into steps, ramps, curbs and curb-ramps. The methodology is tested on a real case study consisting of 100 m of an urban street. Ground elements are correctly classified in an acceptable computation time. Steps and ramps are also exported to GIS software to enrich building models from OpenStreetMap with information about accessible/inaccessible entrances and their locations.
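    The horizontal/inclined/vertical split from per-point normals might be sketched as follows; the tilt tolerance is a hypothetical value, and the normals are assumed to be unit length:

```python
import numpy as np

def classify_by_normal(normals, tilt_tol_deg=15.0):
    """Label each point 'horizontal', 'vertical' or 'inclined' from the
    angle between its (unit) normal and the vertical axis."""
    up = np.array([0.0, 0.0, 1.0])
    # tilt = angle between the normal and the vertical, in degrees
    tilt = np.degrees(np.arccos(np.clip(np.abs(normals @ up), 0.0, 1.0)))
    return np.where(tilt < tilt_tol_deg, "horizontal",
                    np.where(tilt > 90.0 - tilt_tol_deg, "vertical", "inclined"))

normals = np.array([[0.0, 0.0, 1.0],     # flat ground  -> horizontal
                    [1.0, 0.0, 0.0],     # curb face    -> vertical
                    [0.0, 0.5, 0.866]])  # ~30 deg ramp -> inclined
labels = classify_by_normal(normals)
```

    Taking the absolute value of the dot product makes the test insensitive to normal orientation (up- versus down-pointing normals).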

  8. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all recorded earthquakes had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are monitored continuously and efficiently.

  9. Mapping Snow Grain Size over Greenland from MODIS

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Tedesco, Marco; Wang, Yujie; Kokhanovsky, Alexander

    2008-01-01

    This paper presents a new automatic algorithm to derive optical snow grain size (SGS) at 1 km resolution using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Unlike previous approaches, snow grains are not assumed to be spherical; instead, a fractal approach is used to account for their irregular shape. The retrieval is conceptually based on an analytical asymptotic radiative transfer model which predicts spectral bidirectional snow reflectance as a function of the grain size and ice absorption. The analytical form of the solution leads to an explicit and fast retrieval algorithm. The time series analysis of derived SGS shows a good sensitivity to snow metamorphism, including melting and snow precipitation events. Preprocessing is performed by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, which includes gridding MODIS data to 1 km resolution, water vapor retrieval, cloud masking and an atmospheric correction. The MAIAC cloud mask (CM) is a new algorithm based on a time series of gridded MODIS measurements and on image-based rather than pixel-based processing. Extensive processing of MODIS TERRA data over Greenland shows a robust performance of the CM algorithm in discriminating clouds over bright snow and ice. As part of the validation analysis, SGS derived from MODIS over selected sites in 2004 was compared to the microwave brightness temperature measurements of the SSM/I radiometer, which is sensitive to the amount of liquid water in the snowpack. The comparison showed a good qualitative agreement, with both datasets detecting two main periods of snowmelt. Additionally, MODIS SGS was compared with predictions of the snow model CROCUS driven by measurements of the automatic weather stations of the Greenland Climate Network. We found that CROCUS grain size is on average a factor of two larger than MODIS-derived SGS.
Overall, the agreement between CROCUS and MODIS results was satisfactory, in particular before and during the first melting period in mid-June. Following detailed time series analysis of SGS for four permanent sites, the paper presents SGS maps over the Greenland ice sheet for the March-September period of 2004.

  10. Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.

    2010-01-01

    The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, and the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS), and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km x 24 km at nadir) and at the 5 km x 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approximately 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds.
We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
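    The core idea above, contrasting a thermal cloud-top pressure with an optical centroid pressure, can be illustrated with a toy classifier. The decision rule, the use of optical thickness to separate the two cloudy cases, and all thresholds below are assumptions for illustration only, not the paper's actual algorithm.

```python
def classify_cloud_column(p_top_hpa, p_centroid_hpa, tau,
                          dp_thresh=200.0, tau_thresh=5.0):
    """Toy three-way classification inspired by the A-train approach.

    A large gap between the optical centroid pressure (sensitive to the
    photon path through the whole column) and the thermal cloud-top
    pressure suggests cloud matter well below the top. Here, optically
    thick columns with a large gap are called "vertically-extended" and
    thinner ones "multi-layer". All thresholds are placeholders.
    """
    dp = p_centroid_hpa - p_top_hpa  # positive when centroid is far below the top
    if dp < dp_thresh:
        return "single-layer"
    return "vertically-extended" if tau >= tau_thresh else "multi-layer"
```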

  11. Improved Thin Cirrus and Terminator Cloud Detection in CERES Cloud Mask

    NASA Technical Reports Server (NTRS)

    Trepte, Qing; Minnis, Patrick; Palikonda, Rabindra; Spangenberg, Doug; Haeffelin, Martial

    2006-01-01

    Thin cirrus clouds account for about 20-30% of the total cloud coverage and affect the global radiation budget by increasing the Earth's albedo and reducing infrared emissions. Thin cirrus, however, are often underestimated by traditional satellite cloud detection algorithms. This difficulty is caused by the lack of spectral contrast between optically thin cirrus and the surface in techniques that use visible (0.65 micron) and infrared (11 micron) channels. In the Clouds and the Earth's Radiant Energy System (CERES) Aqua Edition 1 (AEd1) and Terra Edition 3 (TEd3) Cloud Masks, thin cirrus detection is significantly improved over both land and ocean using a technique that combines MODIS high-resolution measurements from the 1.38 and 11 micron channels and brightness temperature differences (BTDs) of the 11-12, 8.5-11, and 3.7-11 micron channels. To account for humidity and view angle dependencies, empirical relationships were derived with observations from the 1.38 micron reflectance and the 11-12 and 8.5-11 micron BTDs using 70 granules of MODIS data in 2002 and 2003. Another challenge in global cloud detection algorithms occurs near the day/night terminator, where information from the visible 0.65 micron channel and the estimated solar component of the 3.7 micron channel becomes less reliable. As a result, clouds are often underestimated or misidentified near the terminator over land and ocean. Comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer [AVHRR]) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60 N indicate significant amounts of missing clouds from CLAVR-x, because this part of the world was near the day/night terminator as viewed by AVHRR. Comparisons between MODIS cloud products (MOD06) and GLAS in the same region show similar difficulties with MODIS cloud retrievals. Consistent detection of clouds throughout the day is needed to provide reliable cloud and radiation products for CERES and other research efforts involving the modeling of clouds and their interaction with the radiation budget.
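    The combined 1.38 micron reflectance and split-window BTD test can be sketched as a simple per-pixel mask. The thresholds below are hypothetical placeholders; as the abstract notes, the CERES masks vary them empirically with humidity and view angle.

```python
import numpy as np

def thin_cirrus_mask(refl_138, bt11, bt12, r138_thresh=0.02, btd_thresh=0.5):
    """Flag likely thin-cirrus pixels (illustrative thresholds only).

    Combines the 1.38 micron reflectance test (strong water-vapour
    absorption below cirrus level leaves mostly high-cloud signal) with
    the split-window 11-12 micron brightness-temperature difference,
    which grows for optically thin ice cloud.
    """
    btd = np.asarray(bt11, float) - np.asarray(bt12, float)
    return (np.asarray(refl_138, float) > r138_thresh) & (btd > btd_thresh)
```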

  12. Emergency navigation without an infrastructure.

    PubMed

    Gelenbe, Erol; Bi, Huibo

    2014-08-18

    Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process.

  13. Emergency Navigation without an Infrastructure

    PubMed Central

    Gelenbe, Erol; Bi, Huibo

    2014-01-01

    Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process. PMID:25196014

  14. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities is important for deformation analysis and the evaluation of rock mass stability. Measurements in high and steep zones are difficult to obtain with the traditional compass method, so photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The point cloud is then segmented into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for that subset. A cluster analysis (each cluster being a point subset that satisfies certain conditions) is performed on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of rock discontinuities from a 3D point cloud. A comparison with existing software shows that the method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
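    The FCM clustering of patch normals can be illustrated with a compact, numpy-only fuzzy c-means. This is plain FCM on Euclidean coordinates; the firefly-algorithm initialization used in the paper is omitted, and the parameters are generic defaults.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means for grouping planar-patch normal vectors.

    Returns (cluster centres, membership matrix U of shape [n, c]).
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                             # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # Squared distances of every point to every centre (+eps avoids /0).
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))         # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U
```

    Hard discontinuity-set labels can be obtained by taking the argmax over each row of U.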

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.
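    A linear stand-in for such an overhead reference model can be fit by least squares. The abstract does not state the actual functional form of the FermiCloud model, so the features (CPU and I/O utilization at launch time) and the linear form below are assumptions for illustration.

```python
import numpy as np

def fit_overhead_model(cpu_util, io_util, overhead_s):
    """Least-squares fit of o = a + b*cpu + c*io to observed launch overheads."""
    A = np.column_stack([np.ones_like(cpu_util), cpu_util, io_util])
    coeffs, *_ = np.linalg.lstsq(A, overhead_s, rcond=None)
    return coeffs

def pick_host(coeffs, host_utils):
    """Index of the host with the smallest predicted launch overhead."""
    a, b, c = coeffs
    return int(np.argmin([a + b * cpu + c * io for cpu, io in host_utils]))
```

    A scheduler could use such a model to decide where to launch the next VM during a burst.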

  16. Validation of Satellite-Based Objective Overshooting Cloud-Top Detection Methods Using CloudSat Cloud Profiling Radar Observations

    NASA Technical Reports Server (NTRS)

    Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne

    2012-01-01

    Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-μm infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above both the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and a 16% (81%) false-alarm ratio. Case study examples show that WV-IRW BTD > 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
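    The IRW-texture idea, a locally cold pixel embedded in a warmer anvil, can be sketched as follows. The window size and thresholds are illustrative placeholders, not the published values, and the published method adds further BT thresholds and anvil-sampling logic omitted here.

```python
import numpy as np

def detect_overshooting_tops(bt11, trop_temp, anvil_delta=6.0, radius=2):
    """Simplified IRW-texture-style OT candidate finder (illustrative).

    Flags pixels colder than the tropopause temperature and at least
    `anvil_delta` K colder than the mean of their surrounding window,
    mimicking the "cold spot embedded in a warmer anvil" signature.
    """
    bt11 = np.asarray(bt11, dtype=float)
    rows, cols = bt11.shape
    mask = np.zeros_like(bt11, dtype=bool)
    for i in range(radius, rows - radius):
        for j in range(radius, cols - radius):
            win = bt11[i - radius:i + radius + 1, j - radius:j + radius + 1]
            # Mean of the surrounding pixels, excluding the centre itself.
            ring_mean = (win.sum() - bt11[i, j]) / (win.size - 1)
            mask[i, j] = (bt11[i, j] < trop_temp and
                          ring_mean - bt11[i, j] >= anvil_delta)
    return mask
```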

  17. Representation and Reconstruction of Triangular Irregular Networks with Vertical Walls

    NASA Astrophysics Data System (ADS)

    Gorte, B.; Lesparre, J.

    2012-06-01

    Point clouds obtained by aerial laser scanning are a convenient input source for high-resolution 2.5D elevation models, such as the Dutch AHN-2. More challenging is the fully automatic reconstruction of 3D city models. An actual demand for a combined 2.5D terrain and 3D city model for an urban hydrology application led to the design of an extension to the well-known Delaunay triangulated irregular networks (TINs) to accommodate vertical walls. In addition, we introduce methods to generate and refine models adhering to our data structure. These are based on combining two approaches: a representation of the TIN using stars of vertices and triangles, together with segmenting the TIN on the basis of coplanarity of adjacent triangles. The approach is intended to deliver the complete model, including walls at the correct locations, without relying on additional map data, as these often lack completeness, actuality and accuracy, and moreover frequently do not account for parts of facades that do not go down to street level. However, automatic detection of height discontinuities to obtain the exact location of the walls is currently still under implementation.

  18. Application of Mls Data to the Assessment of Safety-Related Features in the Surrounding Area of Automatically Detected Pedestrian Crossings

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Sánchez-Rodríguez, A.; González-deSantos, L. M.

    2018-05-01

    During the last few years, there has been huge methodological development regarding the automatic processing of 3D point cloud data acquired by both terrestrial and aerial mobile mapping systems, motivated by the improvement of surveying technologies and hardware performance. This paper presents a methodology that first extracts geometric and semantic information regarding the road markings within the surveyed area from Mobile Laser Scanning (MLS) data, and then employs it to isolate street areas where pedestrian crossings are found and where, therefore, pedestrians are more likely to cross the road. Different safety-related features can then be extracted in order to offer information about the adequacy of the pedestrian crossing with regard to its safety, which can be displayed in a Geographical Information System (GIS) layer. These features are defined in four different processing modules: accessibility analysis, traffic lights classification, traffic signs classification, and visibility analysis. The validation of the proposed methodology has been carried out in two different cities in the northwest of Spain, obtaining both quantitative and qualitative results for pedestrian crossing classification and for each processing module of the safety assessment on pedestrian crossing environments.

  19. Automated, per pixel Cloud Detection from High-Resolution VNIR Data

    NASA Technical Reports Server (NTRS)

    Varlyguin, Dmitry L.

    2007-01-01

    CASA is a fully automated software program for the per-pixel detection of clouds and cloud shadows from medium- (e.g., Landsat, SPOT, AWiFS) and high- (e.g., IKONOS, QuickBird, OrbView) resolution imagery without the use of thermal data. CASA is an object-based feature extraction program which utilizes a complex combination of spectral, spatial, and contextual information available in the imagery and the hierarchical self-learning logic for accurate detection of clouds and their shadows.

  20. Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.

    PubMed

    Tran, Ngoc Tam L; Huang, Chun-Hsi

    2017-05-01

    We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool (version 2.0), which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable thanks to the expandable computing resources of AWS.

  1. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied for matching the corresponding patches. From all the planar patches of a scan, we randomly select sets of three planar surfaces and build a coordinate frame from their normal vectors and intersection points. The transformation parameters between scans are calculated from two such coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets, and is much more efficient than the classical baseline methods.
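    The frame-from-plane-normals rotation estimate at the heart of such a method can be sketched for a single matched candidate set. This sketch recovers only the rotation from two matched, non-parallel normals; the translation (from the planes' intersection point), the RANSAC candidate sampling and the voxel step are omitted.

```python
import numpy as np

def frame_from_normals(n1, n2):
    """Orthonormal frame (columns x, y, z) from two non-parallel plane normals."""
    x = np.asarray(n1, float) / np.linalg.norm(n1)
    z = np.cross(x, np.asarray(n2, float))
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def rotation_between_scans(src_normals, dst_normals):
    """Rotation aligning the source scan to the destination scan from two
    matched planar-patch normals."""
    Fs = frame_from_normals(*src_normals)
    Fd = frame_from_normals(*dst_normals)
    # Both frames are orthonormal, so the inverse of Fs is its transpose.
    return Fd @ Fs.T
```

    In a RANSAC loop, the rotation (and translation) from each sampled candidate set would be scored by the number of coplanar patch pairs it produces.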

  2. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities for classifying the ALS point cloud. Currently, FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs but excludes grassland and herbage. In the applied procedure, FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order of surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites.
In contrast to many other algorithms, the proposed 3D point classification works directly on the original measurements, i.e. the acquired points. Gridding of the data, a process inherently coupled with loss of data and precision, is not necessary. The 3D properties provide especially good separability of building and terrain points, respectively, when they are occluded by vegetation. PMID:27873771

  3. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications, such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.

  4. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

    Pupil parameters are important parameters to evaluate the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the WebSocket protocol and the JSON format are used for communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and a LaTeX-based automatic reporting system, to support the program. The cloud-based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional software operation model (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud-based approach requires no installation and is easy to use and maintain, opening up a new way of working. Cloud-based applications may well be the future of software development.

  5. Invisible Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum, the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38 μm (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38 μm channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team

  6. Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.

    2009-05-01

    In this paper, we propose spatio-spectral processing techniques for the detection of dust storms and the automatic determination of their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band-math analysis have produced promising results but have drawbacks in producing consistent results when low signal-to-noise ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm creates a challenge in distinguishing these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery, it also attempts to find the transport direction and the location of the sources of the dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques, including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
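    One ingredient of directional analysis, estimating a dominant image orientation, can be sketched with an averaged structure tensor. This is a generic stand-in, not the paper's method: for an elongated dust plume, the dominant gradient direction points across the plume, so the transport axis would be perpendicular to the returned angle.

```python
import numpy as np

def dominant_gradient_direction(image):
    """Dominant gradient direction of an image, in degrees from the x-axis.

    Uses the spatially averaged structure tensor; the returned angle lies
    in (-90, 90].
    """
    gy, gx = np.gradient(np.asarray(image, float))
    jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    # Standard structure-tensor orientation formula.
    return 0.5 * np.degrees(np.arctan2(2.0 * jxy, jxx - jyy))
```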

  7. Tiny, Dusty, Galactic HI Clouds: The GALFA-HI Compact Cloud Catalog

    NASA Astrophysics Data System (ADS)

    Saul, Destry R.; Putman, M. E.; Peek, J. G.

    2013-01-01

    The recently published GALFA-HI Compact Cloud Catalog contains 2000 nearby neutral hydrogen clouds under 20' in angular size, detected with a machine-vision algorithm in the Galactic Arecibo L-Band Feed Array HI survey (GALFA-HI). At a distance of 1 kpc, the compact clouds would typically be 1 solar mass and 1 pc in size. We observe that nearly all of the compact clouds classified as high velocity (> 90 km/s) are near previously identified high-velocity complexes. We separate the compact clouds into populations based on velocity, linewidth, and position. We have begun to search for evidence of dust in these clouds using IRIS and have detections in several populations.

  8. DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites

    NASA Astrophysics Data System (ADS)

    Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.

    2017-12-01

    Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. Given the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The success of most existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of proper thresholds is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder architecture for binary-class pixel-wise segmentation. We trained CloudCNN on a multi-GPU Nvidia DevBox cluster and deployed the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer, achieving an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 full-disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.
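
    CloudCNN itself is an encoder-decoder network, but the pixel-wise binary output it produces can be illustrated with a much smaller stand-in: a per-pixel logistic score over multispectral band values. The band names, weights, and bias below are invented for illustration and are not the trained model.

```python
import math

def pixelwise_cloud_mask(bands, weights, bias):
    """Toy pixel-wise binary segmentation: a logistic score per pixel over
    multispectral band values. A stand-in for the *role* CloudCNN's
    encoder-decoder plays; the weights here are illustrative, not trained."""
    h, w = len(bands[0]), len(bands[0][0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            z = bias + sum(wt * band[y][x] for wt, band in zip(weights, bands))
            p = 1.0 / (1.0 + math.exp(-z))   # probability of "cloud"
            mask[y][x] = 1 if p > 0.5 else 0
    return mask

# Two 2x2 "bands": pixels that are bright in the visible and cold at the
# thermal band get flagged as cloud under these illustrative weights.
vis = [[0.9, 0.1], [0.8, 0.2]]
ir  = [[0.2, 0.9], [0.3, 0.8]]
mask = pixelwise_cloud_mask([vis, ir], weights=[6.0, -4.0], bias=-1.0)
```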

  9. Optical Algorithm for Cloud Shadow Detection Over Water

    DTIC Science & Technology

    2013-02-01

    REPORT DATE (DD-MM-YYYY): 05-02-2013. REPORT TYPE: Journal Article. TITLE AND SUBTITLE: Optical Algorithm for Cloud Shadow Detection Over Water. Abstract fragments: "... particularly over humid tropical regions. Throughout the year, about two-thirds of the Earth's surface is always covered by clouds [1]." Cited: K. V. Khlopenkov and A. P. Trishchenko, "SPARC: New cloud, snow, cloud shadow detection scheme for historical 1-km AVHRR data over Canada," J. Atmos...

  10. Improvements to the CERES Cloud Detection Algorithm using Himawari 8 Data and Validation using CALIPSO and CATS Lidar Observations

    NASA Astrophysics Data System (ADS)

    Trepte, Q.; Minnis, P.; Palikonda, R.; Yost, C. R.; Rodier, S. D.; Trepte, C. R.; McGill, M. J.

    2016-12-01

    Geostationary satellites provide continuous cloud and meteorological observations important for weather forecasting and for understanding climate processes. The Himawari-8 satellite represents a new generation of measurement capabilities with significantly improved resolution and enhanced spectral information. The satellite was launched in October 2014 by the Japan Meteorological Agency and is centered at 140° E to provide coverage over eastern Asia and the western Pacific region. A cloud detection algorithm was developed as part of the CERES Cloud Mask algorithm using the Advanced Himawari Imager (AHI), a 16-channel multispectral imager. The algorithm was originally designed for use with Meteosat Second Generation (MSG) data and has been adapted for Himawari-8 AHI measurements. This paper will describe the improvements in the Himawari cloud mask, including daytime ocean low cloud and aerosol discrimination, nighttime thin cirrus detection, and Australian desert and coastal cloud detection. Statistics from matching CERES Himawari cloud mask results with CALIPSO lidar data and with new observations from the CATS lidar will also be presented. A feature of the CATS instrument on board the International Space Station is that it samples clouds at different solar viewing times, making it possible to examine their diurnal variation and to evaluate the performance of the cloud mask for different sun angles.

  11. Observational Study and Parameterization of Aerosol-fog Interactions

    NASA Astrophysics Data System (ADS)

    Duan, J.; Guo, X.; Liu, Y.; Fang, C.; Su, Z.; Chen, Y.

    2014-12-01

    Studies have shown that human activities such as increased aerosols affect fog occurrence and properties significantly, and accurate numerical fog forecasting depends, to a large extent, on the parameterization of fog microphysics and aerosol-fog interactions. Furthermore, fogs can be considered as clouds near the ground, and offer an advantage that clouds do not: they permit comprehensive long-term in-situ measurements. Knowledge learned from studying aerosol-fog interactions will therefore provide useful insights into aerosol-cloud interactions. To serve the twofold objectives of understanding and improving parameterizations of aerosol-fog interactions and aerosol-cloud interactions, this study examines data collected from fogs, with a focus on, but not limited to, data collected in Beijing, China. Data examined include aerosol particle size distributions measured by a Passive Cavity Aerosol Spectrometer Probe (PCASP-100X), fog droplet size distributions measured by a Fog Monitor (FM-120), Cloud Condensation Nuclei (CCN), liquid water path measured by radiometers and visibility sensors, along with meteorological variables measured by a Tethered Balloon Sounding System (XLS-II) and an Automatic Weather Station (AWS). The results will be compared with low-level clouds for similarities and differences between fogs and clouds.

  12. A holistic image segmentation framework for cloud detection and extraction

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe

    2013-05-01

    Atmospheric clouds are commonly encountered phenomena affecting visual tracking from airborne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering-game-theoretic image segmentation approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use boundary and shape features to refine the cloud segments; this step lowers the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, with supportive results.
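
    Of the three dynamics named above, replicator dynamics is the simplest to sketch: a weight vector over pixels is evolved under x_i <- x_i (Ax)_i / (x'Ax) until its support concentrates on one internally coherent cluster, which is the game-equilibrium notion of a segment. The similarity matrix and tolerance below are illustrative, not from the paper.

```python
def replicator_cluster(A, steps=200):
    """Run replicator dynamics on a pairwise-similarity matrix A and
    return the indices in the support of the resulting equilibrium,
    i.e. one coherent cluster of the 'clustering game'."""
    n = len(A)
    x = [1.0 / n] * n  # start at the barycentre
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        xAx = sum(x[i] * Ax[i] for i in range(n))
        if xAx == 0:
            break
        x = [x[i] * Ax[i] / xAx for i in range(n)]  # replicator update
    return [i for i in range(n) if x[i] > 1e-6]

# Two blocks of mutually similar items; 0-2 form the tighter cluster,
# so the dynamics concentrate all weight there.
A = [[0, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 0, 0.5],
     [0, 0, 0, 0.5, 0]]
```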

  13. Automatic detection of confusion in elderly users of a web-based health instruction video.

    PubMed

    Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek

    2015-06-01

    Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
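
    For reference, the accuracy and sensitivity figures quoted above are straightforward to compute from binary predictions; a small sketch with invented example labels:

```python
def accuracy_sensitivity(predicted, actual):
    """Accuracy and sensitivity (true-positive rate) for binary labels,
    the two figures reported for the confusion classifier."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
    positives = sum(actual)
    return correct / len(actual), tp / positives if positives else 0.0

# Ten invented fragments: four genuinely confused (label 1), six not.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
acc, sens = accuracy_sensitivity(predicted, actual)
```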

  14. Automatic Thickness and Volume Estimation of Sprayed Concrete on Anchored Retaining Walls from Terrestrial LIDAR Data

    NASA Astrophysics Data System (ADS)

    Martínez-Sánchez, J.; Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2016-06-01

    When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete can be applied on the exposed surfaces immediately after excavation for shotcreting rock outcrops. In these situations, shotcrete is normally applied conjointly with rock bolts and mesh, thereby supporting the loose material that causes many of the small ground falls. On the other hand, contractors want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee structural strength, but also to avoid delivering excess material that they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground support elements are segmented based on their geometry, and they serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the sprayed concrete thickness and the volume expended on the wall. This novel methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), resulting in a rock bolt detection rate of 91%, which permits detailed thickness information to be obtained and a total concrete volume of 3597 litres to be calculated. These results verify the effectiveness of the developed approach, increasing productivity and improving on previous empirical proposals for real-time thickness estimation.
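
    The thickness-and-volume step can be sketched as a cell-by-cell difference of two co-registered depth grids. This simplified sketch assumes the scans have already been registered (via the detected rock bolts); the grid values and cell size are illustrative:

```python
def thickness_and_volume(before, after, cell_area_m2):
    """Per-cell shotcrete thickness as the difference of two co-registered
    wall-distance grids (metres), and total sprayed volume in litres.
    A simplified stand-in for comparing the pre- and post-spray clouds."""
    thick = [[b - a for b, a in zip(rb, ra)] for rb, ra in zip(before, after)]
    volume_m3 = sum(max(t, 0.0) * cell_area_m2 for row in thick for t in row)
    return thick, volume_m3 * 1000.0  # 1 m^3 = 1000 litres

# Scanner-to-wall distance shrinks where concrete was sprayed (5 cm here
# on three of four cells; each cell covers 0.25 m^2).
before = [[2.00, 2.00], [2.00, 2.00]]
after  = [[1.95, 1.95], [1.95, 2.00]]
thick, litres = thickness_and_volume(before, after, cell_area_m2=0.25)
```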

  15. Following the south polar cap recession as viewed by OMEGA/MEX using automatic detection of H2O and CO2 ices.

    NASA Astrophysics Data System (ADS)

    Schmidt, F.; Doute, S.; Schmitt, B.

    In order to understand Mars' current climate it is necessary to detect, characterize and monitor CO2 and H2O at the surface (permanent and seasonal icy deposits) and in the atmosphere (vapor and clouds). Here we focus on the South Seasonal Polar Cap (SSPC), whose recession was previously observed with different techniques: from Earth in the visible range with HST [James 1996], from the MGS spacecraft with MOC images [Benson 2005], in the thermal IR by TES [Kieffer 2000], and in the near infrared by OMEGA/MEX [Langevin submitted]. The time and space evolution of the SSPC is a major annual climatic signal at both the global and regional scales. In particular, measuring the temporal and spatial distributions of CO2 constrains exchange processes between surface and atmosphere. This exchange may involve the preponderant species H2O, CO2 and dust. In this work we apply a new detection technique, "wavanglet", to follow the recession of the SSPC in OMEGA/MEX observations. This method was developed specifically to classify huge datasets such as the OMEGA one. We propose to use "wavanglet" as a supervised automatic classification method that identifies spectral features and classifies the image into spectrally homogeneous units. Additionally, we evaluate the quantitative detection limits of "wavanglet" on a synthetic dataset simulating OMEGA spectra in typical SSPC situations. These detection limits are discussed in terms of H2O and CO2 ice abundances in order to improve the interpretation of the classification. Finally, we present the recession of the SSPC as mapped by "wavanglet" and compare the results with those of earlier investigations, interpreting the similarities and disagreements between the maps.

  16. An Automatic Procedure for Combining Digital Images and Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Moussa, W.; Abdel-Wahab, M.; Fritsch, D.

    2012-07-01

    Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods of data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Then, using the determined transformation parameters results in absolute oriented images in relation to the laser scanner data. With the resulting absolute orientations we employ robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete, detailed representation of the scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.
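
    A seven-parameter Helmert (similarity) transformation of the kind introduced above applies one scale factor, three rotations, and three translations to map one coordinate frame into another. A minimal sketch follows; the rotation order Rz·Ry·Rx is an assumption for illustration, since the abstract does not specify a convention:

```python
import math

def helmert_transform(points, scale, rx, ry, rz, tx, ty, tz):
    """Apply a seven-parameter (Helmert) similarity transformation:
    one scale, three rotation angles (radians, composed as Rz*Ry*Rx),
    and three translations. Illustrative of relating image-based and
    laser-scanner coordinate frames."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # Full rotation matrix R = Rz @ Ry @ Rx
    R = [[cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx],
         [sz*cy, sz*sy*sx + cz*cx, sz*sy*cx - cz*sx],
         [-sy,   cy*sx,            cy*cx]]
    out = []
    for x, y, z in points:
        out.append(tuple(scale * (R[i][0]*x + R[i][1]*y + R[i][2]*z) + t
                         for i, t in enumerate((tx, ty, tz))))
    return out

# 90-degree rotation about z plus a 10 m shift along x:
# (1, 0, 0) rotates to (0, 1, 0), then translates to (10, 1, 0).
pts = helmert_transform([(1.0, 0.0, 0.0)],
                        1.0, 0.0, 0.0, math.pi / 2, 10.0, 0.0, 0.0)
```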

  17. Lidar-based individual tree species classification using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, tree species are still specified manually by skilled workers. Previous works on automatic tree species classification mainly focused on aerial or satellite images, and few classification techniques using ground-based sensor data have been reported. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Above all candidates, we use terrestrial lidar because it can obtain a high-resolution point cloud even in a dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change its appearance under seasonal variation and aged deterioration. In this paper, we propose a new method for automatic individual tree species classification from terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that well describes the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.
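
    The depth-image creation step can be sketched as rasterising the point cloud into a grid, keeping the nearest return per cell; that 2D image is then what a CNN classifier consumes. The normalised coordinates and grid size below are illustrative, not the paper's parameters:

```python
def depth_image(points, grid_w, grid_h):
    """Rasterise a point cloud of (x, y, depth) tuples into a depth image
    by keeping the nearest (smallest-depth) point per cell. Coordinates
    are assumed normalised to [0, 1); empty cells stay None."""
    img = [[None] * grid_w for _ in range(grid_h)]
    for x, y, d in points:
        c, r = int(x * grid_w), int(y * grid_h)
        if img[r][c] is None or d < img[r][c]:
            img[r][c] = d  # keep the closest return in this cell
    return img

# Two returns land in the same cell; the nearer one (1.5) wins.
cloud = [(0.1, 0.1, 2.0), (0.1, 0.1, 1.5), (0.9, 0.9, 3.0)]
img = depth_image(cloud, 2, 2)
```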

  18. Analysis of cloud top height and cloud coverage from satellites using the O2 A and B bands

    NASA Technical Reports Server (NTRS)

    Kuze, Akihiko; Chance, Kelly V.

    1994-01-01

    Cloud height and cloud coverage detection are important for total ozone retrieval using ultraviolet and visible scattered light. Use of the O2 A and B bands, around 761 and 687 nm, by a satellite-borne instrument of moderately high spectral resolution viewing in the nadir makes it possible to detect cloud top height and related parameters, including fractional coverage. The measured values of a satellite-borne spectrometer are convolutions of the instrument slit function and the atmospheric transmittance between cloud top and satellite. Studies here determine the optical depth between a satellite orbit and the Earth's surface or cloud top to high accuracy using FASCODE 3. Cloud top height and a cloud coverage parameter are determined by least-squares fitting to calculated radiance ratios in the oxygen bands. A grid search method is used to search the parameter space of cloud top height and the coverage parameter to minimize an appropriate sum of squares of deviations. For this search, nonlinearity of the atmospheric transmittance (i.e., leverage based on varying amounts of saturation in the absorption spectrum) is important for distinguishing between cloud top height and fractional coverage. Using the above-mentioned method, an operational cloud detection algorithm which uses minimal computation time can be implemented.
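
    The grid search described above can be sketched with a toy transmittance model. Note why the nonlinearity matters: with a transmittance linear in height, the modelled ratio depends only on the product of coverage and height, so the two cannot be separated, which is the saturation leverage the abstract mentions. The transmittance function and grids below are invented stand-ins for the FASCODE 3 results:

```python
import math

def fit_cloud(measured_ratio, transmittance, heights, coverages):
    """Grid search over (cloud top height, fractional coverage) minimising
    the squared deviation between a measured in-band radiance ratio and a
    two-component model: R = f*T(h) + (1 - f)*T(0)."""
    best, best_err = None, float("inf")
    for h in heights:
        for f in coverages:
            model = f * transmittance(h) + (1.0 - f) * transmittance(0.0)
            err = (model - measured_ratio) ** 2
            if err < best_err:
                best, best_err = (h, f), err
    return best

# Toy saturating O2-band transmittance above a cloud top at h km:
# less absorbing air above higher tops, nonlinear in h.
T = lambda h: math.exp(-0.3 * (12 - h))

truth = 0.6 * T(8.0) + 0.4 * T(0.0)  # 60% coverage, 8 km cloud tops
h_fit, f_fit = fit_cloud(truth, T,
                         heights=range(0, 13),
                         coverages=[i / 10 for i in range(11)])
```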

  19. The Cloud Detection and UV Monitoring Experiment (CLUE)

    NASA Technical Reports Server (NTRS)

    Barbier, L.; Loh, E.; Sokolsky, P.; Streitmatter, R.

    2004-01-01

    We propose a large-area, low-power instrument to perform CLoud detection and Ultraviolet monitoring, CLUE. CLUE will combine the UV detection capabilities of the NIGHTGLOW payload with an array of infrared sensors to perform cloud slicing measurements. Missions such as EUSO and OWL, which seek to measure UHE cosmic rays at 10^20 eV, use the atmosphere as a fluorescence detector. CLUE will provide several important correlated measurements for these missions, including: monitoring the atmospheric UV emissions from 330-400 nm, determining the ambient cloud cover during those UV measurements (with active LIDAR), measuring the optical depth of the clouds (with an array of narrow band-pass IR sensors), and correlating LIDAR and IR cloud cover measurements. This talk will describe the instrument as we envision it.

  20. Detection of hydrogen sulfide above the clouds in Uranus's atmosphere

    NASA Astrophysics Data System (ADS)

    Irwin, Patrick G. J.; Toledo, Daniel; Garland, Ryan; Teanby, Nicholas A.; Fletcher, Leigh N.; Orton, Glenn A.; Bézard, Bruno

    2018-04-01

    Visible-to-near-infrared observations indicate that the cloud top of the main cloud deck on Uranus lies at a pressure level of between 1.2 bar and 3 bar. However, its composition has never been unambiguously identified, although it is widely assumed to be composed primarily of either ammonia or hydrogen sulfide (H2S) ice. Here, we present evidence of a clear detection of gaseous H2S above this cloud deck in the wavelength region 1.57-1.59 μm, with a mole fraction of 0.4-0.8 ppm at the cloud top. Its detection constrains the deep bulk sulfur/nitrogen abundance to exceed unity (>4.4-5.0 times the solar value) in Uranus's bulk atmosphere, and places a lower limit on the mole fraction of H2S below the observed cloud of (1.0-2.5) × 10^-5. The detection of gaseous H2S at these pressure levels adds to the weight of evidence that the principal constituent of the 1.2-3 bar cloud is likely to be H2S ice.

  2. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data

    PubMed Central

    Qin, Xinyan; Wu, Gongping; Fan, Fei

    2018-01-01

    Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from “layer” to “block” according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection. PMID:29690560
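
    The elevation-threshold step can be sketched as choosing a cut from the histogram of z values: ground returns dominate the lowest bins, so everything above the first near-empty bin over the densest low bin is kept. This is one simplified reading of the "optimal elevation threshold" idea, with an invented bin size and emptiness criterion:

```python
def remove_ground(points, bin_size=0.5):
    """Drop ground returns from a point cloud of (x, y, z) tuples by
    cutting at the first near-empty elevation bin above the densest
    (ground) bin. A simplified sketch, not the paper's algorithm."""
    zs = [p[2] for p in points]
    z0 = min(zs)
    counts = {}
    for z in zs:
        b = int((z - z0) / bin_size)
        counts[b] = counts.get(b, 0) + 1
    ground_bin = max(counts, key=counts.get)   # densest bin = ground
    cut_bin = ground_bin + 1
    while counts.get(cut_bin, 0) > 0.1 * counts[ground_bin]:
        cut_bin += 1                           # climb past low clutter
    z_cut = z0 + cut_bin * bin_size
    return [p for p in points if p[2] >= z_cut]

# Dense low returns near z = 0 (ground) plus three wire points at ~12 m.
ground = [(i * 0.1, 0.0, 0.02 * (i % 5)) for i in range(20)]
line = [(1.0, 0.0, 12.0), (2.0, 0.0, 12.1), (3.0, 0.0, 12.2)]
kept = remove_ground(ground + line)
```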

  3. Joint Simultaneous Reconstruction of Regularized Building Superstructures from Low-Density LIDAR Data Using Icp

    NASA Astrophysics Data System (ADS)

    Wichmann, Andreas; Kada, Martin

    2016-06-01

    There are many applications for 3D city models, e.g., in visualizations, analysis, and simulations, each one requiring a certain level of detail to be effective. The overall trend goes towards including various kinds of anthropogenic and natural objects therein with ever increasing geometric and semantic detail. A few years back, the featured 3D building models had only coarse roof geometry, but nowadays they are expected to include detailed roof superstructures like dormers and chimneys. Several methods have been proposed for the automatic reconstruction of 3D building models from airborne point clouds. However, they are usually unable to reliably recognize and reconstruct small roof superstructures, as these objects are often represented by only a few point measurements, especially in low-density point clouds. In this paper, we propose a recognition and reconstruction approach that overcomes this problem by identifying and simultaneously reconstructing regularized superstructures of similar shape. For this purpose, candidate areas for superstructures are detected by taking into account virtual sub-surface points that are assumed to lie on the main roof faces below the measured points. The areas with similar superstructures are detected, extracted, grouped together, and registered to one another with the Iterative Closest Point (ICP) algorithm. As an outcome, the joint point density of each detected group is increased, which helps to recognize the shape of the superstructure more reliably and in more detail. Finally, all instances of each group of superstructures are modeled at once and transformed back to their original positions. Because superstructures are reconstructed in groups, symmetries, alignments, and regularities can be enforced in a straightforward way. The validity of the approach is presented on a number of example buildings from the Vaihingen test data set.

  4. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.

    PubMed

    Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui

    2018-04-22

    Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from "layer" to "block" according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection.

  5. AstroCloud: An Agile platform for data visualization and specific analyses in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Molina, F. Z.; Salgado, R.; Bergel, A.; Infante, A.

    2017-07-01

    Nowadays, astronomers commonly run their own tools, or distributed computational packages, for data analysis, and then visualize the results with generic applications. This chain of processes comes at a high cost: (a) analyses are applied manually and are therefore difficult to automate, and (b) data have to be serialized, increasing the cost of parsing and saving intermediary data. We are developing AstroCloud, an agile multipurpose visualization platform intended for specific analyses of astronomical images (https://astrocloudy.wordpress.com). The platform incorporates domain-specific languages, which make it easily extensible. AstroCloud supports customized plug-ins, which translate into reduced time spent on data analysis. Moreover, it also supports 2D and 3D rendering, including interactive features in real time. AstroCloud is under development; we are currently implementing different options for data reduction and physical analyses.

  6. What will the future of cloud-based astronomical data processing look like?

    NASA Astrophysics Data System (ADS)

    Green, Andrew W.; Mannering, Elizabeth; Harischandra, Lloyd; Vuong, Minh; O'Toole, Simon; Sealey, Katrina; Hopkins, Andrew M.

    2017-06-01

    Astronomy is rapidly approaching an impasse: very large datasets require remote or cloud-based parallel processing, yet many astronomers still try to download the data and develop serial code locally. Astronomers understand the need for change, but the hurdles remain high. We are developing a data archive designed from the ground up to simplify and encourage cloud-based parallel processing. While the volume of data we host remains modest by some standards, it is still large enough that download and processing times are measured in days and even weeks. We plan to implement a Python-based, notebook-like interface that automatically parallelises execution. Our goal is to provide an interface sufficiently familiar and user-friendly that it encourages astronomers to run their analysis on our system in the cloud: astroinformatics as a service. We describe how our system addresses the approaching impasse in astronomy, using the SAMI Galaxy Survey as an example.

  7. Genotyping in the cloud with Crossbow.

    PubMed

    Gurtowski, James; Schatz, Michael C; Langmead, Ben

    2012-09-01

    Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.
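
    The MapReduce division of labour that Crossbow relies on can be illustrated with a toy pipeline: a map step emits (position, base) pairs from aligned reads, and a reduce step piles them up per position and calls a variant. This is a stand-in for the roles Bowtie and SOAPsnp play, not their actual logic; the reference, reads, and threshold are invented:

```python
from collections import defaultdict

def map_reads(aligned_reads):
    """Map step: emit (genome position, base) pairs from aligned reads,
    mirroring how a MapReduce framework shuffles data to reducers."""
    for start, seq in aligned_reads:
        for offset, base in enumerate(seq):
            yield start + offset, base

def reduce_pileup(pairs, reference, min_frac=0.8):
    """Reduce step: group bases by position and call a SNP where the
    dominant base differs from the reference and exceeds min_frac.
    A toy stand-in for the variant-calling role of SOAPsnp."""
    pileup = defaultdict(list)
    for pos, base in pairs:
        pileup[pos].append(base)
    snps = {}
    for pos, bases in sorted(pileup.items()):
        top = max(set(bases), key=bases.count)
        if top != reference[pos] and bases.count(top) / len(bases) >= min_frac:
            snps[pos] = top
    return snps

reference = "ACGTACGT"
# Three overlapping reads, all supporting 'A' at position 3 (ref 'T').
reads = [(0, "ACGA"), (1, "CGAA"), (2, "GAAC")]
snps = reduce_pileup(map_reads(reads), reference)
```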

  8. Cooperative scattering and radiation pressure force in dense atomic clouds

    NASA Astrophysics Data System (ADS)

    Bachelard, R.; Piovella, N.; Courteille, Ph. W.

    2011-07-01

    Atomic clouds prepared in “timed Dicke” states, i.e. states where the phase of the oscillating atomic dipole moments linearly varies along one direction of space, are efficient sources of superradiant light emission [Scully et al., Phys. Rev. Lett. 96, 010501 (2006)]. Here, we show that, in contrast to previous assertions, timed Dicke states are not the states automatically generated by incident laser light. In reality, the atoms act back on the driving field because of the finite refraction of the cloud. This leads to nonuniform phase shifts which, at higher optical densities, dramatically alter the cooperative scattering properties, as we show by explicit calculation of macroscopic observables such as the radiation pressure force.

  9. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-04-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still form the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. While building upon the physical principles that served well in the original APOLLO, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is not performed as a binary yes/no decision based on these physical principles but is expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned between reliably identifying clear pixels and reliably identifying cloudy pixels, depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-realtime use and for application to large amounts of historical satellite data. Thus the radiative transfer solution is approximated by the same two-stream approach which had been used for the original APOLLO. This keeps the algorithm robust enough to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy it is based on. Furthermore, a couple of example results from NOAA-18 are presented.
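The tunable probabilistic thresholding idea can be sketched as follows. This is an illustrative NumPy sketch, not the APOLLO_NG algorithm itself: the test scores, weights, and cut-off values are invented, and the weighted-mean combination stands in for whatever combination APOLLO_NG actually uses.

```python
import numpy as np

def cloud_probability(scores, weights=None):
    """Combine per-pixel test scores in [0, 1] into one cloud probability
    per pixel via a weighted mean (a stand-in for the real combination)."""
    scores = np.asarray(scores, dtype=float)       # shape (n_tests, n_pixels)
    if weights is None:
        weights = np.ones(scores.shape[0])
    return np.average(scores, axis=0, weights=np.asarray(weights, dtype=float))

# Three hypothetical per-pixel tests (e.g. reflectance, brightness
# temperature, spatial coherence) over four pixels:
scores = [[0.9, 0.2, 0.6, 0.1],
          [0.8, 0.1, 0.5, 0.3],
          [1.0, 0.0, 0.7, 0.2]]
p_cloud = cloud_probability(scores)

# Tunable cut-off: a low threshold flags anything possibly cloudy, a high
# threshold flags only confident cloud -- the "clear vs. cloud confident"
# tuning described in the abstract.
cloud_conservative = p_cloud >= 0.3
clear_conservative = p_cloud >= 0.7
print(p_cloud, cloud_conservative, clear_conservative)
```

Because the output is a probability rather than a binary mask, the same retrieval can serve both cloud-clearing applications (low threshold) and cloud-property studies (high threshold).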

  10. Study and Application on Cloud Covered Rate for Agroclimatical Distribution Using In Guangxi Based on Modis Data

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Zhong, Shiquan; Sun, Han; Tan, Zongkun; Li, Zheng; Ding, Meihua

    Cloud is an important climatic resource, comparable to temperature, precipitation and solar radiation, and plays a significant role in agricultural production, the national economy, and agroclimatic division. This paper analyzes methods of cloud detection based on MODIS data in China and abroad. The results suggest that the Quanjun He method is suitable for detecting cloud over Guangxi, and a state chart of cloud cover in Guangxi is imaged using this method. We derive an approach for calculating the cloud covered rate using frequency spectrum analysis, from which the cloud covered rate distribution of Guangxi is obtained. Taking Rongxian County, Guangxi as an example, this article presents a preliminary application of the cloud covered rate to the distribution of Rong Shaddock pomelo. The analysis indicates that the cloud covered rate is closely related to the quality of Rong Shaddock pomelo.

  11. Evaluation and Applications of Cloud Climatologies from CALIOP

    NASA Technical Reports Server (NTRS)

    Winker, David; Getzewitch, Brian; Vaughan, Mark

    2008-01-01

    Clouds have a major impact on the Earth radiation budget and differences in the representation of clouds in global climate models are responsible for much of the spread in predicted climate sensitivity. Existing cloud climatologies, against which these models can be tested, have many limitations. The CALIOP lidar, carried on the CALIPSO satellite, has now acquired over two years of nearly continuous cloud and aerosol observations. This dataset provides an improved basis for the characterization of 3-D global cloudiness. Global average cloud cover measured by CALIOP is about 75%, significantly higher than for existing cloud climatologies due to the sensitivity of CALIOP to optically thin cloud. Day/night biases in cloud detection appear to be small. This presentation will discuss detection sensitivity and other issues associated with producing a cloud climatology, characteristics of cloud cover statistics derived from CALIOP data, and applications of those statistics.

  12. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite image processing to retrieve atmospheric and lithospheric parameters. Currently, some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), to incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF), for highly accurate cloud detection on remote sensing images. FLFRF first fuses visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice and background, or as thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and inserts each set of features into an RF model. Then, it retains the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods. 
The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K Nearest Neighbor overestimate cloud and snow/ice pixels, our RF based models achieve higher cloud and snow/ice kappa values on MODIS images and higher thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images, while the existing cloud detection algorithm Fmask cannot discriminate between them. Compared to state-of-the-art methods, our algorithms achieve higher average cloud and snow/ice kappa values for different spatial resolutions.
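The decision-level fusion step in DLFRF — separate classifiers voting, then majority-vote fusion — can be sketched in a few lines. This is illustrative, not the paper's code: the label encoding and the three per-band prediction vectors below are invented.

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_pixels) integer class labels.
    Returns, per pixel, the label that most classifiers agree on."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each pixel, then take the argmax.
    votes = np.stack([(predictions == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Hypothetical labels: 0 = background, 1 = cloud, 2 = snow/ice.
visible_pred = [1, 1, 0, 2]   # classifier trained on visible features
ir_pred      = [1, 0, 0, 2]   # classifier trained on IR features
thermal_pred = [1, 1, 2, 0]   # classifier trained on thermal features
fused = majority_vote([visible_pred, ir_pred, thermal_pred])
print(fused)  # one fused label per pixel
```

On ties, `argmax` here picks the lowest class index; a real system would need an explicit tie-breaking rule.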

  13. Cloud-Based Smart Health Monitoring System for Automatic Cardiovascular and Fall Risk Assessment in Hypertensive Patients.

    PubMed

    Melillo, P; Orrico, A; Scala, P; Crispino, F; Pecchia, L

    2015-10-01

    The aim of this paper is to describe the design and the preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy rate of 84 % and to identify fallers with an accuracy rate of 72 %. In an ongoing prospective trial, almost all the recruited patients accepted the system favorably, with a limited rate of non-adherence causing data losses (<20 %). The developed platform supported clinical decisions by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls.

  14. Photogrammetric Analysis of Rotor Clouds Observed during T-REX

    NASA Astrophysics Data System (ADS)

    Romatschke, U.; Grubišić, V.

    2017-12-01

    Stereo photogrammetric analysis is a rarely utilized but highly valuable tool for studying smaller, highly ephemeral clouds. In this study, we make use of data that was collected during the Terrain-induced Rotor Experiment (T-REX), which took place in Owens Valley, eastern California, in the spring of 2006. The data set consists of matched digital stereo photographs obtained at high temporal (on the order of seconds) and spatial resolution (limited by the pixel size of the cameras). Using computer vision techniques we have been able to develop algorithms for camera calibration, automatic feature matching, and ultimately reconstruction of 3D cloud scenes. Applying these techniques to images from different T-REX IOPs we capture the motion of clouds in several distinct mountain wave scenarios ranging from short lived lee wave clouds on an otherwise clear sky day to rotor clouds formed in an extreme turbulence environment with strong winds and high cloud coverage. Tracking the clouds in 3D space and time allows us to quantify phenomena such as vertical and horizontal movement of clouds, turbulent motion at the upstream edge of rotor clouds, the structure of the lifting condensation level, extreme wind shear, and the life cycle of clouds in lee waves. When placed into context with the existing literature that originated from the T-REX field campaign, our results complement and expand our understanding of the complex dynamics observed in a variety of different lee wave settings.

  15. Detecting Distributed SQL Injection Attacks in a Eucalyptus Cloud Environment

    NASA Technical Reports Server (NTRS)

    Kebert, Alan; Barnejee, Bikramjit; Solano, Juan; Solano, Wanda

    2013-01-01

    The cloud computing environment offers malicious users the ability to spawn multiple instances of cloud nodes that are similar to virtual machines, except that they can have separate external IP addresses. In this paper we demonstrate how this ability can be exploited by an attacker to distribute his/her attack, in particular SQL injection attacks, in such a way that an intrusion detection system (IDS) could fail to identify this attack. To demonstrate this, we set up a small private cloud, established a vulnerable website in one instance, and placed an IDS within the cloud to monitor the network traffic. We found that an attacker could quite easily defeat the IDS by periodically altering its IP address. To detect such an attacker, we propose to use multi-agent plan recognition, where the multiple source IPs are considered as different agents who are mounting a collaborative attack. We show that such a formulation of this problem yields a more sophisticated approach to detecting SQL injection attacks within a cloud computing environment.
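The core observation — per-IP counts of suspicious requests can each stay below an IDS threshold while the aggregate across IPs reveals a coordinated attack — can be sketched in a toy example. The token list, limits, and request format below are invented for illustration; the paper's actual approach formulates this as multi-agent plan recognition rather than simple counting.

```python
from collections import defaultdict

# Naive SQL-metacharacter signatures (illustrative only).
SUSPICIOUS = ("' OR ", "UNION SELECT", "--")

def flag_aggregate(requests, per_ip_limit=3):
    """requests: iterable of (source_ip, query_string).
    Returns (per_ip_alarm, aggregate_alarm): whether any single IP exceeds
    the limit, and whether the pooled count across all IPs does."""
    per_ip = defaultdict(int)
    for ip, query in requests:
        if any(tok in query.upper() for tok in SUSPICIOUS):
            per_ip[ip] += 1
    total = sum(per_ip.values())
    per_ip_alarm = any(count > per_ip_limit for count in per_ip.values())
    return per_ip_alarm, total > per_ip_limit

# Five cloud instances, each sending one injection attempt: no single IP
# trips the per-IP threshold, but the aggregate does.
requests = [("10.0.0.%d" % i, "id=1' OR '1'='1") for i in range(5)]
print(flag_aggregate(requests))
```

A per-IP IDS sees only the first component of the result and stays silent; correlating across sources is what exposes the distributed attacker.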

  16. Comparison of the MODIS Collection 5 Multilayer Cloud Detection Product with CALIPSO

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Gala; King, Michael D.; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.

    2010-01-01

    CALIPSO, launched in June 2006, provides global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort of the MODIS Atmosphere Team, referred to as Collection 5, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information derived from a combination of solar and thermal emission bands to discriminate layers of different phases, as well as true layer separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at 1 km resolution. Using pixel-level collocations of MODIS Aqua and CALIOP, we investigate the global performance of the multilayer cloud detection algorithm (and thermodynamic phase).

  17. Improving the Accuracy of Cloud Detection Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90% but performance varies seasonally, regionally and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms to determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, over thousands of man-hours, to determine the success and failure of the algorithms and to fine tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is an ideal methodology to employ for cloud detection as it runs efficiently on large datasets, is robust to outliers and noise, and is able to deal with highly correlated predictors, such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii and includes the use of visible and infrared imagery, topography and multi-spectral image products as predictors. The development of the cloud detection technique is realized in three steps. First, tuning of the RF models is completed to identify the optimal values of the number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of that used during training and compared to truth, the CMG cloud mask. 
Initial results show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results is testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.
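The three-step workflow above (tune, train, test on an independent period) can be sketched as follows. This is a hedged illustration, not the study's code: a toy ensemble of random decision stumps stands in for a full Random Forest, and the synthetic "pixels", feature counts, and tuning grid are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stumps(X, y, n_trees, n_feats):
    """Train a toy ensemble: each 'tree' is one random-threshold stump
    built on a random subset of n_feats predictors."""
    stumps = []
    for _ in range(n_trees):
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        f = rng.choice(feats)
        thr = rng.uniform(X[:, f].min(), X[:, f].max())
        left, right = y[X[:, f] <= thr], y[X[:, f] > thr]
        lbl_left = np.bincount(left, minlength=2).argmax() if left.size else 0
        lbl_right = np.bincount(right, minlength=2).argmax() if right.size else 1
        stumps.append((f, thr, lbl_left, lbl_right))
    return stumps

def predict(stumps, X):
    votes = np.zeros((len(stumps), X.shape[0]), dtype=int)
    for i, (f, thr, ll, lr) in enumerate(stumps):
        votes[i] = np.where(X[:, f] <= thr, ll, lr)
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote

# Synthetic "pixels": two informative features separate cloud (1) from clear (0).
X = rng.normal(size=(600, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

# Step 1: tune number of trees / predictors on the training split.
best, best_acc = None, -1.0
for n_trees in (25, 100):
    for n_feats in (2, 4):
        model = fit_stumps(X_train, y_train, n_trees, n_feats)
        acc = (predict(model, X_train) == y_train).mean()
        if acc > best_acc:
            best, best_acc = (n_trees, n_feats), acc

# Steps 2-3: train with the best setting, evaluate on held-out data.
model = fit_stumps(X_train, y_train, *best)
test_acc = (predict(model, X_test) == y_test).mean()
print(best, round(test_acc, 2))
```

The study itself used a proper RF implementation; the point of the sketch is only the structure of the tune/train/test loop.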

  18. Cloud2IR: Infrared thermography and environmental sensors integrated in an autonomous system for long term monitoring of structures

    NASA Astrophysics Data System (ADS)

    Crinière, Antoine; Dumoulin, Jean; Mevel, Laurent; Andrade-Barroso, Guillermo

    2016-04-01

    Since late 2014, the Cloud2SM project has aimed to develop a robust information system for the long term monitoring of civil engineering structures, interfacing various sensors and data. Cloud2SM addresses three main goals: the management of distributed data and sensor networks, the asynchronous processing of data over the network, and the local management of the sensors themselves [1]. Integrated into this project, Cloud2IR is an autonomous sensor system dedicated to the long term monitoring of infrastructures. Past experiments have shown the need for, and usefulness of, such a system [2]. Before Cloud2IR, an initially laboratory-oriented system was used, which required a heavyweight operating system [3]. Building on that system, Cloud2IR benefited from the experimental knowledge acquired to define a lighter architecture based on generic standards, better suited to autonomous operation in the field, which can later be included in a wider distributed architecture such as Cloud2SM. The sensor system can be divided into two parts. The sensor side is mainly composed of the various sensor drivers themselves, such as the infrared camera, the weather station or the pyranometers, and their fixed configurations. Because infrared cameras differ slightly from other kinds of sensors, the system additionally implements an RTSP server which can be used to set up the field of view as well as other measurement parameters. The second part is the data side, which is common to all sensors. It instantiates all the sensors through a generic interface and controls the data access loop (not the requesting). This side of the system is weakly coupled (data coupling) with the sensor side. It can be seen as a general framework able to aggregate sensor data of any type or size and automatically encapsulate it in generic data formats such as HDF5, or in cloud data formats such as the OGC SWE standard. 
This part is also responsible for the acquisition scenario, the local storage management, and the network management through SFTP, or SOAP for the OGC frames. The data side only needs an XML configuration file, and if a configuration change occurs the system is automatically restarted with the new values. Cloud2IR has been deployed in the field for several months at the Sense-City outdoor test bed in Marne-la-Vallée, France [4]. The next step will be the full standardisation of the system and possibly the full separation of the sensor side from the data side, which can eventually be seen as an external framework. References: [1] A. Crinière, J. Dumoulin, L. Mevel, G. Andrade-Barroso, M. Simonin. The Cloud2SM project. European Geosciences Union General Assembly (EGU 2015), Apr. 2015, Vienna, Austria. [2] J. Dumoulin, A. Crinière, R. Averty. The detection and thermal characterization of the inner structure of the Musmeci bridge deck by infrared thermography monitoring. Journal of Geophysics and Engineering, doi:10.1088/1742-2132/10/6/064003, Vol. 10, 2013. [3] J. Dumoulin, R. Averty. Development of an infrared system coupled with a weather station for real time atmospheric corrections using GPU computing: application to bridge monitoring. In Proc. of the 11th International Conference on Quantitative InfraRed Thermography, Naples, Italy, 2012. [4] F. Derkx, B. Lebental, T. Bourouina, Frédéric B., C. Cojocaru, et al. The Sense-City project. XVIIIth Symposium on Vibrations, Shocks and Noise, Jul. 2012, France.

  19. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  20. Semantic Labelling of Road Furniture in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2017-09-01

    Road furniture semantic labelling is vital for large scale mapping and autonomous driving systems. Much research has investigated road furniture interpretation in both 2D images and 3D point clouds, but precise interpretation of road furniture in mobile laser scanning data still remains unexplored. In this paper, a novel method is proposed to interpret road furniture based on their logical relations and functionalities. Our work represents the most detailed interpretation of road furniture in mobile laser scanning data. 93.3 % of poles are correctly extracted and all of them are correctly recognised. 94.3 % of street light heads are detected and 76.9 % of them are correctly identified. Despite errors arising from the recognition of other components, our framework provides a promising solution to automatically map road furniture at a detailed level in urban environments.

  1. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-10-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still build the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme. Nevertheless, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles. It is rather expressed as cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from being sure to reliably identify clear pixels to conditions of reliably identifying definitely cloudy pixels, depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-realtime use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach which also had been used for the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm) giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy it is based on. 
Furthermore a couple of example results from NOAA-18 are presented.

  2. Detection of supercooled liquid water-topped mixed-phase clouds from shortwave-infrared satellite observations

    NASA Astrophysics Data System (ADS)

    NOH, Y. J.; Miller, S. D.; Heidinger, A. K.

    2015-12-01

    Many studies have demonstrated the utility of multispectral information from satellite passive radiometers, which conventionally utilize shortwave- and thermal-infrared bands, for detecting clouds and retrieving their properties globally. However, the satellite-derived cloud information comes mainly from cloud top or represents a vertically integrated property. This can produce a large bias in determining cloud phase characteristics, in particular for mixed-phase clouds, which are often observed to have supercooled liquid water at cloud top but a predominantly ice phase residing below. Current satellite retrieval algorithms may report these clouds simply as supercooled liquid without any further information regarding the presence of a sub-cloud-top ice phase. More accurate characterization of these clouds is very important for climate models and aviation applications. In this study, we present a physical basis and preliminary results for the development of a supercooled liquid-topped mixed-phase cloud detection algorithm using satellite radiometer observations. The detection algorithm is based on differential absorption properties between liquid and ice particles in the shortwave-infrared bands. Solar reflectance data in narrow bands at 1.6 μm and 2.25 μm are used to optically probe below cloud top to distinguish supercooled liquid-topped clouds with and without an underlying mixed-phase component. Varying solar/sensor geometry and cloud optical properties are also considered. The spectral band combination utilized by the algorithm is currently available on the Suomi NPP Visible/Infrared Imaging Radiometer Suite (VIIRS), the Himawari-8 Advanced Himawari Imager (AHI), and the future GOES-R Advanced Baseline Imager (ABI). When tested on simulated cloud fields from the WRF model and synthetic ABI data, favorable results were obtained, with reasonable threat scores (0.6-0.8) and false alarm rates (0.1-0.2). 
An ARM/NSA case study applied to VIIRS data also indicated promising potential of the algorithm.
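The differential-absorption idea can be caricatured as a two-band reflectance ratio test. Everything numeric here is invented for illustration: the threshold value, the direction of the test, and the sample reflectances are not from the paper, which uses a full physical retrieval with varying geometry.

```python
# Toy two-band test: relative absorption at 2.25 um vs 1.6 um is used as a
# crude proxy for ice hidden below a supercooled liquid cloud top.
# Threshold and test direction are hypothetical, for illustration only.
def classify_cloud_top(r_160, r_225, mixed_ratio_threshold=0.55):
    """r_160, r_225: narrow-band solar reflectances at 1.6 and 2.25 um."""
    ratio = r_225 / r_160
    if ratio < mixed_ratio_threshold:
        return "liquid-topped mixed phase"
    return "supercooled liquid"

print(classify_cloud_top(0.55, 0.25))   # relatively dark at 2.25 um
print(classify_cloud_top(0.55, 0.40))   # comparable reflectances
```

A single fixed threshold like this is exactly what the real algorithm avoids, since the separating surface moves with solar/sensor geometry and cloud optical depth.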

  3. A risk-based approach to flammable gas detector spacing.

    PubMed

    Defriend, Stephen; Dejmek, Mark; Porter, Leisa; Deshotels, Bob; Natvig, Bernt

    2008-11-15

    Flammable gas detectors allow an operating company to address leaks before they become serious, by automatically alarming and by initiating isolation and safe venting. Without effective gas detection, there is very limited defense against a flammable gas leak developing into a fire or explosion that could cause loss of life or escalate to cascading failures of nearby vessels, piping, and equipment. While it is commonly recognized that some gas detectors are needed in a process plant containing flammable gas or volatile liquids, there is usually a question of how many are needed. The areas that need protection can be determined by dispersion modeling from potential leak sites. Within the areas that must be protected, the spacing of detectors (or, alternatively, the number of detectors) should be based on risk. Detector design can be characterized by spacing criteria, which are convenient for design, or alternatively by the number of detectors, which is convenient for cost reporting. The factors that influence the risk are site-specific, including process conditions, chemical composition, number of potential leak sites, piping design standards, arrangement of plant equipment and structures, design of isolation and depressurization systems, and frequency of detector testing. Site-specific factors such as these affect the size of flammable gas cloud that must be detected (within a specified probability) by the gas detection system. A probability of detection must be specified that gives a design with a tolerable risk of fires and explosions. To determine the optimum spacing of detectors, it is important to consider the probability that a detector will fail at some time and be inoperative until replaced or repaired. A cost-effective approach is based on the combined risk from a representative selection of leakage scenarios, rather than a worst-case evaluation. This means that the probability and severity of leak consequences must be evaluated together. 
In marine and offshore facilities, it is conventional to use computational fluid dynamics (CFD) modeling to determine the size of a flammable cloud that would result from a specific leak scenario. Simpler modeling methods can be used, but the results are not very accurate in the region near the release, especially where flow obstructions are present. The results from CFD analyses on several leak scenarios can be plotted to determine the size of a flammable cloud that could result in an explosion that would generate overpressure exceeding the strength of the mechanical design of the plant. A cloud of this size has the potential to produce a blast pressure or flying debris capable of causing a fatality or subsequent damage to vessels or piping containing hazardous material. In cases where the leak results in a fire, rather than explosion, CFD or other modeling methods can estimate the size of a leak that would cause a fire resulting in subsequent damage to the facility, or would prevent the safe escape of personnel. The gas detector system must be capable of detecting a gas release or vapor cloud, and initiating action to prevent the leak from reaching a size that could cause injury or severe damage upon ignition.
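The interplay between detection probability and detector availability mentioned above can be made concrete with a worked example. The numbers are illustrative, not from the article, and the model assumes detectors fail and detect independently, which a real risk analysis would need to justify.

```python
# Probability that at least one of n detectors sees the leak cloud,
# de-rating each detector by its availability (fraction of time it is
# operative between tests/repairs). Independence is assumed.
def p_system_detect(p_single, availability, n):
    """p_single: chance one *working* detector covers the cloud;
    availability: probability the detector is operative when needed."""
    p_effective = p_single * availability
    return 1 - (1 - p_effective) ** n

# With 90% per-detector coverage and 95% availability, adding detectors
# closes the gap to the target detection probability:
for n in (1, 2, 3):
    print(n, round(p_system_detect(0.90, 0.95, n), 4))
```

This is why the article ties spacing to a *specified* probability of detection: once a target (say 99%) is set, the required number of detectors over the protected area follows from calculations of this kind.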

  4. Improvements in Near-Terminator and Nocturnal Cloud Masks using Satellite Imager Data over the Atmospheric Radiation Measurement Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trepte, Q.Z.; Minnis, P.; Heck, P.W.

    2005-03-18

    Cloud detection using satellite measurements presents a big challenge near the terminator, where the visible (VIS; 0.65 µm) channel becomes less reliable and the reflected solar component of the solar infrared 3.9-µm channel reaches very low signal-to-noise ratio levels. As a result, clouds are underestimated near the terminator and at night over land and ocean in previous Atmospheric Radiation Measurement (ARM) Program cloud retrievals using Geostationary Operational Environmental Satellite (GOES) imager data. Cloud detection near the terminator has always been a challenge. For example, comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer [AVHRR]) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60°N indicate significant amounts of missing clouds from AVHRR because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products and GLAS in the same regions also show the same difficulty in the MODIS cloud retrieval (Pavolonis and Heidinger 2005). Consistent detection of clouds at all times of day is needed to provide reliable cloud and radiation products for ARM and other research efforts involving the modeling of clouds and their interaction with the radiation budget. To minimize inconsistencies between daytime and nighttime retrievals, this paper develops an improved twilight and nighttime cloud mask using GOES-9, 10, and 12 imager data over the ARM sites and the continental United States (CONUS).

  5. Improvements in Near-Terminator and Nocturnal Cloud Masks using Satellite Image Data over the Atmospheric Radiation Measurement Sites

    NASA Technical Reports Server (NTRS)

    Trepte, Q. Z.; Minnis, P.; Heck, R. W.; Palikonda, R.

    2005-01-01

    Cloud detection using satellite measurements presents a big challenge near the terminator, where the visible (VIS; 0.65 µm) channel becomes less reliable and the reflected solar component of the solar infrared 3.9-µm channel reaches very low signal-to-noise ratio levels. As a result, clouds are underestimated near the terminator and at night over land and ocean in previous Atmospheric Radiation Measurement (ARM) Program cloud retrievals using Geostationary Operational Environmental Satellite (GOES) imager data. Cloud detection near the terminator has always been a challenge. For example, comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer (AVHRR)) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60 degrees N indicate significant amounts of missing clouds from AVHRR because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products and GLAS in the same regions also show the same difficulty in the MODIS cloud retrieval (Pavolonis and Heidinger 2005). Consistent detection of clouds at all times of day is needed to provide reliable cloud and radiation products for ARM and other research efforts involving the modeling of clouds and their interaction with the radiation budget. To minimize inconsistencies between daytime and nighttime retrievals, this paper develops an improved twilight and nighttime cloud mask using GOES-9, 10, and 12 imager data over the ARM sites and the continental United States (CONUS).

  6. A new cloud and aerosol layer detection method based on micropulse lidar measurements

    NASA Astrophysics Data System (ADS)

    Zhao, Chuanfeng; Wang, Yuzhao; Wang, Qianqian; Li, Zhanqing; Wang, Zhien; Liu, Dong

    2014-06-01

    This paper introduces a new algorithm to detect aerosols and clouds based on micropulse lidar measurements. A semidiscretization processing technique is first used to inhibit the impact of noise that increases with distance. A value distribution equalization method, which reduces the magnitude of signal variations with distance, is then introduced. Combined with empirical threshold values, these steps determine whether the signal features indicate clouds or aerosols. This method can separate clouds and aerosols with high accuracy, although the differentiation between aerosols and clouds is subject to more uncertainty depending on the thresholds selected. Compared with the existing Atmospheric Radiation Measurement program lidar-based cloud product, the new method appears more reliable and detects more clouds with high bases. The algorithm is applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu sites. At the SGP site, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bimodal vertical distributions with maximum occurrences at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. The dominant clouds are stratiform in winter and convective in summer. By contrast, the cloud frequency at the Taihu site shows no clear seasonal variation, and the maximum occurrence is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at the SGP site. A seasonal analysis of cloud base occurrence frequency suggests that stratiform clouds dominate at the Taihu site.
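The abstract's processing chain (range-dependent noise handling, then empirical thresholding of the corrected signal) can be sketched in miniature; the function names and threshold below are illustrative assumptions, not the paper's actual parameters:

```python
def range_correct(signal, ranges):
    """Multiply raw lidar returns by range^2 to offset geometric signal falloff."""
    return [s * r * r for s, r in zip(signal, ranges)]

def detect_layers(corrected, threshold):
    """Return (start, end) index pairs of contiguous range bins whose
    corrected signal exceeds an empirical threshold (candidate layers)."""
    layers, start = [], None
    for i, value in enumerate(corrected):
        if value >= threshold and start is None:
            start = i
        elif value < threshold and start is not None:
            layers.append((start, i - 1))
            start = None
    if start is not None:
        layers.append((start, len(corrected) - 1))
    return layers
```

A real implementation would apply the paper's semidiscretization and value distribution equalization steps before thresholding; here the simple range-squared correction stands in for that preprocessing.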

  7. Differences between automatically detected and steady-state fractional flow reserve.

    PubMed

    Härle, Tobias; Meyer, Sven; Vahldiek, Felix; Elsässer, Albrecht

    2016-02-01

    Measurement of fractional flow reserve (FFR) has become a standard diagnostic tool in the catheterization laboratory. FFR evaluation studies were based on pressure recordings during steady-state maximum hyperemia. Commercially available computer systems detect the lowest Pd/Pa ratio automatically, which might not always be measured during steady-state hyperemia. We sought to compare automatically detected FFR and true steady-state FFR. Pressure measurement traces of 105 coronary lesions from 77 patients with intermediate coronary lesions or multivessel disease were reviewed. In all patients, hyperemia had been achieved by intravenous adenosine administration at a dosage of 140 µg/kg/min. In 42 lesions (40%) automatically detected FFR was lower than true steady-state FFR. Mean bias was 0.009 (standard deviation 0.015, limits of agreement -0.02, 0.037). In 4 lesions (3.8%) the two methods led to different treatment recommendations; in all 4 cases the instantaneous wave-free ratio confirmed steady-state FFR. Automatically detected FFR was slightly lower than steady-state FFR in more than one-third of cases. Consequently, interpretation of automatically detected FFR values closely below the cutoff value requires special attention.
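The quantity the study compares, a single lowest Pd/Pa sample versus a value taken over stable hyperemia, can be illustrated with a toy computation; the sliding-window scheme below is a hypothetical stand-in for true steady-state identification, not the clinical procedure:

```python
def automatic_ffr(pd_pa_trace):
    """Automatically detected FFR: the single lowest Pd/Pa sample in the trace."""
    return min(pd_pa_trace)

def steady_state_ffr(pd_pa_trace, window):
    """Steady-state FFR proxy: the lowest mean Pd/Pa over any contiguous
    window of samples, approximating a sustained hyperemic plateau."""
    best = None
    for i in range(len(pd_pa_trace) - window + 1):
        mean = sum(pd_pa_trace[i:i + window]) / window
        if best is None or mean < best:
            best = mean
    return best
```

Because the automatic value is a pointwise minimum, it can never exceed the windowed value, matching the study's finding that automatically detected FFR is sometimes slightly lower.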

  8. A one year Landsat 8 conterminous United States study of spatial and temporal patterns of cirrus and non-cirrus clouds and implications for the long term Landsat archive.

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Roy, D. P.

    2014-12-01

    The successful February 2013 launch of the Landsat 8 satellite continues the 40+ year legacy of the Landsat mission. The payload includes the Operational Land Imager (OLI), which has a new 1.37 µm band designed to monitor cirrus clouds, and the Thermal Infrared Sensor (TIRS); together they provide 30 m low-, medium- and high-confidence cloud detections and 30 m low- and high-confidence cirrus cloud detections. A year of Landsat 8 data over the conterminous United States (CONUS), composed of 11,296 acquisitions, was analyzed by comparing the spatial and temporal incidence of these cloud and cirrus states. This revealed that (i) 36.5% of observations were detected as high-confidence cloud, with spatio-temporal patterns similar to those observed in previous Landsat 7 cloud analyses, (ii) 29.2% were high-confidence cirrus, (iii) 20.9% were both high-confidence cloud and high-confidence cirrus, and (iv) 8.3% were detected as high-confidence cirrus but not as high-confidence cloud. The results illustrate the value of the cirrus band for improved Landsat 8 terrestrial monitoring, but imply that the historical CONUS Landsat archive has a similar 8% of undetected cirrus-contaminated pixels. The implications for long-term Landsat time series records, including the global Web Enabled Landsat Data (WELD) product record, are discussed.
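The four percentages reported above are joint statistics over two per-pixel boolean flags; computing them is straightforward, as this sketch with made-up masks shows (it does not attempt the actual Landsat 8 QA-band bit decoding):

```python
def confidence_fractions(cloud_mask, cirrus_mask):
    """Fractions of pixels flagged as cloud, cirrus, both, and cirrus-only,
    given parallel boolean lists of high-confidence detections."""
    n = len(cloud_mask)
    cloud = sum(cloud_mask)
    cirrus = sum(cirrus_mask)
    both = sum(1 for c, s in zip(cloud_mask, cirrus_mask) if c and s)
    counts = {"cloud": cloud, "cirrus": cirrus,
              "both": both, "cirrus_only": cirrus - both}
    return {k: v / n for k, v in counts.items()}
```

The "cirrus_only" fraction corresponds to category (iv) above: cirrus contamination that a cloud mask without the cirrus band would miss.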

  9. H31G-1596: DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites

    NASA Technical Reports Server (NTRS)

    Kalia, Subodh; Ganguly, Sangram; Li, Shuang; Nemani, Ramakrishna R.

    2017-01-01

    Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. Given the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud shadow mask from geostationary satellite data is critical. The key to success for most existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of proper thresholds is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
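The CloudCNN architecture itself is not described in enough detail to reproduce, but the encoder-decoder bookkeeping behind pixel-wise binary segmentation (downsample to coarse features, decide, upsample back to a full-resolution mask) can be sketched with plain array operations. Everything here is an illustrative toy, not the paper's network:

```python
import numpy as np

def encode(img):
    """Encoder stage stand-in: 2x2 max pooling halves spatial resolution."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def decode(feat, out_shape):
    """Decoder stage stand-in: nearest-neighbour upsampling to input size."""
    return np.kron(feat, np.ones((2, 2)))[:out_shape[0], :out_shape[1]]

def segment(img, threshold):
    """Toy pixel-wise binary segmentation: encode, threshold, decode."""
    feat = encode(img)
    return (decode(feat, img.shape) >= threshold).astype(np.uint8)
```

A trained network replaces the fixed pooling and thresholding with learned convolutions, but the input/output shape contract, a full-resolution binary mask per scene, is the same.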

  10. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    Optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. The system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. It also has eight laser light sources with very dense and strong beams, which project a relatively appropriate pattern on texture-less objects. In this system, images are taken semi-automatically by a camera in step with the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and scanning them with a GOM laser scanner. The objects were then placed on the proposed turntable, and several convergent images were taken of each object while the laser light sources projected the pattern onto it. Afterward, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.

  11. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation of damaged pavement sections to keep a high level of service. The importance of such performance-based infrastructure asset management, built on actual inspection data, is globally recognized. Semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specifications and the inspection interval. Implementation of road maintenance work, especially for local governments, is therefore difficult from a cost-effectiveness standpoint. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using 3D point cloud data collected to build urban 3D models. The simplified evaluation results of the road surface provide useful information for road administrators to identify pavement sections needing a detailed examination or an immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting sections where a structural change in coordinate values was evident. Finally, the validity of the methodology was investigated in a case study using actual inspection data from local roads.

  12. MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data

    NASA Astrophysics Data System (ADS)

    Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno

    2017-04-01

    Spatially distributed snow cover information in mountain areas is extremely important for water storage estimation, seasonal water availability forecasting, and the assessment of snow-related hazards (e.g. enhanced snowmelt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational monitoring tool that offers a user-friendly application for catchment-based operational snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated for 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94% was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, executes automatically without user interaction, and delivers snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented at the national hydrometeorological services of four Central Asian states (Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan), where it is used for seasonal water availability forecasting.
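The step-wise cloud removal is described only at a high level; one common ingredient of such schemes, replacing today's cloud-obscured pixels with the most recent cloud-free observation, can be sketched as follows (the sentinel value and data layout are assumptions, not the tool's actual format):

```python
CLOUD = -1  # sentinel for cloud-obscured pixels (assumed encoding)

def fill_clouds(maps):
    """Temporal cloud-removal step: for the most recent snow map, replace each
    cloud-obscured pixel with the latest earlier cloud-free observation.
    `maps` is a chronologically ordered list of per-pixel value lists."""
    latest = list(maps[-1])
    for i, value in enumerate(latest):
        if value == CLOUD:
            for earlier in reversed(maps[:-1]):
                if earlier[i] != CLOUD:
                    latest[i] = earlier[i]
                    break
    return latest
```

Pixels cloudy on every available day simply remain flagged, which is why a multi-step approach (spatial filters, elevation rules, etc.) is needed to reach fully cloud-free maps.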

  13. Modeling the Diffuse Cloud-Top Optical Emissions from Ground and Cloud Flashes

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Koshak, William

    2008-01-01

    A number of studies have indicated that the diffuse cloud-top optical emissions from intra-cloud (IC) lightning are brighter than that from normal negative cloud-to-ground (CG) lightning, and hence would be easier to detect from a space-based sensor. The primary reason provided to substantiate this claim has been that the IC is at a higher altitude within the cloud and therefore is less obscured by the cloud multiple scattering medium. CGs at lower altitudes embedded deep within the cloud are more obscured, so CG detection is thought to be more difficult. However, other authors claim that because the CG source current (and hence luminosity) is typically substantially larger than IC currents, the greater CG source luminosity is large enough to overcome the effects of multiple scattering. These investigators suggest that the diffuse cloud top emissions from CGs are brighter than from ICs, and hence are easier to detect from space. Still other investigators claim that the detection efficiency of CGs and ICs is about the same because modern detector sensitivity is good enough to "see" either flash type no matter which produces a brighter cloud top emission. To better assess which of these opinions should be accepted, we introduce an extension of a Boltzmann lightning radiative transfer model previously developed. It considers characteristics of the cloud (geometry, dimensions, scattering properties) and specific lightning channel properties (length, geometry, location, current, optical wave front propagation speed/direction). As such, it represents the most detailed modeling effort to date. At least in the few cases studied thus far, it was found that IC flashes appear brighter at cloud top than the lower altitude negative ground flashes, but additional model runs are to be examined before finalizing our general conclusions.

  14. Cloud Statistics and Discrimination in the Polar Regions

    NASA Astrophysics Data System (ADS)

    Chan, M.; Comiso, J. C.

    2012-12-01

    Despite its important role in the climate system, cloud cover and its statistics are poorly known, especially in the polar regions, where clouds are difficult to discriminate from snow-covered surfaces. The advent of the A-train, which includes the Aqua/MODIS, CALIPSO/CALIOP and CloudSat/CPR sensors, has provided an opportunity to improve our ability to accurately characterize the cloud cover. MODIS provides global coverage at relatively good temporal and spatial resolution, while CALIOP and CPR provide limited nadir sampling but accurate characterization of the vertical structure and phase of the cloud cover. Over the polar regions, cloud detection from a passive sensor like MODIS is challenging because of the presence of cold and highly reflective surfaces such as snow, sea ice, glaciers, and ice sheets, which have surface signatures similar to those of clouds. On the other hand, active sensors such as CALIOP and CPR are not only very sensitive to the presence of clouds but can also provide information about their microphysical characteristics. However, these nadir-looking sensors have sparse spatial coverage, and their global data can have spatial gaps of up to 100 km. We developed a polar cloud detection system for MODIS that is trained using collocated data from CALIOP and CPR. In particular, we employ a machine learning system that reads the radiative profile observed by MODIS and determines whether the field of view is cloudy or clear. Results have shown that the improved cloud detection scheme performs better than typical cloud mask algorithms on a validation data set not used for training. A one-year data set was generated, and results indicate that daytime cloud detection accuracies improved from 80.1% to 92.6% (over sea ice) and from 71.2% to 87.4% (over ice sheets), with CALIOP data used as the baseline. Significant improvements are also observed during nighttime, where cloud detection accuracies increase by 19.8% (over sea ice) and 11.6% (over ice sheets). The immediate impact of the new algorithm is that it can minimize large biases of MODIS-derived cloud amount over the polar regions and thus provide more realistic, high-quality global cloud statistics. In particular, our results show that cloud fraction in the Arctic is typically 81.2% during daytime and 84.0% during nighttime, significantly higher than the 71.8% and 58.5%, respectively, derived from the standard MODIS cloud product.
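The abstract does not specify which machine learning system is used; the training idea (fit a classifier to MODIS radiance vectors using collocated CALIOP/CPR cloudy/clear labels) can be sketched with a minimal nearest-centroid classifier, where all feature values are invented:

```python
def train_centroids(features, labels):
    """Per-class mean feature vector from labelled training pixels
    (labels would come from collocated active-sensor observations)."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for j, v in enumerate(x):
            acc[j] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Label of the nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))
```

A production system would use a far richer model and many spectral channels, but the collocation-as-labels training strategy is the same.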

  15. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    PubMed

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.
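The polarimetric quantity central to this comparison, the degree of linear polarization, follows directly from the Stokes parameters; the cloud-screening threshold below is a made-up illustration, not a value from the paper:

```python
import math

def degree_of_linear_polarization(I, Q, U):
    """DoLP = sqrt(Q^2 + U^2) / I for a Stokes vector (I, Q, U)."""
    return math.sqrt(Q * Q + U * U) / I

def looks_cloudy(I, Q, U, threshold=0.1):
    """Clouds strongly depolarize skylight, so a low DoLP relative to the
    clear-sky Rayleigh pattern hints at cloud (threshold is hypothetical)."""
    return degree_of_linear_polarization(I, Q, U) < threshold
```

In practice the expected clear-sky DoLP varies with sun-view geometry, which is why the paper finds that combining polarimetric and nonoptical information works best.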

  16. Lightning studies using LDAR and LLP data

    NASA Technical Reports Server (NTRS)

    Forbes, Gregory S.

    1993-01-01

    This study intercompared lightning data from LDAR and LLP systems in order to learn more about the spatial relationships between thunderstorm electrical discharges aloft and lightning strikes to the surface. The ultimate goal of the study is to provide information that can be used to improve the process of real-time detection and warning of lightning by weather forecasters who issue lightning advisories. The Lightning Detection and Ranging (LDAR) System provides data on electrical discharges from thunderstorms that includes cloud-ground flashes as well as lightning aloft (within cloud, cloud-to-cloud, and sometimes emanating from cloud to clear air outside or above cloud). The Lightning Location and Protection (LLP) system detects primarily ground strikes from lightning. Thunderstorms typically produce LDAR signals aloft prior to the first ground strike, so that knowledge of preferred positions of ground strikes relative to the LDAR data pattern from a thunderstorm could allow advance estimates of enhanced ground strike threat. Studies described in the report examine the position of LLP-detected ground strikes relative to the LDAR data pattern from the thunderstorms. The report also describes other potential approaches to the use of LDAR data in the detection and forecasting of lightning ground strikes.

  17. Stratocumulus Precipitation and Entrainment Experiment (SPEE) Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albrecht, Bruce; Ghate, Virendra; Cadeddu, Maria

    2016-06-01

    The scientific focus of this project was to examine precipitation and entrainment processes in marine stratocumulus clouds. The entrainment studies focused on characterizing cloud turbulence at cloud top using Doppler cloud radar observations. The precipitation studies focused on characterizing the precipitation and the macroscopic properties (cloud thickness and liquid water path) of the clouds. This project will contribute to the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility's overall objective of providing the remote-sensing observations needed to improve the representation of key cloud processes in climate models. It is of direct relevance to the components of ARM dealing with entrainment and precipitation processes in stratiform clouds. Further, the radar observing techniques used in this study were developed using ARM Southern Great Plains (SGP) facility observations under Atmospheric System Research (ASR) support. The observing systems, operating autonomously from a site located just north of the Center for Interdisciplinary Remotely-Piloted Aircraft Studies (CIRPAS) aircraft hangar in Marina, California during the period of 1 May to 4 November 2015, included: 1. Microwave radiometer: ARM Microwave Radiometer, 3-Channel (MWR3C) with channels centered at 23.834, 30, and 89 GHz; supported by Dr. Maria Cadeddu. 2. Cloud radar: CIRPAS 95 GHz Frequency Modulated Continuous Wave (FMCW) Cloud Radar (Centroid Frequency Chirp Rate [CFCR]); operations overseen by Drs. Ghate and Albrecht. 3. Ceilometer: Vaisala CK-14; operations overseen by Drs. Ghate and Albrecht.

  18. A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-01-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
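The described discrimination compares the vertical radiance gradient at two wavelengths: cloud scatters nearly neutrally across the spectrum, so the visible/near-IR gradient ratio drops toward unity at cloud top, whereas aerosol and molecular scattering keep it large. A toy version of that scan, with invented heights, radiances, and threshold:

```python
def cloud_top_height(heights, rad_vis, rad_nir, gradient_ratio_threshold):
    """Scan downward through ascending-height profiles; flag the first level
    where the vertical radiance gradient is spectrally flat (cloud-like)
    rather than strongly wavelength-dependent (aerosol/clear-like).
    Returns the flagged height, or None if no level qualifies."""
    for k in range(len(heights) - 1, 0, -1):
        g_vis = rad_vis[k - 1] - rad_vis[k]  # gradient going downward
        g_nir = rad_nir[k - 1] - rad_nir[k]
        if g_nir > 0 and g_vis / g_nir < gradient_ratio_threshold:
            return heights[k - 1]
    return None
```

Using two wavelengths rather than one is what gives the reported improvement in separating clouds from aerosols.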

  19. Looking Down Through the Clouds – Optical Attenuation through Real-Time Clouds

    NASA Astrophysics Data System (ADS)

    Burley, J.; Lazarewicz, A.; Dean, D.; Heath, N.

    Detecting and identifying nuclear explosions in the atmosphere and on the surface of the Earth is critical for the Air Force Technical Applications Center (AFTAC) treaty monitoring mission. Optical signals, from surface or atmospheric nuclear explosions detected by satellite sensors, are attenuated by the atmosphere and clouds. Clouds present a particularly complex challenge as they cover up to seventy percent of the earth's surface. Moreover, their highly variable and diverse nature requires physics-based modeling. Determining the attenuation for each optical ray-path is uniquely dependent on the source geolocation, the specific optical transmission characteristics along that ray path, and sensor detection capabilities. This research details a collaborative AFTAC and AFIT effort to fuse worldwide weather data, from a variety of sources, to provide near-real-time profiles of atmospheric and cloud conditions and the resulting radiative transfer analysis for virtually any wavelength(s) of interest from source to satellite. AFIT has developed a means to model global clouds using the U.S. Air Force’s World Wide Merged Cloud Analysis (WWMCA) cloud data in a new toolset that enables radiance calculations through clouds from UV to RF wavelengths.

  20. Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer

    DTIC Science & Technology

    2016-06-01

    Technical report (June 2016): Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer (LSG). Authors: Leon Vaizer, Jesse Angle, Neil ... Only fragments of the report's front matter (title page and table of contents) are preserved in this record.

  1. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasingly important. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms as sets of fitted planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM data with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis of normally distributed residuals can be accepted, but for the real data the residual distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (the standard deviation and the maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affects the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
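The core operation, fitting a plane to a point set and screening residuals against a point-plane distance threshold, can be sketched as follows. This ordinary least-squares version is a simplification of the robust estimator the abstract refers to, and the threshold value is illustrative:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an (n, 3) point set;
    returns the coefficients (a, b, c) and the per-point vertical residuals."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return coeffs, residuals

def segment_inliers(points, coeffs, max_dist):
    """Segmentation step: keep only points within the distance threshold
    of the fitted plane (analogous to the max point-plane distance parameter)."""
    pts = np.asarray(points, dtype=float)
    pred = coeffs[0] * pts[:, 0] + coeffs[1] * pts[:, 1] + coeffs[2]
    return pts[np.abs(pts[:, 2] - pred) <= max_dist]
```

The residual array returned here is exactly the kind of quantity the study subjects to normality tests, and the distance threshold is one of the two input parameters the residuals were found to depend on.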

  2. Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning

    NASA Astrophysics Data System (ADS)

    Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun

    2016-04-01

    This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods based on traditional pattern recognition have been applied in a number of cultural heritage mapping projects in Norway over the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. Some false detections appeared in the terrain outside of the sediment deposits; these could be explained by other pit-like landscape features, such as parts of river courses, spaces between boulders, and modern terrain modifications. However, they were easy to spot during visual interpretation, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and grave mounds are less distinct in shape than pitfall traps. Still, applying automatic mound detection to an entire municipality did lead to the new discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350-200 years ago. The majority of charcoal kilns were indirectly detected as either pits on the circumference, a central mound, or both. However, kilns with a flat interior and a shallow ditch along the circumference were often missed by the automatic detection method. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) the extent to which these structures stand out from natural terrain features. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change, and it highlights another challenge: how to make a general automatic method applicable in all types of terrain within a country. This mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to a specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results using traditional pattern recognition.

  3. Detection of long duration cloud contamination in hyper-temporal NDVI imagery

    NASA Astrophysics Data System (ADS)

    Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.

    2012-04-01

    NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud cover can significantly reduce its precision in areas where persistent clouds prevail. Quantifying errors related to cloud contamination is therefore essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb '03-Dec '09) was used after necessary pre-processing with quality flags and an upper envelope filter (ASAVOGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. The 97-cluster image was selected as the best classification based on divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration clouds disrupt the normal temporal progression of NDVI and cause anomalies. Of the 97 clusters, 32 showed cloud contamination, which was most prominent in areas with high rainfall. This study can help prevent the propagation of errors caused by long-duration cloud contamination into regional land cover mapping and monitoring.
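Long-duration cloud shows up as anomalous dips in an otherwise smooth class-mean NDVI profile; a minimal detector for such dips might look like the sketch below (the neighbour-mean rule and the drop threshold are illustrative assumptions, not the authors' method):

```python
def flag_cloud_dips(ndvi_profile, max_drop):
    """Flag time steps whose NDVI falls more than `max_drop` below the mean
    of their two temporal neighbours -- the sharp dips that residual
    long-duration cloud contamination leaves in a class-mean profile."""
    flagged = []
    for t in range(1, len(ndvi_profile) - 1):
        neighbours = 0.5 * (ndvi_profile[t - 1] + ndvi_profile[t + 1])
        if neighbours - ndvi_profile[t] > max_drop:
            flagged.append(t)
    return flagged
```

Clusters whose profiles trigger such flags repeatedly would be the candidates for the cloud-contaminated class set identified in the study.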

  4. Powerful Hurricane Irma Seen in 3D by NASA's CloudSat

    NASA Image and Video Library

    2017-09-08

NASA's CloudSat satellite flew over Hurricane Irma on Sept. 6, 2017 at 1:45 p.m. EDT (17:45 UTC) as the storm was approaching Puerto Rico in the Atlantic Ocean. Hurricane Irma contained estimated maximum sustained winds of 185 miles per hour (160 knots) with a minimum pressure of 918 millibars. CloudSat transected the eastern edge of Hurricane Irma's eyewall, revealing details of the storm's cloud structure beneath its thick canopy of cirrus clouds. The CloudSat Cloud Profiling Radar excels in detecting the organization and placement of cloud layers beneath a storm's cirrus canopy, which are not readily detected by other satellite sensors. The CloudSat overpass reveals the inner details beneath the cloud tops of this large system: intense areas of convection with moderate to heavy rainfall (deep red and pink colors); cloud-free areas (moats) between the inner and outer cloud bands of Hurricane Irma; and cloud top heights averaging around 9 to 10 miles (15 to 16 kilometers). Lower values of reflectivity (areas of green and blue) denote smaller-sized ice and water particle sizes typically located at the top of a storm system (in the anvil area). The Cloud Profiling Radar loses signal at around 3 miles (5 kilometers) in height (in the melting layer) due to water (ice) particles larger than 0.12 inches (3 millimeters) in diameter. Moderate to heavy rainfall occurs in these areas where signal weakening is detectable. Smaller cumulus and cumulonimbus cloud types are evident as CloudSat moves farther south, beneath the thick cirrus canopy. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21947

  5. Shadow detection and removal in RGB VHR images for land use unsupervised classification

    NASA Astrophysics Data System (ADS)

    Movia, A.; Beinat, A.; Crosilla, F.

    2016-09-01

Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows currently limit the benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB-based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so-called "anisotropic Procrustes" and "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps have been compared to each other and with a supervised clustering procedure.
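The paper's own RGB index is not given in the abstract, but a common RGB-only shadow cue works along these lines: shadow pixels are dark overall yet relatively rich in blue, because they are lit mainly by scattered skylight rather than direct sunlight. A hedged sketch with purely illustrative thresholds (not the paper's method):

```python
def is_shadow(r, g, b, bright_thresh=80, blue_ratio_thresh=0.4):
    """RGB-only shadow test: dark pixel with a high relative blue
    component. Both thresholds are illustrative assumptions."""
    brightness = (r + g + b) / 3.0
    blue_ratio = b / float(r + g + b + 1e-9)
    return brightness < bright_thresh and blue_ratio > blue_ratio_thresh

def shadow_mask(image):
    """Per-pixel boolean mask over rows of (r, g, b) tuples."""
    return [[is_shadow(*px) for px in row] for row in image]
```

Note that a neutral dark pixel (e.g., wet asphalt) is not flagged, since its blue ratio stays near one third.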

  6. Users' instructions for the NASA/MSFC cloud-rise preprocessor program, version 6, and the NASA/MSFC multilayer diffusion program, version 6: Research version for Univac 1108 system

    NASA Technical Reports Server (NTRS)

    Bjorklund, J. R.

    1978-01-01

    The cloud-rise preprocessor and multilayer diffusion computer programs were used by NASA in predicting concentrations and dosages downwind from normal and abnormal launches of rocket vehicles. These programs incorporated: (1) the latest data for the heat content and chemistry of rocket exhaust clouds; (2) provision for the automated calculation of surface water pH due to deposition of HCl from precipitation scavenging; (3) provision for automated calculation of concentration and dosage parameters at any level within the vertical grounds for which meteorological inputs have been specified; and (4) provision for execution of multiple cases of meteorological data. Procedures used to automatically calculate wind direction shear in a layer were updated.

  7. Challenges in the Development of a Self-Calibrating Network of Ceilometers.

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Wagner, Frank; Mattis, Ina; Baars, Holger; Haefele, Alexander

    2015-04-01

There are more than 700 Automatic Lidars and Ceilometers (ALCs) currently operating in Europe. Modern ceilometers can do more than simply measure the cloud base height. They can also measure aerosol layers such as volcanic ash, Saharan dust or aerosols within the planetary boundary layer. In the frame of E-PROFILE, which is part of EUMETNET, a European network of automatic lidars and ceilometers will be set up exploiting this new capability. To be able to monitor the evolution of aerosol layers over a large spatial scale, the measurements need to be consistent from one site to another. Currently, most of the instruments do not provide calibrated measurements, only relative ones. Thus, it is necessary to calibrate the instruments to develop a consistent product across instruments from various networks and to combine them in a European network like E-PROFILE. As it is not possible to use an external reference (such as a sun photometer or a Raman lidar) to calibrate all the ALCs in the E-PROFILE network, a self-calibration algorithm is required. Two calibration methods have been identified that are suited for automated use in a network: the Rayleigh and the liquid-cloud calibration methods. In the Rayleigh method, backscatter signals from molecules (the Rayleigh signal) are measured and used to calculate the lidar constant (Wiegner et al. 2012). At the wavelength used by most ceilometers, this signal is weak and can be measured reliably only during cloud-free nights. However, with the new algorithm implemented in the frame of the TOPROF COST Action, the Rayleigh calibration was successfully performed on a CHM15k for more than 50% of the nights from October 2013 to September 2014. This method was validated against two reference instruments, the collocated EARLINET PollyXT lidar and the CALIPSO space-borne lidar. The lidar constant was on average within 5.5% of the lidar constant determined by the EARLINET lidar. 
This confirms the validity of the self-calibration method. For 3 CALIPSO overpasses the agreement was on average 20.0%; it is less accurate due to the large uncertainties of CALIPSO data close to the surface. In contrast to the Rayleigh method, the cloud calibration method uses the complete attenuation of the transmitted beam by a liquid water cloud to calculate the lidar constant (O'Connor 2004). The main challenge is the selection of accurately measured water clouds: these clouds should not contain any ice crystals, and the detector should not saturate. The first problem is especially important during winter, the second for low clouds. Furthermore, the overlap function should be known accurately, especially when the water cloud is located at a distance where the overlap between laser beam and telescope field of view is still incomplete. In the E-PROFILE pilot network, the Rayleigh calibration is already performed automatically. This demonstration network makes available, in real time, calibrated ALC measurements from 8 instruments of 4 different types in 6 countries. In collaboration with TOPROF and 20 national weather services, E-PROFILE will provide, in 2017, near-real-time ALC measurements over most of Europe.
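The Rayleigh method can be illustrated by inverting the elastic lidar equation for the lidar constant over a cloud-free night profile. This is a minimal sketch with a crude rectangle-rule transmission integral, not the TOPROF implementation; units and profile values are illustrative:

```python
import math

def rayleigh_calibration(ranges, signal, beta_mol, alpha_mol):
    """Estimate the lidar constant C from a cloud-free profile by
    inverting the elastic lidar equation
        P(z) = C * beta_mol(z) * T^2(z) / z^2,
    with T(z) = exp(-integral of alpha_mol from 0 to z).
    `ranges` are heights (m), `signal` the range-uncorrected returns,
    `beta_mol`/`alpha_mol` the molecular backscatter/extinction."""
    estimates = []
    tau, prev_z = 0.0, 0.0
    for z, p, beta, alpha in zip(ranges, signal, beta_mol, alpha_mol):
        tau += alpha * (z - prev_z)   # rectangle-rule optical depth
        prev_z = z
        t2 = math.exp(-2.0 * tau)     # two-way molecular transmission
        estimates.append(p * z * z / (beta * t2))
    return sum(estimates) / len(estimates)
```

Averaging the per-gate estimates mimics the idea of fitting the measured signal to the expected molecular return over a whole cloud-free night.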

  8. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic content is clearly visualized, ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images have shortcomings: they depend strongly on lighting conditions, and the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. Its advantages are a high data-acquisition rate, independence from lighting conditions, and the direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground-feature classification of massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral imagery, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. This research therefore acquires visible-light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increments. Finally, thresholds on height and colour information are applied for classification.
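The final thresholding step might look like the following sketch. The class names, the NDVI and height thresholds, and the flat ground reference are all illustrative assumptions, not values from the paper:

```python
def classify_point(z, red, nir, ground_z=0.0, veg_ndvi=0.3, low_h=0.5):
    """Classify a multispectral 3D point by height above an assumed
    ground level and by NDVI computed from red/near-infrared values.
    All thresholds and class names are illustrative."""
    ndvi = (nir - red) / float(nir + red + 1e-9)
    height = z - ground_z
    if height < low_h:
        return "grass" if ndvi > veg_ndvi else "ground"
    return "tree" if ndvi > veg_ndvi else "building"
```

The same rule applied per point yields a labelled point cloud; a real pipeline would estimate the ground surface locally instead of assuming it flat.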

  9. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole-genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  10. An Investigation into Specifying Service Level Agreements for Provisioning Cloud Computing Services

    DTIC Science & Technology

    2012-12-01

[Excerpt fragments] IT: Information Technology; KPI: Key Performance Indicator. "How will the service delivery be measured?" "Key Performance Indicators (KPIs): describe the KPIs and the responsible party for producing the KPIs." Service-level objectives (SLOs) are evaluated according to measurable KPIs; automatic SLA protection enables further ...

  11. A Study of Global Cirrus Cloud Morphology with AIRS Cloud-clear Radiances (CCRs)

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Gong, Jie

    2012-01-01

Version 6 (V6) AIRS cloud-clear radiances (CCRs) are used to derive the cloud-induced radiance (Tcir = Tb - CCR) at infrared frequencies whose weighting functions peak in the middle troposphere. The significantly improved V6 CCR product allows a more accurate estimate of the expected clear-sky radiance, as if clouds were absent. Where strong cloud scattering is present, the CCR becomes unreliable, which is reflected in its estimated uncertainty, and interpolation is employed to replace the CCR value. We find that Tcir derived from this CCR method is much better than that from other methods and detects more clouds in the upper and lower troposphere, as well as in the polar regions where cloud detection is particularly challenging. The cloud morphology derived from the V6 test month, as well as some artifacts, will be shown.
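The quantities involved (Tcir = Tb - CCR, with unreliable CCR values replaced by interpolation) can be sketched as follows. The uncertainty threshold and the linear interpolation scheme are illustrative assumptions, not the AIRS product's actual rules:

```python
def cloud_induced_radiance(tb, ccr, ccr_err, max_err=2.0):
    """Tcir = Tb - CCR along a scan. Where the CCR uncertainty
    exceeds `max_err` (strong cloud scattering), replace CCR by
    linear interpolation between the nearest reliable neighbours."""
    n = len(ccr)
    good = [i for i in range(n) if ccr_err[i] <= max_err]
    filled = list(ccr)
    for i in range(n):
        if ccr_err[i] > max_err:
            left = max((j for j in good if j < i), default=None)
            right = min((j for j in good if j > i), default=None)
            if left is not None and right is not None:
                w = (i - left) / (right - left)
                filled[i] = ccr[left] * (1 - w) + ccr[right] * w
            elif left is not None:
                filled[i] = ccr[left]
            elif right is not None:
                filled[i] = ccr[right]
    return [t - c for t, c in zip(tb, filled)]
```

Strongly negative Tcir then indicates cloud depressing the measured brightness temperature below the clear-sky expectation.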

  12. Very high cloud detection in more than two decades of HIRS data

    NASA Astrophysics Data System (ADS)

    Kolat, Utkan; Menzel, W. Paul; Olson, Erik; Frey, Richard

    2013-04-01

This paper reports on the use of High-resolution Infrared Radiation Sounder (HIRS) measurements to infer the presence of upper tropospheric and lower stratospheric (UT/LS) clouds. UT/LS cloud detection is based on the fact that, when viewing an opaque UT/LS cloud that fills the sensor field of view, positive lapse rates above the tropopause cause a more absorbing CO2 or H2O-sensitive spectral band to measure a brightness temperature warmer than that of a less absorbing or nearly transparent infrared window spectral band. The HIRS sensor has flown on 16 polar-orbiting satellites from TIROS-N through NOAA-19 and Metop-A and -B, forming the only 30-year record that includes H2O and CO2-sensitive spectral bands enabling the detection of these UT/LS clouds. Comparison with collocated Cloud-Aerosol Lidar with Orthogonal Polarization data reveals that 97% of the HIRS UT/LS cloud determinations are within 2.5 km of the tropopause (defined as the coldest level in the National Centers for Environmental Prediction Global Data Assimilation System); more clouds are found above the tropopause than below. From NOAA-14 data spanning 1995 through 2005, we find indications of UT/LS clouds in 0.7% of the observations from 60N to 60S using CO2 absorption bands; however, in the region of the Inter-Tropical Convergence Zone (ITCZ), this increases to 1.7%. During El Niño years, UT/LS clouds shift eastward out of their normal location in the western Pacific region. Monthly trends from 1987 through 2011 using data from NOAA-10 onwards show decreases in UT/LS cloud detection in the region of the ITCZ from 1987 until 1996, increases until 2001, and decreases thereafter.
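The detection principle reduces to a single brightness-temperature comparison between an absorbing band and a window band; a sketch, with an assumed noise margin (the margin value is illustrative, not from the paper):

```python
def is_utls_cloud(bt_co2, bt_window, margin=0.5):
    """Above the tropopause the lapse rate reverses, so an opaque
    cloud at or above the tropopause makes the more absorbing CO2 or
    H2O band WARMER than the nearly transparent window band; clear
    or low-cloud scenes show the opposite ordering. `margin` (K) is
    an illustrative guard against instrument noise."""
    return bt_co2 - bt_window > margin
```

For a clear or low-cloud scene the window band sees the warm surface, so the test correctly returns False.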

  13. External Influences on Modeled and Observed Cloud Trends

    NASA Technical Reports Server (NTRS)

    Marvel, Kate; Zelinka, Mark; Klein, Stephen A.; Bonfils, Celine; Caldwell, Peter; Doutriaux, Charles; Santer, Benjamin D.; Taylor, Karl E.

    2015-01-01

    Understanding the cloud response to external forcing is a major challenge for climate science. This crucial goal is complicated by intermodel differences in simulating present and future cloud cover and by observational uncertainty. This is the first formal detection and attribution study of cloud changes over the satellite era. Presented herein are CMIP5 (Coupled Model Intercomparison Project - Phase 5) model-derived fingerprints of externally forced changes to three cloud properties: the latitudes at which the zonally averaged total cloud fraction (CLT) is maximized or minimized, the zonal average CLT at these latitudes, and the height of high clouds at these latitudes. By considering simultaneous changes in all three properties, the authors define a coherent multivariate fingerprint of cloud response to external forcing and use models from phase 5 of CMIP (CMIP5) to calculate the average time to detect these changes. It is found that given perfect satellite cloud observations beginning in 1983, the models indicate that a detectable multivariate signal should have already emerged. A search is then made for signals of external forcing in two observational datasets: ISCCP (International Satellite Cloud Climatology Project) and PATMOS-x (Advanced Very High Resolution Radiometer (AVHRR) Pathfinder Atmospheres - Extended). The datasets are both found to show a poleward migration of the zonal CLT pattern that is incompatible with forced CMIP5 models. Nevertheless, a detectable multivariate signal is predicted by models over the PATMOS-x time period and is indeed present in the dataset. Despite persistent observational uncertainties, these results present a strong case for continued efforts to improve these existing satellite observations, in addition to planning for new missions.

  14. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    PubMed

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

An improved cloud-detection method combining Kmeans clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the Kmeans method. The first class includes clouds, smoke and snow; the second includes vegetation, water and land. A multi-spectral threshold detection is then applied to eliminate interference such as smoke and snow from the first class. The method was tested with MODIS data acquired at different times over different underlying surfaces. Visual inspection of the results showed that the algorithm can effectively detect even small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire-detection approach.
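The two-stage scheme (a Kmeans split into bright and dark classes, then spectral thresholds to remove snow and smoke) can be sketched as follows. The thresholds, the two-band pixel representation, and the choice to cluster on the visible band alone are illustrative assumptions:

```python
def kmeans2(pixels, iters=20):
    """Minimal k-means with k=2 over feature vectors, seeded with the
    extremes of the first feature for determinism; returns labels."""
    lo = min(pixels, key=lambda p: p[0])
    hi = max(pixels, key=lambda p: p[0])
    centers = [list(lo), list(hi)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        for i, p in enumerate(pixels):
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            labels[i] = d.index(min(d))
        for k in (0, 1):
            members = [p for p, l in zip(pixels, labels) if l == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels

def detect_cloud(pixels, vis_thresh=0.6, bt_thresh=265.0):
    """pixels: (visible reflectance, brightness temperature in K).
    Stage 1: k-means on the visible band separates the bright class
    (cloud/snow/smoke) from the dark class (vegetation/water/land).
    Stage 2: keep only bright pixels that are also cold, discarding
    warm interference such as smoke (illustrative thresholds)."""
    labels = kmeans2([(p[0],) for p in pixels])
    mean0 = sum(p[0] for p, l in zip(pixels, labels) if l == 0) / max(1, labels.count(0))
    mean1 = sum(p[0] for p, l in zip(pixels, labels) if l == 1) / max(1, labels.count(1))
    bright = 0 if mean0 > mean1 else 1
    return [l == bright and p[0] > vis_thresh and p[1] < bt_thresh
            for p, l in zip(pixels, labels)]
```

A warm, bright smoke pixel survives stage 1 but is removed by the thermal threshold in stage 2.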

  15. NASA Wrangler: Automated Cloud-Based Data Assembly in the RECOVER Wildfire Decision Support System

    NASA Technical Reports Server (NTRS)

    Schnase, John; Carroll, Mark; Gill, Roger; Wooten, Margaret; Weber, Keith; Blair, Kindra; May, Jeffrey; Toombs, William

    2017-01-01

NASA Wrangler is a loosely coupled, event-driven, highly parallel data aggregation service designed to take advantage of the elastic resource capabilities of cloud computing. Wrangler automatically collects Earth observational data, climate model outputs, derived remote sensing data products, and historic biophysical data for pre-, active-, and post-wildfire decision making. It is a core service of the RECOVER decision support system, which is providing rapid-response GIS analytic capabilities to state and local government agencies. Wrangler reduces to minutes the time needed to assemble and deliver crucial wildfire-related data.

  16. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

An image-guided surgical navigation system requires an improvement in patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on the design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers, on the basis of an optical tracking system, for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. Automatic localization is performed by registering a point cloud sampled from the three-dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of the fiducial markers found in image space. A head phantom was constructed to evaluate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experiments demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The average target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the position relative to the tracking system and of patient movement during the operation.
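The core operation, aligning model fiducials with the fiducials found in image space, rests on rigid point-set registration. Below is a dependency-free sketch of Horn's closed-form quaternion method for known correspondences, with the dominant eigenvector of the 4x4 profile matrix found by shifted power iteration; this is an illustrative stand-in, not the paper's implementation (which also solves the correspondence search against the pedestal surface):

```python
import math

def rigid_register(model, measured, iters=300):
    """Estimate the rigid transform (R, t) with measured = R*model + t,
    given corresponding 3D points. Horn's quaternion method; the shift
    (a Gershgorin bound) makes power iteration converge to the
    maximum-eigenvalue eigenvector."""
    n = len(model)
    ca = [sum(p[i] for p in model) / n for i in range(3)]
    cb = [sum(p[i] for p in measured) / n for i in range(3)]
    a = [[p[i] - ca[i] for i in range(3)] for p in model]
    b = [[p[i] - cb[i] for i in range(3)] for p in measured]
    S = [[sum(a[k][i] * b[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = [[Sxx + Syy + Szz, Syz - Szy, Szx - Sxz, Sxy - Syx],
         [Syz - Szy, Sxx - Syy - Szz, Sxy + Syx, Szx + Sxz],
         [Szx - Sxz, Sxy + Syx, -Sxx + Syy - Szz, Syz + Szy],
         [Sxy - Syx, Szx + Sxz, Syz + Szy, -Sxx - Syy + Szz]]
    shift = max(sum(abs(v) for v in row) for row in N)
    q = [1.0, 0.1, 0.1, 0.1]              # generic start vector
    for _ in range(iters):
        q = [sum(N[i][j] * q[j] for j in range(4)) + shift * q[i]
             for i in range(4)]
        norm = math.sqrt(sum(v * v for v in q))
        q = [v / norm for v in q]
    w, x, y, z = q                        # unit quaternion -> rotation
    R = [[1 - 2*(y*y + z*z), 2*(x*y - w*z), 2*(x*z + w*y)],
         [2*(x*y + w*z), 1 - 2*(x*x + z*z), 2*(y*z - w*x)],
         [2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)]]
    t = [cb[i] - sum(R[i][j] * ca[j] for j in range(3)) for i in range(3)]
    return R, t
```

Applying the recovered (R, t) to any model point then gives its predicted position in image space, which is what a target registration error measures.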

  17. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    PubMed

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

The current development of cloud computing is completely changing the paradigm of knowledge extraction from huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big-data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for the automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) is created for low computational burden, the so-called weighted fast compression distance, which provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform was classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases exceeding the accuracy of majority-class classification. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge on patient diagnosis.
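The weighted fast compression distance itself is not specified in the abstract, but the family of compression-based similarity measures it improves on can be illustrated with the classic normalized compression distance (NCD), using only the standard library. The nearest-prototype classifier and the byte-string signals are illustrative assumptions:

```python
import zlib

def _c(data: bytes) -> int:
    """Compressed length, the computable stand-in for Kolmogorov
    complexity that compression-based similarity measures rely on."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when y adds little new
    information to x (similar signals compress well together)."""
    cx, cy, cxy = _c(x), _c(y), _c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(signal: bytes, prototypes: dict) -> str:
    """Nearest-prototype classification by compression distance."""
    return min(prototypes, key=lambda k: ncd(signal, prototypes[k]))
```

The appeal for minimal-preprocessing pipelines is that no features are extracted at all; the compressor does the modelling.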

  18. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  19. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

Geographically accurate scene models have enormous potential beyond simple visualization with regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger, more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. 
Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
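The occupied/free/unsampled partition can be sketched with a simple ray-sampling scheme: uniform samples along each camera-to-point ray rather than an exact voxel traversal. The grid origin at zero, unit voxel size, and step count are illustrative assumptions:

```python
def classify_voxels(points, cameras, grid_size, voxel=1.0, steps=200):
    """Label every voxel in an axis-aligned grid as occupied (holds a
    reconstructed point), free (a camera-to-point ray passed through
    it), or unsampled (neither) -- unsampled voxels mark the voids."""
    def vox(p):
        return tuple(int(c // voxel) for c in p)

    occupied = {vox(p) for p in points}
    free = set()
    for cam in cameras:
        for p in points:
            # sample along the line of sight; skip the point's own voxel
            for s in range(1, steps):
                f = s / steps
                q = tuple(cam[i] + f * (p[i] - cam[i]) for i in range(3))
                if vox(q) != vox(p):
                    free.add(vox(q))
    labels = {}
    for i in range(grid_size[0]):
        for j in range(grid_size[1]):
            for k in range(grid_size[2]):
                v = (i, j, k)
                labels[v] = ("occupied" if v in occupied
                             else "free" if v in free
                             else "unsampled")
    return labels
```

A production version would use an exact DDA grid traversal instead of fixed sampling, but the occupied/free/unsampled logic is the same.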

  20. Development of Cloud-Based UAV Monitoring and Management System

    PubMed Central

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-01-01

Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation. PMID:27854267
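A standard pre-check for the kind of collision detection described is the closest point of approach between two straight-line, constant-velocity flight paths. The separation and horizon parameters below are illustrative assumptions, not values from the paper:

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Time (clamped to the future) and distance of closest approach
    for two vehicles at positions p1/p2 with velocities v1/v2."""
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dv2 = sum(v * v for v in dv)
    t = 0.0 if dv2 == 0 else max(0.0, -sum(p * v for p, v in zip(dp, dv)) / dv2)
    d = [dp[i] + t * dv[i] for i in range(3)]
    return t, math.sqrt(sum(x * x for x in d))

def collision_risk(p1, v1, p2, v2, min_sep=5.0, horizon=60.0):
    """Flag a pair whose predicted separation drops below `min_sep`
    metres within the next `horizon` seconds (illustrative limits)."""
    t, dist = closest_approach(p1, v1, p2, v2)
    return t <= horizon and dist < min_sep
```

A flagged pair would then trigger the path adjustment and user alert that the system performs.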

  1. Development of Cloud-Based UAV Monitoring and Management System.

    PubMed

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-11-15

Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation.

  2. Open Pit Mine 3d Mapping by Tls and Digital Photogrammetry: 3d Model Update Thanks to a Slam Based Approach

    NASA Astrophysics Data System (ADS)

    Vassena, G.; Clerici, A.

    2018-05-01

The state of the art of 3D surveying technologies, correctly applied, makes it possible to obtain 3D coloured models of large open-pit mines using different technologies such as terrestrial laser scanning (TLS) with images, combined with UAV-based digital photogrammetry. GNSS and/or total stations are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open-pit mine located in Botticino, a famous marble-extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds, combined with RGB images and digital photogrammetry from a UAV, were used to map a large part of the quarry. Through rigorous and well-known procedures, a 3D point cloud and a mesh model were obtained using a simple and rigorous approach. After describing the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. To accomplish this task, a SLAM-based approach is described, using an innovative instrument capable of running an automatic localization process and real-time, on-the-field change-detection analysis.

  3. Faster, efficient and secure collection of research images: the utilization of cloud technology to expand the OMI-DB

    NASA Astrophysics Data System (ADS)

    Patel, M. N.; Young, K.; Halling-Brown, M. D.

    2018-03-01

The demand for medical images for research is ever increasing, owing to the rapid rise of novel machine learning approaches for early detection and diagnosis. The OPTIMAM Medical Image Database (OMI-DB)1,2 was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and expert-determined ground truths. Since the inception of the database in early 2011, the volume of images and associated data collected has increased dramatically, owing to automation of the collection pipeline and the inclusion of new sites. Currently, these data are stored at each respective collection site and synced periodically to a central store. This leads to a large data footprint at each site, requiring expensive physical on-site storage. Here, we propose an update to the OMI-DB collection system whereby all data are automatically transferred to the cloud on collection. This change in the data-collection paradigm reduces the reliance on physical servers at each site, allows greater scope for future expansion, removes the need for dedicated backups, and improves security. Moreover, with the number of applications to access the data increasing rapidly as the dataset matures, cloud technology facilitates faster sharing of data and better auditing of data access. Such updates, although they may sound trivial, require substantial modification of the existing pipeline to ensure data integrity and security compliance. Here, we describe the extensions to the OMI-DB collection pipeline and discuss the relative merits of the new system.

  4. Automatic multimodal detection for long-term seizure documentation in epilepsy.

    PubMed

    Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C

    2017-08-01

    This study investigated the sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and its applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units, including 494 seizures, were used to assess detection performance. EMG data were extracted by bandpass filtering of the EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic seizures (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity was 94% in temporal lobe epilepsy (TLE) patients and 74% in extratemporal lobe epilepsy (XTLE) patients. Overall detection sensitivity was 86%. The average false detection rate was 12.8 false detections per 24h (FD/24h) for TLE and 22 FD/24h for XTLE patients. Utilization of 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
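
    The EMG-from-EEG step above can be sketched as a simple bandpass operation. The band edges and the FFT-domain filter below are illustrative assumptions (the abstract does not specify the filter design), and `extract_emg` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def extract_emg(eeg, fs, band=(40.0, 120.0)):
    """FFT-domain bandpass to isolate EMG-range activity from an EEG trace.
    The band edges are illustrative assumptions, not the paper's values."""
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0  # zero out-of-band bins
    return np.fft.irfft(spec, n=len(eeg))

fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic trace: a 10 Hz "EEG" rhythm plus weaker 60 Hz "EMG-like" activity.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
emg = extract_emg(eeg, fs)
```

On this synthetic trace, only the 60 Hz component survives the filter, which is the behaviour the multimodal algorithm relies on to obtain an EMG channel without extra electrodes.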

  5. Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds

    NASA Astrophysics Data System (ADS)

    Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.

    2008-05-01

    CALIPSO and CloudSat from the A-Train provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort of merging CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on the cloud top height and optical thickness, as used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not account for differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed, as will the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products.

  6. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, play an important role in providing guidance and information to moving cars. Mobile laser scanning (MLS) systems are an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences from their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps, using a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so that complete road markings are obtained. We use a point-cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road marking extraction from MLS data.
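
    The seed-based, intensity-driven region growing described above can be illustrated on a toy raster. This is a minimal 2D sketch under assumed intensities and an assumed threshold, not the paper's 3D point-cloud implementation:

```python
import numpy as np
from collections import deque

def grow_region(intensity, seeds, low_thresh):
    """Flood-fill from high-intensity seed pixels, keeping 4-connected
    neighbours whose intensity stays above low_thresh. An illustrative
    stand-in for the paper's intensity-based point-cloud region growing."""
    mask = np.zeros(intensity.shape, dtype=bool)
    q = deque(seeds)
    for s in seeds:
        mask[s] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < intensity.shape[0] and 0 <= nc < intensity.shape[1]
                    and not mask[nr, nc] and intensity[nr, nc] >= low_thresh):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy raster: a bright "marking" stripe (intensity 0.9) on dark asphalt (0.1).
img = np.full((5, 7), 0.1)
img[2, 1:6] = 0.9
mask = grow_region(img, [(2, 3)], low_thresh=0.5)
```

Starting from a single seed detected by the histogram step, the grow recovers the whole stripe while leaving the asphalt untouched.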

  7. Evaluation of Multilayer Cloud Detection Using a MODIS CO2-Slicing Algorithm With CALIPSO-CloudSat Measurements

    NASA Technical Reports Server (NTRS)

    Viudez-Mora, Antonio; Kato, Seiji

    2015-01-01

    This work evaluates the multilayer cloud (MCF) algorithm based on CO2-slicing techniques against CALIPSO-CloudSat (CLCS) measurements. The evaluation showed that the MCF underestimates the presence of multilayered clouds compared with CLCS and is restricted to cloud emissivities below 0.8 and cloud optical depths no larger than 0.3.

  8. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan

    This software provides a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The user specifies an input dataset containing parcels and detected solar panels; the code then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.

  9. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require low sensor size, weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method exploits the slice-like nature of the Velodyne point cloud and first decomposes the point cloud into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The method was verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
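
    The layer-wise circle extraction can be illustrated with a fixed-radius Hough vote, a simplified stand-in for the Generalized Hough Transform the paper applies to each 2D layer. The grid resolution, extent, and synthetic cylinder cross-section below are assumptions for illustration:

```python
import numpy as np

def hough_circle_center(points, radius, grid_res=0.1, extent=5.0):
    """Vote for circle centres at a known radius: each point casts votes on
    the circle of candidate centres around it; the accumulator maximum is the
    estimated centre. A fixed-radius simplification of the Generalized Hough
    Transform used on the 2D point-cloud layers."""
    n = int(2 * extent / grid_res)
    acc = np.zeros((n, n))
    angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for x, y in points:
        cx = x - radius * np.cos(angles)
        cy = y - radius * np.sin(angles)
        ix = ((cx + extent) / grid_res).astype(int)
        iy = ((cy + extent) / grid_res).astype(int)
        ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        np.add.at(acc, (ix[ok], iy[ok]), 1)
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return i * grid_res - extent, j * grid_res - extent

# One synthetic layer: a cylinder cross-section of radius 0.5 m at (1.03, -0.77).
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.c_[1.03 + 0.5 * np.cos(theta), -0.77 + 0.5 * np.sin(theta)]
cx, cy = hough_circle_center(pts, radius=0.5)
```

The recovered centre lands within one accumulator cell of the true centre; repeating this per layer and stacking the circles is what yields the vertical cylindrical features.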

  10. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (a combination of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, benchmark testing data provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
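
    Step (2), local principal component analysis, can be sketched as follows: the dominant direction of a road-point neighbourhood is the covariance eigenvector with the largest eigenvalue. The toy strip of points is invented for illustration:

```python
import numpy as np

def principal_direction(points):
    """Dominant direction of a 2D point neighbourhood via PCA: the unit
    eigenvector of the covariance matrix with the largest eigenvalue."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]

# Toy "road point" strip elongated along x with slight noise in y.
rng = np.random.default_rng(0)
pts = np.c_[np.linspace(0, 10, 50), rng.normal(0, 0.05, 50)]
d = principal_direction(pts)
```

For an elongated strip the recovered unit vector is essentially the x-axis; fitting a least-squares line along this direction then gives a centerline primitive.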

  11. Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.

    2018-05-01

    Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources providing the digital representation of the Earth's surface and related natural, cultural, and man-made objects of urban areas in remote sensing are digital surface models (DSMs). DSMs can be obtained by light detection and ranging (LIDAR), SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the 3D object shapes, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from the noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.

  12. Photogrammetric retrieval of volcanic ash cloud top height from SEVIRI and MODIS

    NASA Astrophysics Data System (ADS)

    Zakšek, Klemen; Hort, Matthias; Zaletelj, Janez; Langmann, Bärbel

    2013-04-01

    Even when erupting in remote areas, volcanoes can have a significant impact on modern society due to volcanic ash dispersion in the atmosphere. The ash does not affect merely air traffic: its transport in the atmosphere and its deposition on land and in the oceans may also significantly influence the climate through modifications of atmospheric CO2. The emphasis of this contribution is the retrieval of volcanic ash cloud top height (ACTH). ACTH is important information, especially for air traffic, but also for predicting ash transport and estimating the mass flux of the ejected material. ACTH is usually estimated from ground measurements, pilot reports, or satellite remote sensing. However, ground-based instruments are often not available at remote volcanoes, and pilot reports are a matter of chance. ACTH can be monitored globally using satellite remote sensing. The most often used method compares the brightness temperature of the cloud with the atmospheric temperature profile. Because of the uncertainties of this method (unknown emissivity of the ash cloud, tropopause, etc.), we propose a photogrammetric method based on the parallax between data retrieved from geostationary (SEVIRI) and polar-orbiting satellites (MODIS). The parallax is estimated using automatic image matching in three-level image pyramids. The procedure works well if the data from both satellites are retrieved nearly simultaneously. Since MODIS does not retrieve data at exactly the same time as SEVIRI, we compensate for advection by using two sequential SEVIRI images (one before and one after the MODIS retrieval) and interpolating the cloud position from the SEVIRI data to the time of the MODIS retrieval. ACTH is then estimated by intersection of the corresponding lines of sight from MODIS and the interpolated SEVIRI data. The proposed method was tested using MODIS band 1 and the SEVIRI HRV band for the case of the Eyjafjallajökull eruption in April 2010. The parallax between MODIS and SEVIRI data can reach over 30 km, which implies an ACTH of more than 12 km. The accuracy of the ACTH was estimated at 0.6 km. The limitation of this procedure is that automatic image matching has difficulties if the ash cloud is not opaque.
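
    The quoted numbers are consistent with simple parallax geometry: a cloud at height h observed at viewing zenith angle θ is displaced on the ground by roughly h·tan(θ). The sketch below uses this flat-Earth, first-order relation (the paper intersects the actual lines of sight) and an assumed SEVIRI viewing zenith angle of about 68° for Iceland:

```python
import math

def height_from_parallax(parallax_km, view_zenith_deg):
    """First-order cloud-top height from apparent ground displacement:
    displacement ~ h * tan(zenith angle). A flat-Earth simplification of
    the paper's line-of-sight intersection."""
    return parallax_km / math.tan(math.radians(view_zenith_deg))

# An assumed SEVIRI zenith angle of ~68 deg over Iceland: a 30 km parallax
# then corresponds to roughly 12 km cloud-top height, matching the abstract.
h = height_from_parallax(30.0, 68.0)
```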

  13. Results from the Two-Year Infrared Cloud Imager Deployment at ARM's NSA Observatory in Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Shaw, J. A.; Nugent, P. W.

    2016-12-01

    Ground-based longwave-infrared (LWIR) cloud imaging can provide continuous cloud measurements in the Arctic. This is of particular importance during the Arctic winter, when visible-wavelength cloud imaging systems cannot operate. The method uses a thermal infrared camera to observe clouds and produce measurements of cloud amount and cloud optical depth. The Montana State University Optical Remote Sensor Laboratory deployed an infrared cloud imager (ICI) at the Atmospheric Radiation Measurement (ARM) North Slope of Alaska site at Barrow, AK from July 2012 through July 2014. This study was used both to understand the long-term operation of an ICI in the Arctic and to study the consistency of the ICI data products in relation to co-located active and passive sensors. The ICI was found to have a high correlation (> 0.92) with collocated cloud instruments and to produce an unbiased data product. However, the ICI also detects thin clouds that are not detected by most operational cloud sensors. Comparisons with high-sensitivity actively sensed cloud products confirm the existence of these thin clouds. Infrared cloud imaging systems can serve a critical role in developing our understanding of cloud cover in the Arctic by providing continuous year-round measurements of clouds at sites of interest.

  14. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with k-Nearest Neighbors (k-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia Database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm runs 2.5 times faster than the CPU-only detection algorithm.
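
    The beat classification step can be sketched as a plain k-NN vote. This is a CPU-side illustration of the idea only; the paper parallelizes the same distance computation with CUDA on virtualized GPUs, and the toy 2D features below stand in for real ECG beat features:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance). The distance computation is the part that the
    paper offloads to GPUs."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature vectors: two beat classes separated in a 2D feature space.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])
y = np.array([0, 0, 0, 1, 1, 1])
label = knn_classify(X, y, np.array([1.8, 2.0]))
```

Because each query's distances to all training beats are independent, the loop over queries and the per-query distance vector both map naturally onto GPU threads.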

  15. A simplified Suomi NPP VIIRS dust detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yikun; Sun, Lin; Zhu, Jinshan; Wei, Jing; Su, Qinghua; Sun, Wenxiao; Liu, Fangwei; Shu, Meiyan

    2017-11-01

    Due to the complex characteristics of dust and the sparsity of ground-based monitoring stations, dust monitoring faces severe challenges, especially in dust storm-prone areas. Aiming to construct a high-precision dust storm detection model, a pixel database consisting of dust over a variety of typical feature types, such as cloud, vegetation, Gobi desert and ice/snow, was constructed, and the distributions of reflectance and brightness temperature (BT) were analysed. Based on this analysis, a new Simplified Dust Detection Algorithm (SDDA) for the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP VIIRS) is proposed. NPP VIIRS images covering northern China and Mongolia, regions that experience serious dust storms, were selected for the dust detection experiments. The monitoring results were compared with true colour composite images, showing that most dust areas can be accurately detected, except for fragmented thin dust over bright surfaces. Dust ground-based measurements obtained from the Meteorological Information Comprehensive Analysis and Process System (MICAPS) and Ozone Monitoring Instrument Aerosol Index (OMI AI) products were selected for comparison purposes. The results showed that the dust monitoring results agree well in spatial distribution with the OMI AI dust products and the MICAPS ground-measured data, with an average accuracy of 83.10%. The SDDA is relatively robust and enables automatic monitoring of dust storms.
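
    Threshold tests on brightness temperatures of the kind the SDDA builds on can be illustrated with the classic split-window difference: silicate dust tends to drive BT(11 µm) - BT(12 µm) negative, the opposite of most clouds. The threshold and toy scene below are generic assumptions for illustration, not the SDDA's actual values:

```python
import numpy as np

def dust_mask(bt11, bt12, btd_thresh=0.0):
    """Split-window brightness-temperature-difference test: flag pixels
    where BT(11um) - BT(12um) falls below the threshold. A generic dust
    test; the SDDA combines several such reflectance/BT criteria."""
    return (bt11 - bt12) < btd_thresh

# Toy 2x2 scene (kelvin): dust pixels on the diagonal, cloud/surface elsewhere.
bt11 = np.array([[285.0, 270.0], [290.0, 250.0]])
bt12 = np.array([[287.5, 268.0], [289.0, 252.5]])
mask = dust_mask(bt11, bt12)
```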

  16. Review of automatic detection of pig behaviours by using image analysis

    NASA Astrophysics Data System (ADS)

    Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao

    2017-06-01

    Automatic detection of the lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation effort required of staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes the progress of pig behaviour detection based on image analysis, and advances in image segmentation of the pig body, segmentation of adhering pigs, and extraction of pig behaviour characteristic parameters. Challenges in achieving automatic detection of pig behaviours are summarized.

  17. Green Bank Telescope Detection of HI Clouds in the Fermi Bubble Wind

    NASA Astrophysics Data System (ADS)

    Lockman, Felix; Di Teodoro, Enrico M.; McClure-Griffiths, Naomi M.

    2018-01-01

    We used the Robert C. Byrd Green Bank Telescope to map HI 21cm emission in two large regions around the Galactic Center in a search for HI clouds that might be entrained in the nuclear wind that created the Fermi bubbles. In a ~160 square degree region at |b|>4 deg and |long|<10 deg we detect 106 HI clouds that have large non-circular velocities consistent with their acceleration by the nuclear wind. Rapidly moving clouds are found as far as 1.5 kpc from the center; there are no detectable asymmetries in the cloud populations above and below the Galactic Center. The cloud kinematics are modeled as a population with an outflow velocity of 330 km/s that fills a cone with an opening angle of ~140 degrees. The total mass in the clouds is ~10^6 solar masses, and we estimate cloud lifetimes to be between 2 and 8 Myr, implying a cold-gas mass-loss rate of about 0.1 solar masses per year into the nuclear wind. The Green Bank Telescope is a facility of the National Science Foundation, operated under a cooperative agreement by Associated Universities, Inc.
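
    The quoted mass-loss rate follows from dividing the total cloud mass by the cloud lifetime; a quick order-of-magnitude check using the abstract's own numbers:

```python
# Order-of-magnitude check: total HI cloud mass over cloud lifetime.
total_mass_msun = 1e6      # ~10^6 solar masses in the detected clouds
lifetime_yr = 8e6          # upper end of the 2-8 Myr lifetime estimate
rate = total_mass_msun / lifetime_yr
# ~0.125 solar masses per year with the upper-end lifetime,
# i.e. "about 0.1" as quoted; the 2 Myr lower end would give ~0.5.
```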

  18. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. Next, we design algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features needed to obtain stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  19. Using sky radiances measured by ground based AERONET Sun-Radiometers for cirrus cloud detection

    NASA Astrophysics Data System (ADS)

    Sinyuk, A.; Holben, B. N.; Eck, T. F.; Slutsker, I.; Lewis, J. R.

    2013-12-01

    Screening of cirrus clouds using observations of optical depth (OD) alone has proven to be a difficult task, mostly because some clouds have temporally and spatially stable OD. On the other hand, the sky radiance measurements, which in the AERONET protocol are taken throughout the day, may contain additional cloud information. In this work the potential of using sky radiances for cirrus cloud detection is investigated. The detection is based on differences in the angular shape of the sky radiances due to cirrus clouds and aerosol (see Figure). The range of scattering angles from 3 to 6 degrees was selected for two primary reasons: high sensitivity to the presence of cirrus clouds, and close proximity to the Sun. The angular shape of the sky radiances was parametrized by its curvature, defined as a combination of the first and second derivatives with respect to scattering angle. We demonstrate that the slope of the logarithm of curvature versus the logarithm of scattering angle in this selected range is sensitive to cirrus cloud presence, and that restricting the slope to values below a threshold can be used for cirrus cloud screening. The threshold value of the slope was estimated using collocated AERONET data and MPLNET lidar measurements.
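
    The slope diagnostic can be sketched numerically. The curvature formula below is the standard plane-curve expression; the paper defines its own combination of first and second derivatives, so treat this as an assumed stand-in, fed with a synthetic power-law radiance:

```python
import numpy as np

def loglog_curvature_slope(angles, radiance):
    """Slope of log(curvature) vs log(scattering angle), with curvature
    taken as the standard plane-curve formula |y''| / (1 + y'^2)^1.5.
    The paper's exact curvature definition may differ."""
    d1 = np.gradient(radiance, angles)
    d2 = np.gradient(d1, angles)
    curv = np.abs(d2) / (1 + d1 ** 2) ** 1.5
    slope, _intercept = np.polyfit(np.log(angles), np.log(curv), 1)
    return slope

# Synthetic aureole radiance ~ theta^-2 over the 3-6 degree window.
ang = np.linspace(3.0, 6.0, 30)
slope = loglog_curvature_slope(ang, ang ** -2.0)
```

For a pure power law theta^-n the slope is close to -(n + 2); a cirrus-contaminated radiance changes this angular shape, which is what the threshold on the slope exploits.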

  20. Constructing a Merged Cloud-Precipitation Radar Dataset for Tropical Convective Clouds during the DYNAMO/AMIE Experiment at Addu Atoll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Zhe; McFarlane, Sally A.; Schumacher, Courtney

    2014-05-16

    To improve understanding of the convective processes key to Madden-Julian Oscillation (MJO) initiation, the Dynamics of the MJO (DYNAMO) and Atmospheric Radiation Measurement MJO Investigation Experiment (AMIE) campaigns collected four months of observations from three radars, the S-band Polarization Radar (S-Pol), the C-band Shared Mobile Atmospheric Research & Teaching Radar (SMART-R), and the Ka-band Zenith Radar (KAZR), on Addu Atoll in the tropical Indian Ocean. This study compares the measurements from S-Pol and SMART-R to those from the more sensitive KAZR in order to characterize the hydrometeor detection capabilities of the two scanning precipitation radars. Frequency comparisons for precipitating convective clouds and non-precipitating high clouds agree much better than those for non-precipitating low clouds for both scanning radars, due to ground clutter issues. On average, SMART-R underestimates convective and high cloud tops by 0.3 to 1.1 km, while S-Pol underestimates cloud tops by less than 0.4 km for these cloud types. S-Pol shows excellent dynamic range in detecting various types of clouds, and its data are therefore well suited for characterizing the evolution of 3D cloud structures, complementing the profiling KAZR measurements. For detecting non-precipitating low clouds and thin cirrus clouds, KAZR remains the most reliable instrument. However, KAZR is attenuated in heavy precipitation and underestimated cloud top height due to rainfall attenuation 4.3% of the time during DYNAMO/AMIE. An empirical method to correct the KAZR cloud top heights is described, and a merged radar dataset is produced to provide improved cloud boundary estimates, microphysics and radiative heating retrievals.

  1. A threshold-based cloud mask for the high-resolution visible channel of Meteosat Second Generation SEVIRI

    NASA Astrophysics Data System (ADS)

    Bley, S.; Deneke, H.

    2013-10-01

    A threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the Meteosat SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures that cannot be detected by the low-resolution channels. We find it advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding and the suitability of the HRV channel for cloud detection are investigated. The case studies cover different situations to demonstrate the behavior under various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test data set, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction. The HRV cloud mask targets small-scale convective clouds that are missed by the EUMETSAT cloud mask. The major limitation of the HRV cloud mask is the minimum cloud optical thickness (COT) that can be detected. This threshold COT was found to be about 0.8 over ocean and 2 over land, and depends strongly on the albedo of the underlying surface.
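
    The core of such a threshold test relative to a clear-sky composite is a per-pixel comparison. The offset value and toy reflectances below are invented for illustration; the paper's thresholds are regional and derived from the EUMETSAT low-resolution mask:

```python
import numpy as np

def hrv_cloud_mask(reflectance, clear_sky_composite, offset=0.1):
    """Flag a pixel as cloudy when its HRV reflectance exceeds the
    clear-sky composite by a regional offset. The offset here is an
    illustrative assumption, not the paper's tuned threshold."""
    return reflectance > clear_sky_composite + offset

# Toy 2x2 scene: darker ocean and brighter land in the clear-sky composite.
clear = np.array([[0.10, 0.12], [0.30, 0.08]])
refl = np.array([[0.35, 0.13], [0.33, 0.50]])
mask = hrv_cloud_mask(refl, clear)
```

Using the composite rather than a fixed global threshold is what lets the same test work over both dark ocean and bright land, at the cost of missing clouds thinner than the surface-dependent minimum COT.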

  2. Surface inspection system for carriage parts

    NASA Astrophysics Data System (ADS)

    Denkena, Berend; Acker, Wolfram

    2006-04-01

    Quality standards are very high in carriage manufacturing, because the visual impression of quality is highly relevant to the customer's purchase decision. On carriage parts, even very small dents can become visible on the varnished and polished surface through observed reflections. The industrial demand is to detect these form errors on the unvarnished part. In order to meet the requirements, a stripe projection system for automatic recognition of waviness and form errors is introduced1. It is based on a modified stripe projection method using a high-resolution line scan camera. Particular emphasis is put on achieving a short measuring time and a high depth resolution, aiming at reliable automatic recognition of dents and waviness of 10 μm on large curved surfaces of approximately 1 m width. The resulting point cloud needs to be filtered in order to detect dents, for which a spatial filtering technique is used. This works well on smoothly curved surfaces if the frequency parameters are well defined. On more complex parts like mudguards, the method is restricted by the fact that frequencies near the defined dent frequencies also occur within the surface itself. To allow analysis of complex parts, the system is currently being extended to include 3D CAD models in the inspection process. For smoothly curved surfaces, the measuring speed of the prototype is mainly limited by the amount of light produced by the stripe projector; for complex surfaces, it is limited by the time-consuming matching process. Currently, development focuses on improving the measuring speed.

  3. Supporting the Development and Adoption of Automatic Lameness Detection Systems in Dairy Cattle: Effect of System Cost and Performance on Potential Market Shares

    PubMed Central

    Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen

    2017-01-01

    Simple Summary Most prototypes of systems to automatically detect lameness in dairy cattle are still not available on the market. Estimating their potential adoption rate could support developers in defining development goals towards commercially viable and well-adopted systems. We simulated the potential market shares of such prototypes to assess the effect of altering the system cost and detection performance on the potential adoption rate. We found that system cost and lameness detection performance indeed substantially influence the potential adoption rate. For farmers to prefer automatic detection over current visual detection, the usefulness that farmers attach to a system with specific characteristics should be higher than that of visual detection. As such, we concluded that low system costs and high detection performance are required before automatic lameness detection systems become applicable in practice. Abstract Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage of missed lame cows and percentage of false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % fewer missed lame cows, €1.65 per % fewer false alerts, and €12.7 for lame leg indication. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate. PMID:28991188

  4. Detection and Retrieval of Multi-Layered Cloud Properties Using Satellite Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jian-Ping; Nguyen, Louis; Khaiyer, Mandana M.

    2005-01-01

    Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.
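    The BTD idea, flagging thin cirrus over low cloud where a large split-window brightness temperature difference coincides with a relatively warm scene, can be sketched as a per-pixel test. The 11/12 μm channel pairing and both threshold values below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def overlap_candidate(bt11, bt12, btd_thresh=2.0, bt11_warm=270.0):
    """Flag pixels where a large 11-12 um brightness temperature difference
    (typical of optically thin cirrus) coincides with a relatively warm 11 um
    brightness temperature, suggesting thin ice cloud over a lower water cloud.
    Both thresholds are illustrative, not the paper's tuned values."""
    bt11 = np.asarray(bt11, dtype=float)
    bt12 = np.asarray(bt12, dtype=float)
    return (bt11 - bt12 > btd_thresh) & (bt11 > bt11_warm)

# Pixel 1: strong split-window signal over a warm background -> flagged.
# Pixel 2: cold, small BTD -> not flagged.
mask = overlap_candidate([275.0, 250.0], [271.0, 249.5])
# → array([ True, False])
```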

  5. Detection and retrieval of multi-layered cloud properties using satellite data

    NASA Astrophysics Data System (ADS)

    Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jianping; Nguyen, Louis; Khaiyer, Mandana M.

    2005-10-01

    Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.

  6. Machine Learning for Knowledge Extraction from PHR Big Data.

    PubMed

    Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George

    2014-01-01

    Cloud computing, Internet of Things (IoT), and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured, and structured) patient data (health, social, and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data whose analysis with appropriate machine learning algorithms is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises the data preparation, model generation, and data analysis modules, and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
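    The map/reduce paradigm the engine relies on can be shown in miniature: a map step emits key-value pairs from each record, and a reduce step folds them into aggregates. The record structure and condition names below are invented for illustration; the actual engine runs the equivalent steps on Apache Hadoop.

```python
from functools import reduce
from itertools import chain

def mapper(record):
    """Map step: emit (condition, 1) pairs from one toy PHR record."""
    return [(cond, 1) for cond in record.get("conditions", [])]

def reducer(acc, pair):
    """Reduce step: fold emitted pairs into per-condition counts."""
    key, value = pair
    acc[key] = acc.get(key, 0) + value
    return acc

# Hypothetical PHR records (not real data)
records = [
    {"conditions": ["diabetes", "hypertension"]},
    {"conditions": ["diabetes"]},
]
counts = reduce(reducer, chain.from_iterable(map(mapper, records)), {})
# → {'diabetes': 2, 'hypertension': 1}
```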

  7. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
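    The multispectral estimation the authors apply to the camera's RGB responses is, at its core, a learned linear map from three sensor responses to a full spectrum. A minimal sketch with synthetic stand-ins (random matrices in place of measured sensor sensitivities and training skylight spectra):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 31                       # e.g. 400-700 nm sampled every 10 nm

# Stand-ins for measured quantities (illustrative, not real camera data):
S = rng.random((3, n_wl))       # RGB sensor spectral sensitivities
T = rng.random((n_wl, 200))     # training set of skylight spectra (columns)

R = S @ T                       # simulated camera responses to the training set

# Least-squares recovery operator: maps a 3-vector of responses to a spectrum
W = T @ np.linalg.pinv(R)

# Estimate the spectrum behind a new RGB response
s_true = rng.random(n_wl)
s_est = W @ (S @ s_true)        # 31-sample spectral estimate from 3 numbers
```

    With only three sensors the estimate lives in a three-dimensional subspace shaped by the training data, so accuracy depends on how smooth and training-like real skylight spectra are; the authors' earlier optimized multispectral system adds sensors precisely to enlarge that subspace.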

  8. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above the street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In an evaluation involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and different components with respect to their different functionalities.

  9. Analysis of geostationary satellite-derived cloud parameters associated with environments with high ice water content

    NASA Astrophysics Data System (ADS)

    de Laat, Adrianus; Defer, Eric; Delanoë, Julien; Dezitter, Fabien; Gounou, Amanda; Grandin, Alice; Guignard, Anthony; Fokke Meirink, Jan; Moisselin, Jean-Marc; Parol, Frédéric

    2017-04-01

    We present an evaluation of the ability of passive broadband geostationary satellite measurements to detect high ice water content (IWC > 1 g m-3) as part of the European High Altitude Ice Crystals (HAIC) project for detection of upper-atmospheric high IWC, which can be a hazard for aviation. We developed a high IWC mask based on measurements of cloud properties using the Cloud Physical Properties (CPP) algorithm applied to the geostationary Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Evaluation of the high IWC mask with satellite measurements of active remote sensors of cloud properties (CLOUDSAT/CALIPSO combined in the DARDAR (raDAR-liDAR) product) reveals that the high IWC mask is capable of detecting high IWC values > 1 g m-3 in the DARDAR profiles with a probability of detection of 60-80 %. The best CPP predictors of high IWC were the condensed water path, cloud optical thickness, cloud phase, and cloud top height. The evaluation of the high IWC mask against DARDAR provided indications that the MSG-CPP high IWC mask is more sensitive to cloud ice or cloud water in the upper part of the cloud, which is relevant for aviation purposes. Biases in the CPP results were also identified, in particular a solar zenith angle (SZA) dependence that reduces the performance of the high IWC mask for SZAs > 60°. Verification statistics show that for the detection of high IWC a trade-off has to be made between better detection of high IWC scenes and more false detections, i.e., scenes identified by the high IWC mask that do not contain IWC > 1 g m-3. However, the large majority of these detections still contain IWC values between 0.1 and 1 g m-3. 
    Comparison of the high IWC mask against results from the Rapidly Developing Thunderstorm (RDT) algorithm applied to the same geostationary SEVIRI data showed both similarities and differences: the RDT algorithm is very capable of detecting young/new convective cells and areas, whereas the high IWC mask appears better at detecting more mature and ageing convection as well as cirrus remnants. The lack of detailed understanding of what causes aviation hazards related to high IWC, as well as the lack of clearly defined user requirements, hampers further tuning of the high IWC mask. Future evaluation of the high IWC mask against field campaign data, as well as obtaining user feedback and requirements from the aviation industry, should provide more information on the performance of the MSG-CPP high IWC mask and contribute to improving its practical use.
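    Verifying a binary mask like the high IWC mask against reference profiles reduces to contingency-table scores such as the probability of detection quoted above and a false-alarm ratio. A minimal sketch (the example flag/observation values are invented, not DARDAR data):

```python
def verification_stats(flagged, observed):
    """Probability of detection (POD) and false-alarm ratio (FAR) for a
    binary mask against reference observations (e.g. whether a DARDAR
    profile actually contains IWC > 1 g m-3)."""
    hits = sum(f and o for f, o in zip(flagged, observed))
    misses = sum((not f) and o for f, o in zip(flagged, observed))
    false_alarms = sum(f and (not o) for f, o in zip(flagged, observed))
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return pod, far

# Toy example: 4 scenes; hits=2, misses=0, false_alarms=1
pod, far = verification_stats([True, True, False, True],
                              [True, False, False, True])
# → pod = 1.0, far ≈ 0.333
```

    The trade-off the abstract describes is visible here: loosening the mask's thresholds raises hits (POD) but also raises false alarms (FAR).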

  10. Global Measurements of Optically Thin Ice Clouds Using CALIOP

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Avery, M.; Tackett, J.

    2017-01-01

    Optically thin ice clouds have been shown to have a net warming effect on the globe but, because passive instruments are not sensitive to optically thin clouds, the occurrence frequency of this class of clouds is greatly underestimated in historical passive-sensor cloud climatologies. One major strength of CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization), onboard the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) spacecraft, is its ability to detect these thin clouds, thus filling an important missing piece in the historical data record. This poster examines the full mission of CALIPSO Level 2 data, focusing on those CALIOP retrievals identified as thin ice clouds according to the definition shown to the right. Using this definition, thin ice clouds are identified and counted globally and vertically for each season. By examining the spatial and seasonal distributions of these thin clouds we hope to gain a better understanding of these thin ice clouds and how their global distribution has changed over the mission. This poster showcases when and where CALIOP detects thin ice clouds and examines a case study of the eastern Pacific and the effects seen from the El Niño-Southern Oscillation (ENSO).

  11. The response of the Seasat and Magsat infrared horizon scanners to cold clouds

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Phenneger, M.

    1980-01-01

    Cold clouds over the Earth are shown to be the principal cause of pitch and roll measurement noise in flight data from the infrared horizon scanners onboard Seasat and Magsat. The observed effects of clouds on the fixed-threshold horizon detection logic of the Magsat scanner and on the variable-threshold detection logic of the Seasat scanner are discussed. National Oceanic and Atmospheric Administration (NOAA) Earth photographs marked with the scanner ground trace clearly confirm the relationship between measurement errors and Earth clouds. A one-to-one correspondence can be seen between excursions in the pitch and roll data and cloud crossings. The characteristics of the cloud-induced noise are discussed, and the response of the satellite control systems to the cloud errors is described. Changes to the horizon scanner designs that would reduce the effects of clouds are noted.

  12. The first observed cloud echoes and microphysical parameter retrievals by China's 94-GHz cloud radar

    NASA Astrophysics Data System (ADS)

    Wu, Juxiu; Wei, Ming; Hang, Xin; Zhou, Jie; Zhang, Peichang; Li, Nan

    2014-06-01

    By using the cloud echoes first successfully observed by China's indigenous 94-GHz SKY cloud radar, the macrostructure and microphysical properties of drizzling stratocumulus clouds in Anhui Province on 8 June 2013 are analyzed, and the detection capability of this cloud radar is discussed. The results are as follows. (1) The cloud radar is able to observe the time-varying macroscopic and microphysical parameters of clouds, and it can reveal the microscopic structure and small-scale changes of clouds. (2) The velocity spectral width of cloud droplets is small, but the spectral width of a cloud containing both cloud droplets and drizzle is large. When the spectral width is more than 0.4 m s-1, the radar reflectivity factor is larger (over -10 dBZ). (3) The radar's sensitivity is comparatively high, because the minimum radar reflectivity factor is about -35 dBZ in this experiment, which exceeds the threshold for detecting the linear depolarization ratio (LDR) of stratocumulus (commonly -11 to -14 dBZ; decreases with increasing turbulence). (4) After distinguishing cloud droplets from drizzle, cloud liquid water content and particle effective radius are retrieved. The liquid water content of drizzle is lower than that of cloud droplets at the same radar reflectivity factor.
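    The two thresholds quoted in point (2) above (spectral width above 0.4 m/s together with reflectivity above about -10 dBZ) could be combined into a crude droplet/drizzle discriminator, sketched below. This is only an illustration of how such a rule would look in code; the paper's actual retrieval is more involved.

```python
def classify_hydrometeor(reflectivity_dbz, spectral_width_ms):
    """Rough cloud-droplet vs drizzle split using the two thresholds quoted
    in the abstract; a teaching sketch, not the paper's retrieval."""
    if spectral_width_ms > 0.4 and reflectivity_dbz > -10.0:
        return "drizzle"
    return "cloud droplets"

classify_hydrometeor(-5.0, 0.6)   # → 'drizzle'
classify_hydrometeor(-25.0, 0.2)  # → 'cloud droplets'
```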

  13. Detection of nitric oxide in the dark cloud L134N

    NASA Technical Reports Server (NTRS)

    Mcgonagle, D.; Irvine, W. M.; Minh, Y. C.; Ziurys, L. M.

    1990-01-01

    The first detection of interstellar nitric oxide (NO) in a cold dark cloud, L134N, is reported. Nitric oxide was observed by means of its two 2 Pi 1/2, J = 3/2 - 1/2 rotational transitions at 150.2 and 150.5 GHz, which occur because of Lambda-doubling. The inferred column density for L134N is about 5 x 10^14 per sq cm toward the SO peak in that cloud. This value corresponds to a fractional abundance relative to molecular hydrogen of about 6 x 10^-8 and is in good agreement with predictions of quiescent cloud ion-molecule chemistry. NO was not detected toward the dark cloud TMC-1, at an upper limit of 3 x 10^-8 or less.

  14. Clinical experience with a computer-aided diagnosis system for automatic detection of pulmonary nodules at spiral CT of the chest

    NASA Astrophysics Data System (ADS)

    Wormanns, Dag; Fiebich, Martin; Saidi, Mustafa; Diederich, Stefan; Heindel, Walter

    2001-05-01

    The purpose of the study was to evaluate a computer-aided diagnosis (CAD) workstation with automatic detection of pulmonary nodules at low-dose spiral CT in a clinical setting for early detection of lung cancer. Two radiologists in consensus reported 88 consecutive spiral CT examinations. All examinations were reviewed using a UNIX-based CAD workstation with a self-developed algorithm for automatic detection of pulmonary nodules. The algorithm was designed to detect nodules with at least 5 mm diameter. The results of automatic nodule detection were compared to the consensus reporting of the two radiologists as the gold standard. Additional CAD findings were regarded as nodules initially missed by the radiologists or as false positive results. A total of 153 nodules were detected with all modalities (diameter: 85 nodules <5 mm, 63 nodules 5-9 mm, 5 nodules >=10 mm). Reasons for failure of automatic nodule detection were assessed. Sensitivity of the radiologists for nodules >=5 mm was 85%; sensitivity of CAD was 38%. For nodules >=5 mm without pleural contact, sensitivity was 84% for the radiologists and 45% for CAD. CAD detected 15 (10%) nodules not mentioned in the radiologists' report but representing real nodules, among them 10 (15%) nodules with a diameter >=5 mm. Reasons for nodules missed by CAD include: exclusion because of morphological features during region analysis (33%), nodule density below the detection threshold (26%), pleural contact (33%), segmentation errors (5%), and other reasons (2%). CAD improves detection of pulmonary nodules at spiral CT significantly and is a valuable second opinion in a clinical setting for lung cancer screening. Optimization of region analysis and an appropriate density threshold have the potential for further improvement of automatic nodule detection.

  15. Detection and monitoring of H2O and CO2 ice clouds on Mars

    USGS Publications Warehouse

    Bell, J.F.; Calvin, W.M.; Ockert-Bell, M. E.; Crisp, D.; Pollack, James B.; Spencer, J.

    1996-01-01

    We have developed an observational scheme for the detection and discrimination of Mars atmospheric H2O and CO2 clouds using ground-based instruments in the near infrared. We report the results of our cloud detection and characterization study using Mars near-IR images obtained during the 1990 and 1993 oppositions. We focused on specific wavelengths that have the potential, based on previous laboratory studies of H2O and CO2 ices, of yielding the greatest degree of cloud detectability and compositional discriminability. We have detected and mapped absorption features at some of these wavelengths in both the northern and southern polar regions of Mars. Compositional information on the nature of these absorption features was derived from comparisons with laboratory ice spectra and with a simplified radiative transfer model of a CO2 ice cloud overlying a bright surface. Our results indicate that both H2O and CO2 ices can be detected and distinguished in the polar hood clouds. The region near 3.00 μm is most useful for the detection of water ice clouds because there is a strong H2O ice absorption at this wavelength but only a weak CO2 ice band. The region near 3.33 μm is most useful for the detection of CO2 ice clouds because there is a strong, relatively narrow CO2 ice band at this wavelength but only broad "continuum" H2O ice absorption. Weaker features near 2.30 μm could arise from CO2 ice at coarse grain sizes, or surface/dust minerals. Narrow features near 2.00 μm, which could potentially be very diagnostic of CO2 ice clouds, suffer from contamination by Mars atmospheric CO2 absorptions and are difficult to interpret because of the rather poor knowledge of surface elevation at high latitudes. 
These results indicate that future ground-based, Earth-orbital, and spacecraft studies over a more extended span of the seasonal cycle should yield substantial information on the style and timing of volatile transport on Mars, as well as a more detailed understanding of the role of CO2 condensation in the polar heat budget. Copyright 1996 by the American Geophysical Union.

  16. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2015-04-01

    A broad range of Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited to the processing of large volumes of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
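    The naive Bayesian masking idea, class-conditional histograms per feature combined under an independence assumption, can be sketched compactly. Everything below (the single synthetic "reflectance" feature, bin edges, training distributions) is illustrative and unrelated to the actual MERIS/AATSR features and histograms.

```python
import numpy as np

class NaiveBayesCloudMask:
    """Histogram-based naive Bayesian classifier: per-feature class-conditional
    histograms (with Laplace smoothing) combined under the naive independence
    assumption. Classes: 1 = cloudy, 0 = clear."""

    def __init__(self, edges_per_feature):
        self.edges = edges_per_feature   # list of bin-edge arrays, one per feature
        self.hists = {}                  # (class, feature) -> normalized histogram
        self.priors = {}

    def fit(self, X, y):
        classes, counts = np.unique(y, return_counts=True)
        self.priors = dict(zip(classes, counts / len(y)))
        for c in classes:
            Xc = X[y == c]
            for j, edges in enumerate(self.edges):
                h, _ = np.histogram(Xc[:, j], bins=edges)
                h = h + 1.0                         # Laplace smoothing
                self.hists[(c, j)] = h / h.sum()
        return self

    def predict_proba_cloud(self, X):
        """Posterior probability of the cloudy class for each pixel row."""
        X = np.asarray(X, dtype=float)
        logp = {c: np.full(len(X), np.log(p)) for c, p in self.priors.items()}
        for c in self.priors:
            for j, edges in enumerate(self.edges):
                idx = np.clip(np.searchsorted(edges, X[:, j]) - 1, 0, len(edges) - 2)
                logp[c] += np.log(self.hists[(c, j)][idx])
        p_cloud, p_clear = np.exp(logp[1]), np.exp(logp[0])
        return p_cloud / (p_cloud + p_clear)

# Tiny synthetic demo: one "reflectance" feature, bright pixels are cloudy
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.8, 0.05, (200, 1)),    # cloudy training pixels
               rng.normal(0.2, 0.05, (200, 1))])   # clear training pixels
y = np.array([1] * 200 + [0] * 200)
nb = NaiveBayesCloudMask([np.linspace(0.0, 1.0, 11)]).fit(X, y)
p_cloud = nb.predict_proba_cloud(np.array([[0.85], [0.15]]))
```

    The "classical" (non-naive) variant in the paper replaces the per-feature product with a single multidimensional histogram, which is where the histogram-resolution and Gaussian-smoothing questions studied above come in.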

  17. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis including CT imaging can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis proved more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening, since the resulting sensitivity improved from 42.19% to 98.46%, while the accuracy of feature-based mapping is only slightly higher (84.38% versus 76.19%).

  18. Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks

    ERIC Educational Resources Information Center

    Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo

    2017-01-01

    Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…

  19. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only in instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a "skeleton point cloud". This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud, which can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. 
    Cloud-to-cloud distance computations for the CANUPO and manual skeleton clouds yielded values of around 0.67 meters for both, with a standard deviation of 1.73.
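    The Iterative Closest Point step applied through CloudCompare can be sketched in a few lines: alternate nearest-neighbor pairing with a closed-form (Kabsch) rigid-transform solve. This brute-force version is for illustration only; production ICP implementations such as CloudCompare's use spatial indexing, subsampling, and outlier rejection.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B
    (Kabsch algorithm); A and B are (n, d) arrays of paired points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(source, target, iterations=30):
    """Brute-force Iterative Closest Point: repeatedly pair each source point
    with its nearest target point, then solve for the rigid transform."""
    src = source.copy()
    R_total, t_total = np.eye(source.shape[1]), np.zeros(source.shape[1])
    for _ in range(iterations):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]          # nearest-neighbor pairing
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: recover a small known rotation + translation on a synthetic cloud
rng = np.random.default_rng(0)
target = rng.random((40, 3))
theta = 0.03
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
source = target @ Rz.T + np.array([0.02, -0.01, 0.01])
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est    # should closely match target
```

    ICP converges only from a reasonable initial alignment, which is why the study anchors it on a manually curated skeleton cloud of stable features (roads, buildings) rather than on raw vegetation-laden data.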

  20. Comparison of Cloud Cover Retrieved by POLDER and MODIS

    NASA Astrophysics Data System (ADS)

    Zeng, S.; Parol, F.; Riedi, J.; Cornet, C.; Thieuxleux, F.

    2009-04-01

    PARASOL and AQUA are two sun-synchronous satellites in the A-Train constellation that observe the earth within a few minutes of each other. Aboard these two platforms, POLDER and MODIS provide coincident observations of the cloud cover with very different characteristics. This gives us a good opportunity to study cloud systems and to evaluate the strengths and weaknesses of each dataset in order to provide an accurate representation of global cloud cover properties. This description is of utmost importance for quantifying and understanding the effect of clouds on the global radiation budget of the earth-atmosphere system and their influence on climate change. We have developed a joint dataset containing both POLDER and MODIS level 2 cloud products, collocated and reprojected on a common sinusoidal grid, to make the comparison feasible and reliable. Our foremost work focuses on the comparison of both the spatial distribution and the temporal variation of the global cloud cover. This simple yet critical cloud parameter needs to be clearly understood before further comparison of the other cloud parameters. From our study, we demonstrate that on average the two sensors both detect clouds fairly well. They provide similar spatial distributions and temporal variations: both sensors see high values of cloud amount associated with deep convection in the ITCZ, over Indonesia, and in the west-central Pacific Ocean warm pool region; they also provide similar high cloud cover associated with mid-latitude storm tracks, the Indian monsoon, and the stratocumulus along the west coasts of continents; on the other hand, the small cloud amounts typically present over subtropical oceans and deserts in subsidence areas are well identified by both POLDER and MODIS. Each sensor has its advantages and drawbacks for the detection of particular cloud types. 
    With higher spatial resolution, MODIS can better detect fractional clouds, which partly explains a positive bias of about 10% between the POLDER cloud amount and the MODIS "combined" cloud amount at all latitudes and viewing angles. Nevertheless, a negative bias of about 10% is found between the POLDER cloud amount and the MODIS "day-mean" cloud amount. The main differences between the two MODIS cloud amounts are known to be due to the filtering of remaining aerosols and cloud edges. The high spatial resolution of MODIS, together with the fact that the "combined" cloud amount filters cloud edges, also explains the strongly positive bias regions over the subtropical oceans of the southern hemisphere and over east Africa in summer. Thanks to several channels in the thermal infrared spectral domain, MODIS probably detects thin cirrus much better, especially over land, causing a general negative bias for ice clouds. The multi-spectral capability of MODIS also allows better detection of low clouds over snow or ice; hence the (POLDER-MODIS) cloud amount difference is often negative over Greenland, Antarctica, and the mid- to high-latitude continents in spring and autumn, in association with snow cover. This multi-spectral capability also makes it possible to discriminate biomass burning aerosols from fractional clouds over the continents; a positive bias thus appears over central Africa in summer and autumn, associated with major biomass burning events. Over transition regions between desert and non-desert, a large negative (POLDER-MODIS) cloud amount bias may be partly due to MODIS pixels being falsely labeled as cloudy where the MODIS algorithm uses a static desert mask. This is clearly visible south of the Sahara in spring and summer, where the bias is on the order of -0.1. 
    Furthermore, thanks to its multi-angular capability, POLDER can discriminate the sun-glint region, thus minimizing the dependence of cloud amount on viewing angle, and its polarization measurements ease the detection of high clouds over dark surfaces.

  1. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-averaged dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capabilities are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
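    A daily orbit-path product of the kind described is, at minimum, a KML document containing a LineString of ground-track coordinates. A minimal sketch follows; the function name and the sample coordinates are hypothetical, not taken from the OCO ground data system.

```python
def orbit_path_kml(name, points):
    """Serialize an orbit track as a Google Earth KML LineString.
    `points` is a list of (longitude, latitude, altitude_m) tuples; KML
    coordinates are longitude-first, comma-separated, one tuple per line."""
    coords = "\n".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <LineString>
        <altitudeMode>absolute</altitudeMode>
        <coordinates>
{coords}
        </coordinates>
      </LineString>
    </Placemark>
  </Document>
</kml>"""

# Hypothetical two-point track segment
kml = orbit_path_kml("orbit demo", [(-122.0, 37.0, 705000.0),
                                    (-121.5, 38.0, 705000.0)])
```

    The real products additionally carry per-frame ancillary data (filenames, CO2 mole fraction, solar zenith angle), which KML supports via ExtendedData or balloon descriptions on per-frame Placemarks.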

  2. Cloud Coverage and Height Distribution from the GLAS Polar Orbiting Lidar: Comparison to Passive Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Spinhirne, J. D.; Palm, S. P.; Hlavka, D. L.; Hart, W. D.; Mahesh, A.

    2004-01-01

    The Geoscience Laser Altimeter System (GLAS) began full on-orbit operations in September 2003. A main application of the two-wavelength GLAS lidar is highly accurate detection and profiling of global cloud cover. Initial analysis indicates that cloud and aerosol layers are consistently detected on a global basis down to cross-sections of 10^-6 per meter. Images of the lidar data dramatically and accurately show the vertical structure of cloud and aerosol to the limit of signal attenuation. The GLAS lidar has made the most accurate measurement of global cloud coverage and height to date. In addition to the calibrated lidar signal, GLAS data products include multi-level boundaries and the optical depth of all transmissive layers. Processing includes a multi-variable separation of cloud and aerosol layers. An initial application of the data is to compare monthly cloud means from several months of GLAS observations in 2003 to existing cloud climatologies from other satellite measurements. In some cases direct comparison to passive cloud retrievals is possible. A limitation of the lidar measurements is nadir-only sampling. However, monthly means exhibit reasonably good global statistics, and coverage results outside the polar regions compare well with other measurements but show significant differences in height distribution. For the polar regions, where passive cloud retrievals are problematic and where orbit track density is greatest, the GLAS results are a particular advance in cloud cover information. Direct comparison to MODIS retrievals shows better than 90% agreement in cloud detection for daytime, but less than 60% at night. Height retrievals are in much less agreement. GLAS is a part of the NASA EOS project, and data products are thus openly available to the science community (see http://glo.gsfc.nasa.gov).

  3. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature.

    PubMed

    Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-05-04

    Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present the Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner.
PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. ©Jette Henderson, Junyuan Ke, Joyce C Ho, Joydeep Ghosh, Byron C Wallace. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.05.2018.
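    The co-occurrence evidence idea can be sketched as below. This is a minimal illustration, not PIVET's implementation: the real system uses NoSQL indexing and an Aho-Corasick-style matcher, whereas here the corpus, phenotype items, and the naive substring matching are all invented for demonstration.

```python
# Minimal sketch of co-occurrence evidence gathering: count, for each
# pair of phenotype items, how many articles mention both.
from itertools import combinations
from collections import Counter

def cooccurrence_counts(phenotype_items, articles):
    """Case-insensitive substring matching; returns a Counter keyed by
    alphabetically ordered item pairs."""
    counts = Counter()
    for text in articles:
        low = text.lower()
        present = [item for item in phenotype_items if item.lower() in low]
        for pair in combinations(sorted(present), 2):
            counts[pair] += 1
    return counts

articles = [
    "Hypertension and diabetes frequently co-occur in older patients.",
    "Diabetes management guidelines.",
    "Hypertension, diabetes and obesity form a common cluster.",
]
print(cooccurrence_counts(["hypertension", "diabetes", "obesity"], articles))
```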

  4. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  5. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
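    The voting scheme can be sketched as follows. This is a single-resolution toy version (the paper uses an octree with multiple resolutions); the voxel size, points, and labels are invented.

```python
# Sketch of voxel-grid label voting: map labeled reference points into
# voxel cells, then label new points by majority vote of the reference
# labels found in their cell.
from collections import Counter, defaultdict

def voxel_index(point, size):
    return tuple(int(c // size) for c in point)

def build_label_grid(ref_points, ref_labels, size):
    grid = defaultdict(Counter)
    for p, lab in zip(ref_points, ref_labels):
        grid[voxel_index(p, size)][lab] += 1
    # majority vote per voxel
    return {v: c.most_common(1)[0][0] for v, c in grid.items()}

def annotate(points, grid, size, default="unlabeled"):
    return [grid.get(voxel_index(p, size), default) for p in points]

ref = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.2), (1.2, 0.1, 0.0)]
labels = ["facade", "facade", "ground"]
grid = build_label_grid(ref, labels, size=0.5)
print(annotate([(0.25, 0.25, 0.1), (1.4, 0.3, 0.1)], grid, 0.5))
# ['facade', 'ground']
```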

  6. Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems

    NASA Astrophysics Data System (ADS)

    Hese, S.; Behrendt, F.

    2017-08-01

    OTS (Off-The-Shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high-resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was created with 85%/85% overlap using Drone Deploy automatic mapping software. Three flight campaigns were performed: one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations, covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50 × 10 × 10 cm voxel definitions using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems.
    While penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in leaf-off status also allows mapping the internal tree structure down to the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low-cost and detailed 3D crown structure mapping, and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements the costs are negligible, in the range of 1500-2500 €. This suggests the approach for low-cost but fine-scale in-situ applications and/or projects where TLS measurements cannot be derived, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing; better absolute georeferencing results will be obtained with DGPS reference points. The study however clearly demonstrates the potential of OTS very low cost copter systems and the image-attributed GPS measurements of the copter for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.
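    The density-versus-distance analysis can be sketched as below. This is a toy reconstruction under assumed parameters (isotropic cells instead of the study's 50 × 10 × 10 cm voxels; the point sample and crown centre are invented).

```python
# Sketch: bin points into voxel cells, then pair each occupied cell's
# horizontal distance to the crown centre with its point count, as in
# the leaf-on vs. leaf-off density comparison.
from collections import Counter
import math

def layer_densities(points, cell=0.5):
    """Count points per (x, y, z) cell of `cell`-metre resolution."""
    dens = Counter()
    for x, y, z in points:
        dens[(int(x // cell), int(y // cell), int(z // cell))] += 1
    return dens

def density_vs_distance(dens, center, cell=0.5):
    """List of (horizontal distance to centre, point count) per cell."""
    cx, cy = center
    out = []
    for (ix, iy, iz), n in sorted(dens.items()):
        d = math.hypot((ix + 0.5) * cell - cx, (iy + 0.5) * cell - cy)
        out.append((round(d, 2), n))
    return out

leaf_on = [(0.1, 0.1, 2.0), (0.2, 0.1, 2.1), (1.6, 0.1, 2.0)]
print(density_vs_distance(layer_densities(leaf_on), center=(0.0, 0.0)))
# [(0.35, 2), (1.77, 1)]
```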

  7. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task of automatic calibration is proper detection of the calibration object. Solving this problem requires methods and algorithms of digital image processing such as morphology, filtering, edge detection and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was automatically isolated in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
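    One step of such a pipeline, isolating the candidate object after thresholding, can be sketched in pure Python. This is a hedged illustration only: the paper's method also uses morphology, filtering, and shape approximation, and the tiny binary image below is invented.

```python
# Sketch: 4-connected component labelling of a thresholded image, then
# pick the largest blob as the calibration-object candidate.
from collections import deque

def components(binary):
    """4-connected components of a 0/1 grid; returns lists of (row, col)."""
    h, w = len(binary), len(binary[0])
    seen, comps = set(), []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and (r, c) not in seen:
                comp, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

# Tiny thresholded image: the 2x2 blob plays the calibration object,
# the lone pixel is background clutter (e.g. a bright log cut).
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [1, 0, 0, 0]]
target = max(components(img), key=len)
print(len(target))  # 4
```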

  8. Detection of ground fog in mountainous areas from MODIS (Collection 051) daytime data using a statistical approach

    NASA Astrophysics Data System (ADS)

    Schulz, Hans Martin; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2016-03-01

    The mountain cloud forest of Taiwan can be delimited from other forest types using a map of the ground fog frequency. In order to create such a frequency map from remotely sensed data, an algorithm able to detect ground fog is necessary. Common techniques for ground fog detection based on weather satellite data cannot be applied to fog occurrences in Taiwan as they rely on several assumptions regarding cloud properties. Therefore a new statistical method for the detection of ground fog in mountainous terrain from MODIS Collection 051 data is presented. Due to the sharpening of input data using MODIS bands 1 and 2, the method provides fog masks at a resolution of 250 m per pixel. The new technique is based on negative correlations between optical thickness and terrain height that can be observed if a cloud that is relatively plane-parallel is truncated by the terrain. A validation of the new technique using camera data has shown that the quality of fog detection is comparable to that of another modern fog detection scheme developed and validated for the temperate zones. The method is particularly applicable to optically thinner water clouds. Beyond a cloud optical thickness of ≈ 40, classification errors significantly increase.
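    The statistical core, a negative correlation between optical thickness and terrain height inside a window, can be sketched as follows. The correlation threshold and sample values are invented, not the paper's.

```python
# Sketch: a plane-parallel cloud truncated by rising terrain thins where
# the terrain is higher, so optical thickness correlates negatively with
# terrain height where ground fog is present.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def looks_like_ground_fog(heights, optical_thickness, r_threshold=-0.8):
    """Flag a window whose correlation is strongly negative."""
    return pearson(heights, optical_thickness) < r_threshold

heights = [500, 700, 900, 1100, 1300]   # terrain height (m)
tau =     [ 30,  25,  18,  11,   5]     # cloud optical thickness
print(looks_like_ground_fog(heights, tau))  # True
```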

  9. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using the 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  10. Using Himawari-8, estimation of SO2 cloud altitude at Aso volcano eruption, on October 8, 2016

    NASA Astrophysics Data System (ADS)

    Ishii, Kensuke; Hayashi, Yuta; Shimbori, Toshiki

    2018-02-01

    It is vital to detect volcanic plumes as soon as possible for volcanic hazard mitigation such as aviation safety and the life of residents. Himawari-8, the Japan Meteorological Agency's (JMA's) geostationary meteorological satellite, has high spatial resolution and sixteen observation bands, including the 8.6 μm band to detect sulfur dioxide (SO2). Therefore, Ash RGB composite images (RED: brightness temperature (BT) difference between 12.4 and 10.4 μm; GREEN: BT difference between 10.4 and 8.6 μm; BLUE: 10.4 μm) discriminate SO2 clouds and volcanic ash clouds from meteorological clouds. Since Himawari-8 also has high temporal resolution, real-time monitoring of ash and SO2 clouds is of great use. A phreatomagmatic eruption of Aso volcano in Kyushu, Japan, occurred at 01:46 JST on October 8, 2016. For this eruption, the Ash RGB could detect the SO2 cloud from Aso volcano immediately after the eruption and track it even 12 h after. In this case, the Ash RGB images every 2.5 min could clearly detect the SO2 cloud, which conventional images such as infrared and split window could not detect sufficiently. Furthermore, we could estimate the height of the SO2 cloud by comparing the Ash RGB images and simulations of the JMA Global Atmospheric Transport Model with a variety of height parameters. As a result of this comparison, the bottom and top heights of the SO2 cloud emitted from the eruption were estimated as 7 and 13-14 km, respectively. Assuming the plume height was 13-14 km and eruption duration was 160-220 s (as estimated by seismic observation), the total emission mass of volcanic ash from the eruption was estimated as 6.1-11.8 × 10^8 kg, which is relatively consistent with the 6.0-6.5 × 10^8 kg from a field survey.
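    The Ash RGB component mapping described above can be sketched as follows. Note the scaling ranges below are assumptions for illustration; JMA's actual recipe fixes specific min/max and gamma values per component.

```python
# Illustrative Ash RGB composite: each channel is a scaled brightness
# temperature (BT) quantity, per the recipe in the abstract.
def scale(value, lo, hi):
    """Clip and scale a value into 0..255."""
    v = min(max(value, lo), hi)
    return round(255 * (v - lo) / (hi - lo))

def ash_rgb(bt084, bt104, bt124):
    """RED: BT12.4 - BT10.4, GREEN: BT10.4 - BT8.6, BLUE: BT10.4 (kelvin)."""
    red = scale(bt124 - bt104, -4.0, 2.0)
    green = scale(bt104 - bt084, -4.0, 5.0)
    blue = scale(bt104, 243.0, 303.0)
    return red, green, blue

# Hypothetical SO2-cloud pixel: BT10.4 - BT8.6 strongly positive,
# so the green component saturates.
print(ash_rgb(bt084=260.0, bt104=268.0, bt124=266.5))  # (106, 255, 106)
```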

  11. Extracting Topological Relations Between Indoor Spaces from Point Clouds

    NASA Astrophysics Data System (ADS)

    Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L.

    2017-09-01

    3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.
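    The three relations can be illustrated on spaces represented as sets of occupied grid cells. This is a minimal grid-based sketch, not the paper's point-cloud method; the rooms, hall, and door cell are invented.

```python
# Sketch of containment, adjacency and connectivity between indoor
# spaces modelled as sets of 2D cells.
def adjacency(a, b):
    """Spaces touch: some cell of a is a 4-neighbour of a cell of b."""
    return any((x + dx, y + dy) in b
               for x, y in a for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def containment(a, b):
    """a lies entirely inside b."""
    return a <= b

def connectivity(a, b, doors):
    """Spaces are navigably connected if adjacent through a door cell."""
    return any(adjacency({d}, a) and adjacency({d}, b) for d in doors)

room = {(0, 0), (0, 1), (1, 0), (1, 1)}
hall = {(3, 0), (3, 1)}
door = {(2, 0)}
print(adjacency(room, hall))           # False
print(connectivity(room, hall, door))  # True
```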

  12. Automated retrieval of cloud and aerosol properties from the ARM Raman lidar, part 1: feature detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorsen, Tyler J.; Fu, Qiang; Newsom, Rob K.

    A Feature detection and EXtinction retrieval (FEX) algorithm for the Atmospheric Radiation Measurement (ARM) program's Raman lidar (RL) has been developed. Presented here is part 1 of the FEX algorithm: the detection of features including both clouds and aerosols. The approach of FEX is to use multiple quantities: scattering ratios derived using elastic and nitrogen channel signals from two fields of view, the scattering ratio derived using only the elastic channel, and the total volume depolarization ratio, to identify features using range-dependent detection thresholds. FEX is designed to be context-sensitive, with thresholds determined for each profile by calculating the expected clear-sky signal and noise. The use of multiple quantities provides complementary depictions of cloud and aerosol locations and allows for consistency checks to improve the accuracy of the feature mask. The depolarization ratio is shown to be particularly effective at detecting optically thin features containing non-spherical particles such as cirrus clouds. Improvements over the existing ARM RL cloud mask are shown. The performance of FEX is validated against a collocated micropulse lidar and observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite over the ARM Darwin, Australia site. While we focus on a specific lidar system, the FEX framework presented here is suitable for other Raman or high-spectral-resolution lidars.
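    The range-dependent thresholding idea can be sketched as below. All numbers are invented; FEX derives the clear-sky expectation and noise per profile from the data itself.

```python
# Sketch: flag range bins whose scattering ratio exceeds the expected
# clear-sky value by a noise-dependent margin (noise grows with range).
def detect_features(scattering_ratio, clear_sky, noise, k=3.0):
    """Boolean feature mask per range bin: signal above the clear-sky
    expectation by more than k times the local noise."""
    return [sr > cs + k * n
            for sr, cs, n in zip(scattering_ratio, clear_sky, noise)]

profile   = [1.02, 1.05, 3.50, 4.10, 1.10]   # cloud layer in bins 2-3
clear_sky = [1.00, 1.00, 1.00, 1.00, 1.00]
noise     = [0.02, 0.04, 0.08, 0.15, 0.30]   # grows with range
print(detect_features(profile, clear_sky, noise))
# [False, False, True, True, False]
```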

  13. THOR: Cloud Thickness from Off beam Lidar Returns

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; McGill, Matthew; Kolasinski, John; Varnai, Tamas; Yetzer, Ken

    2004-01-01

    Conventional wisdom is that lidar pulses do not significantly penetrate clouds having optical thickness exceeding about tau = 2, and that no returns are detectable from more than a shallow skin depth. Yet optically thicker clouds of tau much greater than 2 reflect a larger fraction of visible photons, and account for much of Earth's global average albedo. As cloud layer thickness grows, an increasing fraction of reflected photons are scattered multiple times within the cloud, and return from a diffuse concentric halo that grows around the incident pulse, increasing in horizontal area with layer physical thickness. The reflected halo is largely undetected by the narrow field-of-view (FoV) receivers commonly used in lidar applications. THOR - Thickness from Off-beam Returns - is an airborne wide-angle detection system with multiple FoVs, capable of observing the diffuse halo and detecting the wide-angle signal from which the physical thickness of optically thick clouds can be retrieved. In this paper we describe the THOR system, demonstrate that the halo signal is stronger for thicker clouds, and validate physical thickness retrievals for clouds having tau > 20, from NASA P-3B flights over the Department of Energy/Atmospheric Radiation Measurement/Southern Great Plains site, using the lidar, radar and other ancillary ground-based data.

  14. Characteristics of cloud occurrence using ceilometer measurements and its relationship to precipitation over Seoul

    NASA Astrophysics Data System (ADS)

    Lee, Sanghee; Hwang, Seung-On; Kim, Jhoon; Ahn, Myoung-Hwan

    2018-03-01

    Clouds are an important component of the atmosphere, affecting both climate and weather; however, their contributions can be very difficult to determine. Ceilometer measurements can provide high-resolution information on atmospheric conditions such as cloud base height (CBH) and vertical frequency of cloud occurrence (CVF). This study presents the first comprehensive analysis of CBH and CVF derived using Vaisala CL51 ceilometers at two urban stations in Seoul, Korea, during the three-year period from January 2014 to December 2016. The average frequency of cloud occurrence detected by the ceilometers is 54.3%. The CL51 is found to capture CBH better than the CL31 ceilometer at a nearby meteorological station because it detects high clouds more accurately. Frequency distributions of CBH up to 13,000 m, resolved at 500-m intervals, show that 55% of aggregated CBHs lie below 2 km. A bimodal frequency distribution was observed for three-layer CBHs. The monthly variation of CVF reveals that lower clouds are concentrated in summer and winter, while higher clouds are more often detected in spring and autumn. Monthly distributions of cloud occurrence and precipitation depend on season, and their relationship is not easy to define owing to the higher variability of precipitation compared with cloud occurrence. However, the fluctuation of cloud occurrence frequency in summer follows the trend of precipitation, whereas clouds in winter are relatively frequent but are not accompanied by precipitation. In addition, the recent decrease in summer precipitation can be mostly explained by a decrease in cloud occurrence. Anomalous precipitation events are considerably related to the corresponding cloud occurrence.
The diurnal and daily variations of CBH and CVF from ceilometer observations and the analysis of microwave radiometer measurements for two typical cloudiness cases are also reviewed in parallel. This analysis at finer temporal scales shows that the combined use of ground-based observations can help in analyzing cloud behavior.

  15. Multilayered Clouds Identification and Retrieval for CERES Using MODIS

    NASA Technical Reports Server (NTRS)

    Sun-Mack, Sunny; Minnis, Patrick; Chen, Yan; Yi, Yuhong; Huang, Jianping; Lin, Bin; Fan, Alice; Gibson, Sharon; Chang, Fu-Lung

    2006-01-01

    Traditionally, analyses of satellite data have been limited to interpreting the radiances in terms of single-layer clouds. Generally, this results in significant errors in the retrieved properties for multilayered cloud systems. Two techniques for detecting overlapped clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. The first technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other method uses microwave (MWR) data. The use of BTD, the 11-12 micrometer brightness temperature difference, in conjunction with tau, the retrieved visible optical depth, was suggested by Kawamoto et al. (2001) and used by Pavolonis et al. (2004) as a means to detect multilayered clouds. Combining visible (VIS; 0.65 micrometer) and infrared (IR) retrievals of cloud properties with microwave (MW) retrievals of cloud water temperature Tw and liquid water path LWP retrieved from satellite microwave imagers appears to be a fruitful approach for detecting and retrieving overlapped clouds (Lin et al., 1998; Ho et al., 2003; Huang et al., 2005). The BTD method is limited to optically thin cirrus over low clouds, while the MWR method is limited to ocean areas only. With the availability of VIS and IR data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and MW data from the Advanced Microwave Scanning Radiometer EOS (AMSR-E), both on Aqua, it is now possible to examine both approaches simultaneously. This paper explores the use of the BTD method as applied to MODIS and AMSR-E data taken from the Aqua satellite over non-polar ocean surfaces.
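    The BTD screening idea can be sketched as a simple per-pixel test. The thresholds below are invented for illustration, not the CERES values; the operational test also depends on surface type and other retrieved properties.

```python
# Sketch: thin cirrus over low cloud tends to show a large 11-12 um
# brightness temperature difference together with a moderate retrieved
# visible optical depth.
def multilayer_candidate(btd_11_12, tau, btd_min=2.5, tau_range=(5.0, 40.0)):
    """Flag a pixel as possibly containing thin cirrus over low cloud."""
    return btd_11_12 > btd_min and tau_range[0] < tau < tau_range[1]

print(multilayer_candidate(btd_11_12=3.2, tau=12.0))  # True
print(multilayer_candidate(btd_11_12=0.5, tau=12.0))  # False
```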

  16. Electrophysiological Correlates of Automatic Visual Change Detection in School-Age Children

    ERIC Educational Resources Information Center

    Clery, Helen; Roux, Sylvie; Besle, Julien; Giard, Marie-Helene; Bruneau, Nicole; Gomot, Marie

    2012-01-01

    Automatic stimulus-change detection is usually investigated in the auditory modality by studying Mismatch Negativity (MMN). Although the change-detection process occurs in all sensory modalities, little is known about visual deviance detection, particularly regarding the development of this brain function throughout childhood. The aim of the…

  17. Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics

    NASA Astrophysics Data System (ADS)

    Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu

    2007-11-01

    In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with attribute grammars by learning scene semantics. This framework combines learning scene semantics through trajectory analysis with constructing an attribute grammar-based event representation. The scene and event information is learned automatically, and abnormal behaviors that disobey scene semantics or event grammar rules are detected. By this method, an approach to understanding video scenes is achieved. Furthermore, with this prior knowledge, the accuracy of abnormal event detection is increased.

  18. The EOS CERES Global Cloud Mask

    NASA Technical Reports Server (NTRS)

    Berendes, T. A.; Welch, R. M.; Trepte, Q.; Schaaf, C.; Baum, B. A.

    1996-01-01

    To detect long-term climate trends, it is essential to produce long-term and consistent data sets from a variety of different satellite platforms. With current global cloud climatology data sets, such as the International Satellite Cloud Climatology Project (ISCCP) or CLAVR (Clouds from the Advanced Very High Resolution Radiometer), one of the first processing steps is to determine whether an imager pixel is obstructed between the satellite and the surface, i.e., to determine a cloud 'mask.' A cloud mask is essential to studies monitoring changes over ocean, land, or snow-covered surfaces. As part of the Earth Observing System (EOS) program, a series of platforms will be flown beginning in 1997 with the Tropical Rainfall Measuring Mission (TRMM) and subsequently the EOS-AM and EOS-PM platforms in following years. The cloud imager on TRMM is the Visible/Infrared Sensor (VIRS), while the Moderate Resolution Imaging Spectroradiometer (MODIS) is the imager on the EOS platforms. To be useful for long-term studies, a cloud masking algorithm should produce consistent results between existing (AVHRR) data and future VIRS and MODIS data. The present work outlines both existing and proposed approaches to detecting clouds using multispectral narrowband radiance data. Clouds generally are characterized by higher albedos and lower temperatures than the underlying surface. However, there are numerous conditions when this characterization is inappropriate, most notably over snow and ice. Of the cloud types, cirrus, stratocumulus and cumulus are the most difficult to detect. Other problems arise when analyzing data from sun-glint areas over oceans or lakes, over deserts, or over regions containing numerous fires and smoke. The cloud mask effort builds upon the operational experience of several groups, which will now be discussed.
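    The baseline characterization, clouds are brighter and colder than the underlying surface, can be sketched as a deliberately simplified two-test mask. The thresholds are invented; as the abstract notes, a test like this fails over snow and ice and for cirrus, stratocumulus and cumulus.

```python
# Sketch: a pixel is called cloudy when it is both bright (high albedo)
# and markedly colder than the expected clear-sky surface temperature.
def cloud_mask(albedo, bt_kelvin, surface_bt,
               albedo_min=0.35, bt_depression=10.0):
    """True where both the brightness and temperature tests pass."""
    return albedo > albedo_min and (surface_bt - bt_kelvin) > bt_depression

print(cloud_mask(albedo=0.55, bt_kelvin=255.0, surface_bt=290.0))  # True
print(cloud_mask(albedo=0.12, bt_kelvin=288.0, surface_bt=290.0))  # False
```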

  19. Cloud cover detection combining high dynamic range sky images and ceilometer measurements

    NASA Astrophysics Data System (ADS)

    Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.

    2017-11-01

    This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun conditions (obstructed or unobstructed) is analyzed in detail using reference pyranometer measurements at Granada. CPC retrievals agree with those derived from the reference pyranometer in 85% of the cases (this agreement appears independent of aerosol size or optical depth). The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high loads. Cloud cover obtained from the ceilometer alone shows results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, it has been observed that under quick and strong changes in cloud cover, ceilometer-derived cloud cover fits the real cloud cover less well.
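    The RBR baseline that CPC is compared against can be sketched as below. The threshold and pixel values are invented; the aerosol sensitivity of exactly this kind of fixed-ratio test is what motivates the CPC symmetry-plus-ceilometer approach.

```python
# Sketch of the Red-Blue Ratio (RBR) method: clear sky scatters blue
# preferentially, so a high red-to-blue ratio suggests cloud.
def rbr_cloudy(red, blue, threshold=0.8):
    return (red / blue if blue else float("inf")) > threshold

pixels = [(60, 180), (200, 210), (90, 100)]   # (red, blue) samples
mask = [rbr_cloudy(r, b) for r, b in pixels]
print(mask)  # [False, True, True]
cloud_fraction = sum(mask) / len(mask)
print(round(cloud_fraction, 2))  # 0.67
```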

  20. Infrared Sky Imager (IRSI) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Victor R.

    2016-04-01

    The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.

  1. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    PubMed

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study is to evaluate the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of the image recognition method, analyze the consistency between clinical symptoms and hyphae density, and quantify hyphae density using automatic detection. The study included 56 cases of fungal keratitis (single eye) and 23 cases of bacterial keratitis. All cases underwent routine slit-lamp biomicroscopy, corneal smear examination, microorganism culture and assessment of in vivo confocal microscopy images before medical treatment was started. The in vivo confocal microscopy images were then analyzed with automatic hyphae detection to evaluate its sensitivity and specificity and compare it with corneal smear examination. The density index was then used to assess the severity of infection and correlated with the patients' clinical symptoms to evaluate consistency. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between severity grading of fungal keratitis by automatic hyphae detection and clinical grading was 0.87. Automatic hyphae detection based on image recognition identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination.
Compared with conventional manual interpretation of confocal microscope corneal images, this technology has the advantages of being accurate, stable, and independent of human expertise; it is most useful to medical experts who are unfamiliar with fungal keratitis. The technology can also quantify hyphal density and grade its severity. Being noninvasive, it can provide an evaluation criterion for fungal keratitis in a timely, accurate, objective, and quantitative manner.
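The reported sensitivity and specificity follow directly from a 2x2 confusion matrix. A minimal sketch, assuming counts (50 true positives, 6 false negatives, 22 true negatives, 1 false positive) back-calculated from the reported rates and the study's case numbers:

```python
# Hypothetical sketch: sensitivity/specificity from a 2x2 confusion matrix,
# with counts back-calculated from the abstract (50/56 and 22/23), not
# taken from the paper directly.

def sensitivity(tp, fn):
    """True-positive rate: fraction of fungal keratitis eyes correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of bacterial keratitis eyes correctly cleared."""
    return tn / (tn + fp)

# 56 fungal keratitis cases and 23 bacterial controls, per the study design.
sens = sensitivity(tp=50, fn=6)   # ~0.8929, i.e. 89.29%
spec = specificity(tn=22, fp=1)   # ~0.9565, i.e. 95.65%
```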

  2. An ARM data-oriented diagnostics package to evaluate the climate model simulation

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Xie, S.

    2016-12-01

    A set of diagnostics that utilize long-term high frequency measurements from the DOE Atmospheric Radiation Measurement (ARM) program is developed for evaluating the regional simulation of clouds, radiation and precipitation in climate models. The diagnostics results are computed and visualized automatically in a python-based package that aims to serve as an easy entry point for evaluating climate simulations using the ARM data, as well as the CMIP5 multi-model simulations. Basic performance metrics are computed to measure the accuracy of mean state and variability of simulated regional climate. The evaluated physical quantities include vertical profiles of clouds, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, radiative fluxes, aerosol and cloud microphysical properties. Process-oriented diagnostics focusing on individual cloud and precipitation-related phenomena are developed for the evaluation and development of specific model physical parameterizations. Application of the ARM diagnostics package will be presented in the AGU session. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, IM release number is: LLNL-ABS-698645.

  3. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

The large number of bolts and screws attached to subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on tunnel walls, cause laser point cloud data to include many points that do not belong to the tunnel section (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. The point cloud is segmented regionally along the axis and then iteratively fitted to a smooth elliptic cylindrical surface. This processing enables automatic filtering of the inner-wall non-points. Two groups of experiments gave consistent results: the elliptic cylindrical model-based method effectively filters out non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for periodic, all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
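The filtering step can be sketched as classifying each point by its residual against the fitted elliptic cross-section. A hypothetical minimal version, assuming the tunnel axis is already aligned with x and the semi-axes a, b have already been fitted (all values illustrative, not from the paper):

```python
import math

def ellipse_residual(y, z, a, b):
    # Normalized algebraic distance of a cross-section point from the ellipse
    # (y/a)^2 + (z/b)^2 = 1; zero means exactly on the fitted tunnel wall.
    return math.sqrt((y / a) ** 2 + (z / b) ** 2) - 1.0

def filter_wall_points(points, a, b, tol=0.02):
    """Keep only points within tol of the fitted elliptic cylinder wall;
    bolts, brackets, and equipment protrude inward and are rejected."""
    return [p for p in points if abs(ellipse_residual(p[1], p[2], a, b)) <= tol]

pts = [(0.0, 2.7, 0.0),   # on the wall (semi-axis a = 2.7 m, assumed)
       (1.0, 0.0, 2.5),   # on the wall (semi-axis b = 2.5 m, assumed)
       (2.0, 1.0, 1.0)]   # a bolt/bracket point well inside the section
kept = filter_wall_points(pts, a=2.7, b=2.5)
```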

  4. Ship detection from high-resolution imagery based on land masking and cloud filtering

    NASA Astrophysics Data System (ADS)

    Jin, Tianming; Zhang, Junping

    2015-12-01

High resolution satellite images currently play an important role in target detection applications. This article focuses on ship target detection in high resolution panchromatic images. Taking advantage of geographic information such as the coastline vector data provided by the NOAA Medium Resolution Coastline program, the land region, a main noise source in the ship detection process, is masked out. The algorithm then deals with cloud noise, which appears frequently in ocean satellite images and is another cause of false alarms. Based on an analysis of the frequency-domain characteristics of cloud noise, we introduce a windowed noise filter to remove it. With the help of morphological processing algorithms adapted to target detection, we acquire ship targets in fine shape. In addition, we display the extracted information, such as the length and width of ship targets, in a user-friendly way, i.e. as a KML file interpreted by Google Earth.
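The windowed frequency-domain filter can be sketched as a high-pass operation, on the premise stated in the abstract that smooth cloud noise concentrates at low spatial frequencies while compact ship targets retain high-frequency content. The cutoff radius and hard window below are illustrative assumptions, not the paper's filter:

```python
import numpy as np

def highpass_suppress_clouds(img, cutoff=0.1):
    """Hypothetical sketch: suppress smooth, low-frequency cloud background
    by zeroing Fourier components inside a normalized cutoff radius."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized frequency radius
    F[r < cutoff] = 0.0                               # window out low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A flat "cloud" background with one bright ship-like pixel:
img = np.full((64, 64), 100.0)
img[32, 32] += 50.0
out = highpass_suppress_clouds(img)   # background removed, ship peak survives
```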

  5. Automatic polymerase chain reaction product detection system for food safety monitoring using zinc finger protein fused to luciferase.

    PubMed

    Yoshida, Wataru; Kezuka, Aki; Murakami, Yoshiyuki; Lee, Jinhee; Abe, Koichi; Motoki, Hiroaki; Matsuo, Takafumi; Shimura, Nobuaki; Noda, Mamoru; Igimi, Shizunobu; Ikebukuro, Kazunori

    2013-11-01

An automatic polymerase chain reaction (PCR) product detection system for food safety monitoring using zinc finger (ZF) protein fused to luciferase was developed. A ZF protein fused to luciferase binds specifically to a target double-stranded DNA sequence while retaining luciferase enzymatic activity; therefore, PCR products that contain the ZF recognition sequence can be detected by measuring the luciferase activity of the fusion protein. We previously reported that PCR products from Legionella pneumophila and Escherichia coli (E. coli) O157 genomic DNA were detected by Zif268, a natural ZF protein, fused to luciferase. In this study, Zif268-luciferase was applied to detect the presence of Salmonella and coliforms. Moreover, an artificial zinc finger protein (B2) fused to luciferase was constructed for a Norovirus detection system. Because the luciferase activity detection assay requires several bound/free separation steps, an analyzer that performs the bound/free separation automatically was developed to detect PCR products using the ZF-luciferase fusion protein. With this automatic analyzer and the ZF-luciferase fusion protein, target pathogenic genomes were specifically detected in the presence of other pathogenic genomes. Moreover, we succeeded in detecting 10 copies of E. coli BL21 without genomic DNA extraction, and E. coli was detected with a logarithmic dependency in the range of 1.0×10 to 1.0×10^6 copies.

  6. [Application of automatic photography in Schistosoma japonicum miracidium hatching experiments].

    PubMed

    Ming-Li, Zhou; Ai-Ling, Cai; Xue-Feng, Wang

    2016-05-20

To explore the value of automatic photography in observing the results of Schistosoma japonicum miracidium hatching experiments, fresh S. japonicum eggs were added to cow feces, and the samples were divided into a low-infestation group and a high-infestation group (40 samples each). In addition, a negative control group comprised 40 samples of cow feces without S. japonicum eggs. Conventional nylon bag S. japonicum miracidium hatching experiments were performed. The process was observed both with a flashlight and magnifying glass combined with automatic video (the automatic photography method) and, at the same time, with naked eye observation, and the results were compared. In the low-infestation group, the miracidium positive detection rates were 57.5% by naked eye observation and 85.0% by automatic photography (χ² = 11.723, P < 0.05). In the high-infestation group, the positive detection rates were 97.5% and 100%, respectively (χ² = 1.253, P > 0.05). Across the two infested groups, the average positive detection rates were 77.5% and 92.5%, respectively (χ² = 6.894, P < 0.05). Automatic photography can effectively improve the positive detection rate in S. japonicum miracidium hatching experiments.
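The group comparisons rest on 2x2 chi-square tests. A generic sketch of Pearson's chi-square for the low-infestation group (23/40 vs. 34/40 positives, back-calculated from the reported rates); the paper does not state which correction it used, so the statistic below need not match the reported χ² = 11.723:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square, no continuity correction, for the 2x2 table
    [[a, b], [c, d]]; a generic sketch, not the paper's exact computation."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Low-infestation group: 23/40 positives by naked eye vs. 34/40 by automatic
# photography (rates from the abstract; the 2x2 layout is an assumption).
chi2 = chi2_2x2(23, 17, 34, 6)   # exceeding 3.84 => significant at p < 0.05
```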

  7. Quick multitemporal approach to get cloudless improved multispectral imagery for large geographical areas

    NASA Astrophysics Data System (ADS)

    Colaninno, Nicola; Marambio Castillo, Alejandro; Roca Cladera, Josep

    2017-10-01

The demand for remotely sensed data is growing, due to the possibility of managing information about huge geographic areas in digital format, at different time periods, and in a form suitable for analysis in GIS platforms. However, primary satellite information is not as immediately usable as desirable: besides geometric and atmospheric limitations, clouds, cloud shadows, and haze generally contaminate optical images. In terms of land cover, such contamination amounts to missing information and should be replaced. Image reconstruction methods are generally classified into three main approaches: in-painting-based, multispectral-based, and multitemporal-based. This work relies on a multitemporal approach to retrieve uncontaminated pixels for an image scene. We explore an automatic method for quickly obtaining daytime cloudless and shadow-free imagery at moderate spatial resolution for large geographical areas. The process involves two main steps: a multitemporal effect adjustment to avoid significant seasonal variations, and a data reconstruction phase based on automatic selection of uncontaminated pixels from an image stack. The result is a composite image based on the middle values of the stack over a year. The assumption is that, for specific purposes, land cover changes at a coarse scale are not significant over relatively short time periods. Because satellite imagery of tropical areas is widely recognized to be strongly affected by clouds, the methodology is tested on the case study of the Dominican Republic for the year 2015, using Landsat 8 imagery.
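The middle-value compositing step can be sketched as a per-pixel median over the yearly stack, with contaminated pixels excluded. A minimal version, assuming per-scene cloud/shadow masks are already available (the toy arrays are illustrative):

```python
import numpy as np

def median_composite(stack, masks):
    """Per-pixel median over a stack of scenes, ignoring pixels flagged as
    cloud or shadow; a minimal sketch of middle-value compositing."""
    stack = np.where(masks, np.nan, stack)   # contaminated pixels -> NaN
    return np.nanmedian(stack, axis=0)       # middle value per pixel

# Three toy scenes of a 1x2 area; scene 0 has a bright cloud over pixel (0, 0).
scenes = np.array([[[255.0, 10.0]],
                   [[ 12.0, 11.0]],
                   [[ 14.0, 12.0]]])
clouds = np.array([[[True, False]],
                   [[False, False]],
                   [[False, False]]])
composite = median_composite(scenes, clouds)   # cloud value 255 is ignored
```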

  8. The State of Cloud-Based Biospecimen and Biobank Data Management Tools.

    PubMed

    Paul, Shonali; Gade, Aditi; Mallipeddi, Sumani

    2017-04-01

Biobanks are critical for collecting and managing high-quality biospecimens from donors with appropriate clinical annotation. High-quality human biospecimens and associated data are required to better understand disease processes. Therefore, biobanks have become an important and essential resource for healthcare research and drug discovery. However, collecting and managing huge volumes of data (biospecimens and associated clinical data) necessitate that biobanks use appropriate data management solutions that can keep pace with the ever-changing requirements of research. To automate biobank data management, biobanks have been investing in traditional Laboratory Information Management Systems (LIMS). However, biobanks face a myriad of challenges in acquiring traditional LIMS, which are cost-intensive and often lack the flexibility to accommodate changes in data sources and workflows. Cloud technology is emerging as an alternative that provides the opportunity for small and medium-sized biobanks to automate their operations in a cost-effective manner, even without IT personnel. Cloud-based solutions offer the advantage of heightened security, rapid scalability, and dynamic allocation of services, and can facilitate collaboration between different research groups by using a shared environment on a "pay-as-you-go" basis. The benefits offered by cloud technology have resulted in the development of cloud-based data management solutions as an alternative to traditional on-premise software. After evaluating the advantages offered by cloud technology, several biobanks have started adopting cloud-based tools. Cloud-based tools provide biobanks with easy access to biospecimen data for real-time sharing with clinicians. Another major benefit realized by biobanks implementing cloud-based applications is unlimited data storage on the cloud, with automatic backups protecting against data loss in the face of natural calamities.

  9. Road traffic sign detection and classification from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng

    2016-03-01

Traffic signs are important roadway assets that provide valuable road information, helping drivers behave more safely and easily. With the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets have become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of point clouds, as traffic signs are always painted with highly reflective materials. The classification of traffic signs is then achieved based on geometric shape and pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
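The initial detection step can be sketched as simple intensity thresholding of the point cloud, following the abstract's observation that signs are painted with highly reflective materials. The threshold value and point layout below are assumptions for illustration:

```python
def detect_sign_candidates(points, intensity_thresh=200):
    """Initial detection sketched from the abstract: traffic signs are
    highly retroreflective, so keep only high-intensity LiDAR returns.
    Points are (x, y, z, intensity); the threshold is an assumed value."""
    return [p for p in points if p[3] >= intensity_thresh]

cloud = [(1.0, 2.0, 3.0, 250),   # sign face: strong return
         (4.0, 1.0, 0.2, 40),    # road surface: weak return
         (2.0, 5.0, 2.8, 230)]   # sign face
candidates = detect_sign_candidates(cloud)
```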

  10. Cloud detection algorithm comparison and validation for operational Landsat data products

    USGS Publications Warehouse

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. 
The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate nonthermal-based algorithm. We give preference to CFMask for operational cloud and cloud shadow detection, as it is derived from a priori knowledge of physical phenomena and is operable without geographic restriction, making it useful for current and future land imaging missions without having to be retrained in a machine-learning environment.
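The evaluation metrics named in the abstract (overall accuracy, omission error, commission error) can be sketched for a binary cloud mask as follows; the toy truth/prediction pixel lists are illustrative, not validation data from the study:

```python
def mask_metrics(truth, pred):
    """Overall accuracy, omission error, and commission error for a binary
    cloud mask, from paired truth/prediction pixel values (1 = cloud)."""
    tp = sum(1 for t, p in zip(truth, pred) if t and p)
    tn = sum(1 for t, p in zip(truth, pred) if not t and not p)
    fn = sum(1 for t, p in zip(truth, pred) if t and not p)
    fp = sum(1 for t, p in zip(truth, pred) if not t and p)
    accuracy = (tp + tn) / len(truth)
    omission = fn / (tp + fn)      # truth clouds the algorithm missed
    commission = fp / (tp + fp)    # predicted clouds that are not clouds
    return accuracy, omission, commission

truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0]
acc, om, com = mask_metrics(truth, pred)
```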

  11. Automatic thermographic image defect detection of composites

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP

    2011-05-01

Detecting defects, and especially measuring defect sizes reliably, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. The Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. It can also offer complete independence from operator decisions through the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full width half maximum algorithm is used to quantify flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares analysis results against acceptance criteria) by comparing the most significant defect or group of defects against the inspection standards.
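The full width half maximum measurement can be sketched on a 1-D intensity profile taken across a flaw. A simplified version that counts samples at or above half the peak, without sub-sample interpolation (the paper's exact implementation is not described):

```python
def fwhm(profile):
    """Full width at half maximum of a 1-D defect intensity profile,
    measured in samples; a simplified sketch with no interpolation."""
    half = max(profile) / 2.0
    indices = [i for i, v in enumerate(profile) if v >= half]
    return indices[-1] - indices[0] + 1

# Toy thermographic line profile across a flaw (illustrative values):
profile = [0, 1, 3, 8, 10, 8, 3, 1, 0]
width = fwhm(profile)   # samples with value >= 5 (half of peak 10)
```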

  12. An interdisciplinary analysis of multispectral satellite data for selected cover types in the Colorado Mountains, using automatic data processing techniques. [hydrology, ecology, geology, vegetation, and mineral deposits

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Documentation is presented of the capability of the middle infrared portion of the electromagnetic spectrum to spectrally differentiate clouds from snow. Other portions of the spectrum cannot provide this capability.

13. [The application of wavelet analysis to remote detection of pollution clouds].

    PubMed

    Zhang, J; Jiang, F

    2001-08-01

The discrete wavelet transform (DWT) is used to analyse the spectra of pollution clouds in complicated environments and to extract small features. The DWT is a time-frequency analysis technique that detects subtle changes in the target spectrum. The results show that the DWT is an effective method for extracting features of a target cloud and improving the reliability of the monitoring alarm system.
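The feature-extraction idea can be sketched with one level of the Haar DWT, whose detail coefficients localize abrupt changes in a spectrum; this is a generic illustration, not the paper's specific wavelet or decomposition depth:

```python
def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform: approximation
    (local averages) and detail (local differences) coefficients. Subtle
    spectral features of a target cloud would show up in the detail band.
    Assumes an even-length input; a generic sketch only."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# A smooth baseline with one abrupt dip, standing in for a small
# absorption feature in a measured spectrum (illustrative values):
spectrum = [4.0, 4.0, 4.0, 4.0, 4.0, 2.0, 4.0, 4.0]
approx, detail = haar_dwt(spectrum)   # the dip appears only in `detail`
```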

  14. MPLNET V3 Cloud and Planetary Boundary Layer Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.

    2016-01-01

The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described, and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD, covering diurnal and seasonal trends, is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement over the previous version.

  15. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

Kinect camera and point cloud data from the Kinect's structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage, Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 3.1 Point Cloud Filtering: Dense point clouds derived from stereo are notoriously

  16. C+ detection of warm dark gas in diffuse clouds

    NASA Astrophysics Data System (ADS)

    Langer, W. D.; Velusamy, T.; Pineda, J. L.; Goldsmith, P. F.; Li, D.; Yorke, H. W.

    2010-10-01

We present the first results of the Herschel open time key program, Galactic Observations of Terahertz C+ (GOT C+), a survey of the [CII] 2P3/2-2P1/2 fine-structure line at 1.9 THz (158 μm) using the HIFI instrument on Herschel. We detected 146 interstellar clouds along sixteen lines-of-sight towards the inner Galaxy. We also acquired HI and CO isotopologue data along each line-of-sight for analysis of the physical conditions in these clouds. Here we analyze 29 diffuse clouds (AV < 1.3 mag) in this sample characterized by having [CII] and HI emission, but no detectable CO. We find that [CII] emission is generally stronger than expected for diffuse atomic clouds, and in a number of sources is much stronger than anticipated based on their HI column density. We show that the excess [CII] emission in these clouds is best explained by the presence of a significant component of diffuse warm H2 "dark gas". This first [CII] 158 μm detection of warm dark gas demonstrates the value of this tracer for mapping this gas throughout the Milky Way and in galaxies. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  17. Building a semi-automatic ontology learning and construction system for geosciences

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows direct participation by geoscientists who have no background in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowdsource and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. 
MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using just HTML5 and Javascript, thereby avoiding cumbersome server side coding present in the traditional approaches. The JSON format used in MongoDB is also suitable for storing and querying RDF data. We will store the domain ontologies and associated linked data in JSON/RDF formats. Our Web interface will be built upon the open source and configurable WebProtege ontology editor. We will develop a simplified mobile version of our user interface which will automatically detect the hosting device and adjust the user interface layout to accommodate different screen sizes. We will also use the Semantic Media Wiki that allows the user to store and query the data within the wiki pages. By using HTML 5, JavaScript, and WebGL, we aim to create an interactive, dynamic, and multi-dimensional user interface that presents various geosciences data sets in a natural and intuitive way.

  18. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    PubMed

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D T; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. 
Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/.

  19. CloudDOE: A User-Friendly Tool for Deploying Hadoop Clouds and Analyzing High-Throughput Sequencing Data with MapReduce

    PubMed Central

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. 
Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. Availability: CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/. PMID:24897343

  20. OT1_mputman_1: ASCII: All Sky observations of Galactic CII

    NASA Astrophysics Data System (ADS)

    Putman, M.

    2010-07-01

The Milky Way and other galaxies require a significant source of ongoing star formation fuel to explain their star formation histories. A new ubiquitous population of discrete, cold clouds has recently been discovered at the disk-halo interface of our Galaxy that could potentially provide this source of fuel. We propose to observe a small sample of these disk-halo clouds with HIFI to determine whether the level of [CII] emission detected suggests they represent the cooling of warm clouds at the interface between the star forming disk and halo. These cooling clouds are predicted by simulations of warm clouds moving into the disk-halo interface region. We target 5 clouds in this proposal for which we have high resolution HI maps and can observe the densest core of each cloud. The results of our observations will also be used to interpret the surprisingly high detections of [CII] for low HI column density clouds in the Galactic Plane by the GOT C+ Key Program, by extending the clouds probed to high latitude environments.

  1. Sensor data fusion for textured reconstruction and virtual representation of alpine scenes

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter

    2017-10-01

The concept of remote sensing is to provide information about a wide-range area without making physical contact with that area. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces including overhangs are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation, and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.

  2. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.

    PubMed

    D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano

    2015-01-01

    The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulatory pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with TopHat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data according to the user's needs.

  3. Assessing the Performance of a Machine Learning Algorithm in Identifying Bubbles in Dust Emission

    NASA Astrophysics Data System (ADS)

    Xu, Duo; Offner, Stella S. R.

    2017-12-01

    Stellar feedback created by radiation and winds from massive stars plays a significant role in both physical and chemical evolution of molecular clouds. This energy and momentum leaves an identifiable signature (“bubbles”) that affects the dynamics and structure of the cloud. Most bubble searches are performed “by eye,” which is usually time-consuming, subjective, and difficult to calibrate. Automatic classifications based on machine learning make it possible to perform systematic, quantifiable, and repeatable searches for bubbles. We employ a previously developed machine learning algorithm, Brut, and quantitatively evaluate its performance in identifying bubbles using synthetic dust observations. We adopt magnetohydrodynamics simulations, which model stellar winds launching within turbulent molecular clouds, as an input to generate synthetic images. We use a publicly available three-dimensional dust continuum Monte Carlo radiative transfer code, HYPERION, to generate synthetic images of bubbles in three Spitzer bands (4.5, 8, and 24 μm). We designate half of our synthetic bubbles as a training set, which we use to train Brut along with citizen-science data from the Milky Way Project (MWP). We then assess Brut’s accuracy using the remaining synthetic observations. We find that Brut’s performance after retraining increases significantly, and it is able to identify yellow bubbles, which are likely associated with B-type stars. Brut continues to perform well on previously identified high-score bubbles, and over 10% of the MWP bubbles are reclassified as high-confidence bubbles, which were previously marginal or ambiguous detections in the MWP data. We also investigate the influence of the size of the training set, dust model, evolutionary stage, and background noise on bubble identification.

  4. Automatic detection of typical dust devils from Mars landscape images

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji

    2018-02-01

    This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. These dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection of dust devils has not focused on these common types of dust devils, but on dust devils that appear on images to be irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection to be ground truth, the precision, recall and F-measure values are 0.77, 0.86, and 0.81, respectively.
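
    As a sanity check on the reported scores, the F-measure follows directly from the quoted precision and recall, and the two-threshold binarization of subtracted frames can be sketched as below; the function name and threshold values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_candidates(img_a, img_b, thr_bright, thr_dark):
    """Binarize the difference of two co-registered landscape frames.

    Two threshold sets (hypothetical values) target the two regimes the
    entry describes: tiny bright dust devils and obscured large ones.
    """
    diff = img_b.astype(np.int16) - img_a.astype(np.int16)
    bright = diff >= thr_bright    # small, high-contrast devils
    dark = diff <= -thr_dark       # large, low-contrast (obscured) devils
    return bright | dark

# F-measure (harmonic mean) from the reported precision and recall
precision, recall = 0.77, 0.86
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 2))  # 0.81, matching the reported value
```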

  5. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.
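
    The agreement statistic quoted above (kappa approximately 0.6) can be reproduced from a 2x2 agreement table; the counts below are invented purely to illustrate the computation, not taken from the study.

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 agreement table between an automatic
    system and a perceptual (expert) rating."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n  # observed agreement
    # chance agreement from the marginal rates of both raters
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

# e.g. 40 agreed-disordered, 40 agreed-normal, 10 + 10 disagreements
print(round(cohens_kappa(40, 10, 10, 40), 2))  # 0.6
```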

  6. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  7. Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products

    NASA Technical Reports Server (NTRS)

    Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny

    2015-01-01

    To enhance cloud property retrievals, LaRC and CIRA each developed algorithms that combine properties obtained from the passive, active, and imaging sensors of the A-Train satellite constellation. When global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) product shows a larger low-level cloud fraction over the tropical ocean, while the CIRA-produced Radar-Lidar (RL) product shows a larger mid-level cloud fraction at high latitudes. The difference in low-level cloud fraction is due to the different methods used to filter lidar-detected cloud layers, while the difference in mid-level clouds arises from the different priorities given to cloud boundaries derived from lidar and radar.

  8. Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection

    PubMed Central

    Giakoumis, Dimitris; Drosou, Anastasios; Cipresso, Pietro; Tzovaras, Dimitrios; Hassapis, George; Gaggioli, Andrea; Riva, Giuseppe

    2012-01-01

    This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing. PMID:23028461
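
    A Motion History Image, the spatiotemporal descriptor named above, can be sketched with the textbook update rule; the tau and decay values here are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, decay=16):
    """One Motion History Image update step (textbook formulation):
    pixels with current motion are set to the maximum value tau;
    all other pixels decay linearly toward 0, so recent motion stays
    bright and older motion fades."""
    out = np.where(motion_mask, tau,
                   np.maximum(mhi.astype(np.int16) - decay, 0))
    return out.astype(np.uint8)

# motion at (1, 1) in the first frame only, then two still frames
masks = [np.zeros((4, 4), bool) for _ in range(3)]
masks[0][1, 1] = True
mhi = np.zeros((4, 4), np.uint8)
for m in masks:
    mhi = update_mhi(mhi, m)
print(mhi[1, 1])  # 223, i.e. 255 minus two decay steps of 16
```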

  9. SkyProbe, monitoring the absolute atmospheric transmission in the optical

    NASA Astrophysics Data System (ADS)

    Cuillandre, Jean-charles; Magnier, Eugene; Mahoney, William

    2011-03-01

    Mauna Kea is known for its pristine seeing conditions, but sky transparency can be an issue for science operations since 25% of the nights are not photometric, mostly due to high-altitude cirrus. Since 2001, the original single-channel SkyProbe has gathered one exposure every minute during each observing night using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region pointed by the telescope for science operations, and exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data being gathered by the telescope's main science instrument. This system has proven crucial for decision making in the CFHT queued service observing (QSO), representing today 80% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. The new dual-color system (simultaneous B&V bands) will allow a better characterization of the sky properties atop Mauna Kea and will enable a better detection of the thinner cirrus (absorption down to 0.02 mag., i.e. 2%). SkyProbe is operated within the Elixir pipeline, a collection of tools used for handling the CFHT CCD mosaics (CFH12K and MegaCam), from data pre-processing to astrometric and photometric calibration.

  10. SkyProbe: Real-Time Precision Monitoring in the Optical of the Absolute Atmospheric Absorption on the Telescope Science and Calibration Fields

    NASA Astrophysics Data System (ADS)

    Cuillandre, J.-C.; Magnier, E.; Sabin, D.; Mahoney, B.

    2016-05-01

    Mauna Kea is known for its pristine seeing conditions but sky transparency can be an issue for science operations since at least 25% of the observable (i.e. open dome) nights are not photometric, an effect mostly due to high-altitude cirrus. Since 2001, the original single channel SkyProbe mounted in parallel on the Canada-France-Hawaii Telescope (CFHT) has gathered one V-band exposure every minute during each observing night using a small CCD camera offering a very wide field of view (35 sq. deg.) encompassing the region pointed by the telescope for science operations, and exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). The measurement of the true atmospheric absorption is achieved within 2%, a key advantage over all-sky direct thermal infrared imaging detection of clouds. The absolute measurement of the true atmospheric absorption by clouds and particulates affecting the data being gathered by the telescope's main science instrument has proven crucial for decision making in the CFHT queued service observing (QSO) representing today all of the telescope time. Also, science exposures taken in non-photometric conditions are automatically registered for a new observation at a later date at 1/10th of the original exposure time in photometric conditions to ensure a proper final absolute photometric calibration. Photometric standards are observed only when conditions are reported as being perfectly stable by SkyProbe. The more recent dual color system (simultaneous B & V bands) will offer a better characterization of the sky properties above Mauna Kea and should enable a better detection of the thinnest cirrus (absorption down to 0.01 mag., or 1%).
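
    The absorption figures quoted in these two SkyProbe entries follow from the standard magnitude-to-flux relation; this small sketch (not CFHT code) shows why 0.02 mag corresponds to roughly 2% and 0.01 mag to roughly 1% of the light absorbed.

```python
def absorption_fraction(delta_mag):
    """Fractional flux lost to an extinction of delta_mag magnitudes,
    from the standard relation delta_mag = -2.5 * log10(flux ratio)."""
    return 1.0 - 10 ** (-delta_mag / 2.5)

print(round(100 * absorption_fraction(0.02), 1))  # 1.8 (the quoted ~2%)
print(round(100 * absorption_fraction(0.01), 1))  # 0.9 (the quoted ~1%)
```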

  11. Phase-partitioning in mixed-phase clouds - An approach to characterize the entire vertical column

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Luke, E. P.; Seifert, P.

    2017-12-01

    The characterization of the entire vertical profile of phase-partitioning in mixed-phase clouds is a challenge which can be addressed by synergistic profiling measurements with ground-based polarization lidars and cloud radars. While lidars are sensitive to small particles and can thus detect supercooled liquid (SCL) layers, cloud radar returns are dominated by larger particles (like ice crystals). The maximum lidar observation height is determined by complete signal attenuation at a penetrated optical depth of about three. In contrast, cloud radars are able to penetrate multiple liquid layers and can thus be used to expand the identification of cloud phase to the entire vertical column beyond the lidar extinction height, if morphological features in the radar Doppler spectrum can be related to the existence of SCL. Relevant spectral signatures such as bimodalities and spectral skewness can be related to cloud phase by training a neural network appropriately in a supervised learning scheme, with lidar measurements functioning as supervisor. The neural network output (prediction of SCL location) derived using cloud radar Doppler spectra can be evaluated with several parameters such as liquid water path (LWP) detected by microwave radiometer (MWR) and (liquid) cloud base detected by ceilometer or Raman lidar. The technique has been previously tested on data from Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) instruments in Barrow, Alaska and is in this study utilized for observations from the Leipzig Aerosol and Cloud Remote Observations System (LACROS) during the Analysis of the Composition of Clouds with Extended Polarization Techniques (ACCEPT) field experiment in Cabauw, Netherlands in Fall 2014. Comparisons to supercooled-liquid layers as classified by CLOUDNET are provided.

  12. Optical property retrievals of subvisual cirrus clouds from OSIRIS limb-scatter measurements

    NASA Astrophysics Data System (ADS)

    Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.

    2012-08-01

    We present a technique for retrieving the optical properties of subvisual cirrus clouds detected by OSIRIS, a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Optical properties from an in-situ database are used to simulate scattering by cloud particles. With the appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is shown, and it is demonstrated that the retrieved extinction profile accurately models the measured in-cloud radiances from OSIRIS. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, the work described in this manuscript offers a very useful method for building a long-term global record of the properties of these clouds.

  13. [Application of single-band brightness variance ratio to the interference dissociation of cloud for satellite data].

    PubMed

    Qu, Wei-ping; Liu, Wen-qing; Liu, Jian-guo; Lu, Yi-huai; Zhu, Jun; Qin, Min; Liu, Cheng

    2006-11-01

    In satellite remote-sensing detection, cloud acts as an interference that plays a negative role in data retrieval, so discerning cloud fields with high fidelity is a prerequisite for subsequent research. Rooted in the atmospheric radiation characteristics of the cloud layer, the present paper offers a solution in which the single-band brightness variance ratio is used to detect the relative intensity of cloud clutter and thereby delineate cloud fields rapidly and exactly. Formulae are given for the brightness variance ratio of a satellite image, the image reflectance variance ratio, and the brightness temperature variance ratio of a thermal infrared image, enabling cloud elimination to produce data free from cloud interference. Based on the differing penetration capabilities of different spectral bands, their cloud penetration is evaluated objectively together with the factors that influence the penetration effect. Finally, a multi-band data fusion task is completed using infrared-penetration image data from cirrus nothus. The reconstructed image data are of good quality and exactitude, recovering the real visible-band data covered by the cloud fields. Statistics indicate that the waveband correlation remains consistent with the image data after fusion.

  14. Automatic identification of artifacts in electrodermal activity data.

    PubMed

    Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind

    2015-01-01

    Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
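
    A minimal sketch of the artifact-detection idea, assuming simple hand-picked features and a placeholder threshold rule rather than the paper's trained classifier: abrupt, non-physiological jumps in the EDA signal are flagged as artifacts, while slow skin-conductance drifts are not.

```python
import numpy as np

def window_features(eda, fs=8):
    """Simple shape features over a short EDA window (illustrative
    choices, not the paper's feature set). Electrode motion produces
    much steeper sample-to-sample jumps than physiological responses."""
    d = np.diff(eda)
    return {
        "max_abs_deriv": float(np.max(np.abs(d)) * fs),  # µS per second
        "range": float(np.ptp(eda)),
    }

def is_artifact(feats, deriv_thresh=5.0):
    # placeholder rule standing in for the trained classifier
    return feats["max_abs_deriv"] > deriv_thresh

fs = 8
t = np.arange(0, 5, 1 / fs)
clean = 2 + 0.1 * np.sin(0.5 * t)        # slow physiological drift
noisy = clean.copy(); noisy[20] += 2.0   # electrode-motion spike
print(is_artifact(window_features(clean, fs)))  # False
print(is_artifact(window_features(noisy, fs)))  # True
```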

  15. Comparison of Monthly Mean Cloud Fraction and Cloud Optical depth Determined from Surface Cloud Radar, TOVS, AVHRR, and MODIS over Barrow, Alaska

    NASA Technical Reports Server (NTRS)

    Uttal, Taneil; Frisch, Shelby; Wang, Xuan-Ji; Key, Jeff; Schweiger, Axel; Sun-Mack, Sunny; Minnis, Patrick

    2005-01-01

    A one year comparison is made of mean monthly values of cloud fraction and cloud optical depth over Barrow, Alaska (71 deg 19.378 min North, 156 deg 36.934 min West) between 35 GHz radar-based retrievals, the TOVS Pathfinder Path-P product, the AVHRR APP-X product, and a MODIS based cloud retrieval product from the CERES-Team. The data sets represent largely disparate spatial and temporal scales, however, in this paper, the focus is to provide a preliminary analysis of how the mean monthly values derived from these different data sets compare, and determine how they can best be used separately, and in combination to provide reliable estimates of long-term trends of changing cloud properties. The radar and satellite data sets described here incorporate Arctic specific modifications that account for cloud detection challenges specific to the Arctic environment. The year 2000 was chosen for this initial comparison because the cloud radar data was particularly continuous and reliable that year, and all of the satellite retrievals of interest were also available for the year 2000. Cloud fraction was chosen as a comparison variable as accurate detection of cloud is the primary product that is necessary for any other cloud property retrievals. Cloud optical depth was additionally selected as it is likely the single cloud property that is most closely correlated to cloud influences on surface radiation budgets.

  16. Observations of temporal change of nighttime cloud cover from Himawari 8 and ground-based sky camera over Chiba, Japan

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Gacal, G. F. B.; Kuze, H.

    2017-12-01

    Detection of nighttime cloud from Himawari 8 is implemented using the difference of digital numbers from bands 13 (10.4 µm) and 7 (3.9 µm). A digital number difference of -1.39×10^4 can be used as a threshold to separate clouds from clear-sky conditions. For ground-based observations over Chiba, a digital camera (Canon PowerShot A2300) is used to take images of the sky every 5 minutes at an exposure time of 5 s at the Center for Environmental Remote Sensing, Chiba University. From these images, cloud cover values are obtained using a threshold algorithm (Gacal et al., 2016). Ten-minute nighttime cloud cover values from the two datasets are compared and analyzed from 29 May to 05 June 2017 (20:00-03:00 JST). When compared with lidar data, the camera can detect thick high-level clouds up to 10 km. The results show that during clear-sky conditions (02-03 June), both camera and satellite cloud cover values show 0% cloud cover. During cloudy conditions (05-06 June), the camera shows almost 100% cloud cover while satellite cloud cover values range from 60 to 100%. These low values can be attributed to the presence of low-level thin clouds (about 2 km above the ground) as observed by the National Institute for Environmental Studies lidar located inside Chiba University. This difference in cloud cover values shows that the camera can produce accurate cloud cover values for low-level clouds that are sometimes not detected by satellites. The opposite occurs when high-level clouds are present (01-02 June): derived satellite cloud cover is almost 100% during the whole night, while the ground-based camera shows cloud cover values that range from 10 to 100% over the same interval. The fluctuating values can be attributed to the presence of thin clouds located at around 6 km above the ground together with low-level clouds (about 1 km). Since the camera relies on reflected city lights, it is possible that the high-level thin clouds are not observed by the camera but are observed by the satellite; such conditions also involve layers of clouds that cannot all be observed by either instrument alone. The results of this study show that the two instruments can be used to correct each other to provide better cloud cover values. These corrections depend on the height and thickness of the clouds. No correction is necessary when the sky is clear.
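
    The camera-side threshold algorithm referenced above can be sketched as a per-pixel brightness test; the threshold value below is an assumed illustration, not the published one.

```python
import numpy as np

def cloud_cover_fraction(gray, threshold=50):
    """Threshold-based nighttime cloud cover in the spirit of the cited
    algorithm (Gacal et al., 2016). Clouds reflect city lights upward,
    so cloudy pixels appear brighter than the dark night sky; the cover
    is the fraction of pixels above the brightness threshold."""
    cloudy = gray >= threshold
    return cloudy.sum() / gray.size

sky = np.zeros((100, 100), np.uint8)  # dark, clear night sky
sky[:30, :] = 120                     # bright band of cloud
print(cloud_cover_fraction(sky))      # 0.3
```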

  17. Introduction and analysis of several FY3C-MWHTS cloud/rain screening methods

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqing

    2017-04-01

    Data assimilation of satellite microwave sounder observations is very important for numerical weather prediction. Fengyun-3C (FY-3C), launched in September 2013, carries two such sounders: MWTS (MicroWave Temperature Sounder) and MWHTS (MicroWave Humidity and Temperature Sounder). These data should be quality-controlled before assimilation, and cloud/rain detection is one of the crucial steps. This paper introduces different cloud/rain detection methods based on MWHTS, VIRR (Visible and InfraRed Radiometer) and MWRI (Microwave Radiation Imager) observations. We designed 6 cloud/rain detection combinations and then analyzed the application effect of these schemes. The difference between observations and model simulations for the FY-3C MWHTS channels was calculated as a parameter for analysis. Both RTTOV and CRTM were used for fast radiance simulation of the MWHTS channels.

  18. Volcanic eruption detection with TOMS

    NASA Technical Reports Server (NTRS)

    Krueger, Arlin J.

    1987-01-01

    The Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) is designed for mapping of the atmospheric ozone distribution. Absorption by sulfur dioxide at the same ultraviolet spectral wavelengths makes it possible to observe and resolve the size of volcanic clouds. The sulfur dioxide absorption is discriminated from ozone and water clouds in the data processing by their spectral signatures. Thus, the sulfur dioxide can serve as a tracer which appears in volcanic eruption clouds because it is not present in other clouds. The detection limit with TOMS is close to the theoretical limit due to telemetry signal quantization of 1000 metric tons (5-sigma threshold) within the instrument field of view (50 by 50 km near the nadir). Requirements concerning the use of TOMS in detection of eruptions, geochemical cycles, and volcanic climatic effects are discussed.

  19. Comparison between SAGE II and ISCCP high-level clouds. 1: Global and zonal mean cloud amounts

    NASA Technical Reports Server (NTRS)

    Liao, Xiaohan; Rossow, William B.; Rind, David

    1995-01-01

    Global high-level clouds identified in Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation measurements for January and July in the period 1985 to 1990 are compared with near-nadir-looking observations from the International Satellite Cloud Climatology Project (ISCCP). Global and zonal mean high-level cloud amounts from the two data sets agree very well, if clouds with layer extinction coefficients of less than 0.008/km at 1.02 micrometers wavelength are removed from the SAGE II results and all detected clouds are interpreted to have an average horizontal size of about 75 km along the 200 km transmission path length of the SAGE II observations. The SAGE II results are much more sensitive to variations of assumed cloud size than to variations of detection threshold. The geographical distribution of cloud fractions shows good agreement, but systematic regional differences also indicate that the average cloud size varies somewhat among different climate regimes. The more sensitive SAGE II results show that about one third of all high-level clouds are missed by ISCCP but that these clouds have very low optical thicknesses (less than 0.1 at 0.6 micrometers wavelength). SAGE II sampling error in monthly zonal cloud fraction is shown to produce no bias, to be less than the intraseasonal natural variability, but to be comparable with the natural variability at longer time scales.

  20. Terrestrial laser scanning for geometry extraction and change monitoring of rubble mound breakwaters

    NASA Astrophysics Data System (ADS)

    Puente, I.; Lindenbergh, R.; González-Jorge, H.; Arias, P.

    2014-05-01

    Rubble mound breakwaters are coastal defense structures that protect harbors and beaches from the impacts of both littoral drift and storm waves. They occasionally break, leading to catastrophic damage to surrounding human populations and resulting in huge economic and environmental losses. Ensuring their stability is considered to be of vital importance and is the major reason for setting up breakwater monitoring systems. Terrestrial laser scanning has been recognized as a monitoring technique for existing infrastructure. Its capability for measuring large amounts of accurate points in a short period of time is also well proven. In this paper we first introduce a method for the automatic extraction of the face geometry of concrete cubic blocks, as typically used in breakwaters. Point clouds are segmented based on their orientation and location. Then we compare corresponding cuboids of three co-registered point clouds to estimate their transformation parameters over time. The extraction method is demonstrated on scan data from the Baiona breakwater (Spain), while the change detection is demonstrated on repeated scan data of concrete bricks, where the changing scenario was simulated. The application of the presented methodology has verified its effectiveness for outlining the 3D breakwater units and analyzing their changes at the millimeter level. Breakwater management activities could benefit from this initial version of the method in order to improve their productivity.
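
    Segmenting block faces by orientation requires estimating each patch's surface normal; a common approach (assumed here, not necessarily the authors' exact method) takes the singular vector of the centered points with the smallest singular value.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of a roughly planar patch (e.g. one face of a cubic
    block) via SVD of the centered coordinates: the right singular
    vector with the smallest singular value is orthogonal to the face
    and can be used to group points by orientation."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

# synthetic face: points sampled on the z = 0 plane
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
pts = np.c_[xy, np.zeros(200)]
n = plane_normal(pts)
print(np.abs(n))  # close to [0, 0, 1]: the face's upward normal
```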

  1. Low-Frequency Carbon Recombination Lines in the Orion Molecular Cloud Complex

    NASA Astrophysics Data System (ADS)

    Tremblay, Chenoa D.; Jordan, Christopher H.; Cunningham, Maria; Jones, Paul A.; Hurley-Walker, Natasha

    2018-05-01

    We detail tentative detections of low-frequency carbon radio recombination lines from within the Orion molecular cloud complex observed at 99-129 MHz. These tentative detections include one alpha transition and one beta transition over three locations and are located within the diffuse regions of dust observed in the infrared at 100 μm, the Hα emission detected in the optical, and the synchrotron radiation observed in the radio. With these observations, we are able to study the radiation mechanism transition from collisionally pumped to radiatively pumped within the H ii regions within the Orion molecular cloud complex.

  2. Newly detected molecules in dense interstellar clouds

    NASA Astrophysics Data System (ADS)

    Irvine, William M.; Avery, L. W.; Friberg, P.; Matthews, H. E.; Ziurys, L. M.

    Several new interstellar molecules have been identified including C2S, C3S, C5H, C6H and (probably) HC2CHO in the cold, dark cloud TMC-1; and the discovery of the first interstellar phosphorus-containing molecule, PN, in the Orion "plateau" source. Further results include the observations of 13C3H2 and C3HD, and the first detection of HCOOH (formic acid) in a cold cloud.

  3. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  4. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  5. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique, a convolutional neural network (CNN), with Zernike polynomial fitting (ZPF). The CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
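
As a rough illustrative sketch (not the authors' implementation), the ZPF step can be thought of as a least-squares fit of low-order aberration polynomials to the phase map, restricted to the background pixels a segmenter has identified; the fitted aberration surface is then subtracted everywhere. The basis below uses only a few Zernike-like terms (piston, tilt, defocus, astigmatism) and a hypothetical boolean `background_mask`:

```python
import numpy as np

def zernike_basis(x, y):
    """Low-order Zernike-like polynomial terms on normalized coordinates.
    Columns: piston, tilt-x, tilt-y, defocus, two astigmatism terms."""
    return np.stack([np.ones_like(x), x, y,
                     2 * (x**2 + y**2) - 1,   # defocus
                     x**2 - y**2,             # astigmatism 0/90
                     2 * x * y], axis=-1)     # astigmatism 45

def compensate_phase(phase, background_mask):
    """Fit the aberration polynomial on background pixels only,
    then subtract it from the whole phase map."""
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = 2 * xx / (w - 1) - 1   # normalize coordinates to [-1, 1]
    y = 2 * yy / (h - 1) - 1
    A = zernike_basis(x[background_mask], y[background_mask])
    coeffs, *_ = np.linalg.lstsq(A, phase[background_mask], rcond=None)
    aberration = zernike_basis(x, y) @ coeffs
    return phase - aberration

# synthetic check: a pure tilt+defocus "aberration" should flatten to ~0
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
x = 2 * xx / (w - 1) - 1
y = 2 * yy / (h - 1) - 1
phase = 0.5 * x + 0.2 * (2 * (x**2 + y**2) - 1)
mask = np.ones((h, w), dtype=bool)   # everything is background here
flat = compensate_phase(phase, mask)
print(float(np.abs(flat).max()))
```

In the paper's pipeline the mask would come from the CNN rather than being all-true as in this toy check.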

  6. Diurnal cycle and seasonal variation of cloud cover over the Tibetan Plateau as determined from Himawari-8 new-generation geostationary satellite data.

    PubMed

    Shang, Huazhe; Letu, Husi; Nakajima, Takashi Y; Wang, Ziming; Ma, Run; Wang, Tianxing; Lei, Yonghui; Ji, Dabin; Li, Shenshen; Shi, Jiancheng

    2018-01-18

    Analysis of cloud cover and its diurnal variation over the Tibetan Plateau (TP) is highly reliant on satellite data; however, the accuracy of cloud detection from both polar-orbiting and geostationary satellites over this area remains unclear. The new-generation geostationary Himawari-8 satellite provides high-resolution spatial and temporal information about clouds over the Tibetan Plateau. In this study, the cloud detection of MODIS and AHI is investigated and validated against CALIPSO measurements. The false alarm rates of AHI and MODIS in cloud identification over the TP were 7.51% and 1.94%, respectively, and the cloud hit rates were 73.55% and 80.15%, respectively. Using hourly cloud-cover data from Himawari-8, we found that at the monthly scale, cloud cover over the TP tends to increase throughout the day, with the minimum and maximum cloud fractions occurring at 10:00 and 18:00 local time. Due to the limited time resolution of polar-orbiting satellites, MODIS underestimates daytime average cloud cover by approximately 4.00% at the annual scale, with larger biases during spring (5.40%) and winter (5.90%).
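
The hit rate and false alarm rate quoted above follow from the standard 2x2 contingency counts of a binary cloud mask against a reference. A minimal sketch, using one common definition of each score (the paper may define them slightly differently) and toy masks in place of real AHI/CALIPSO collocations:

```python
import numpy as np

def cloud_scores(detected, reference):
    """Hit rate and false alarm rate of a binary cloud mask against a
    reference mask (e.g., collocated CALIPSO profiles)."""
    d = np.asarray(detected, dtype=bool)
    r = np.asarray(reference, dtype=bool)
    hits = np.sum(d & r)              # cloud correctly detected
    misses = np.sum(~d & r)           # cloud missed
    false_alarms = np.sum(d & ~r)     # clear sky flagged as cloud
    correct_clear = np.sum(~d & ~r)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_clear)
    return hit_rate, false_alarm_rate

# toy collocation: 6 profiles
detected  = [1, 1, 0, 0, 1, 0]
reference = [1, 0, 0, 0, 1, 1]
hr, far = cloud_scores(detected, reference)
print(hr, far)   # 2 hits of 3 clouds; 1 false alarm of 3 clear profiles
```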

  7. A composite large-scale CO survey at high galactic latitudes in the second quadrant

    NASA Technical Reports Server (NTRS)

    Heithausen, A.; Stacy, J. G.; De Vries, H. W.; Mebold, U.; Thaddeus, P.

    1993-01-01

    Surveys undertaken in the 2nd quadrant of the Galaxy with the CfA 1.2 m telescope have been combined to produce a map covering about 620 sq deg in the 2.6 mm CO(J = 1 - 0) line at high galactic latitudes. There is CO emission from molecular 'cirrus' clouds in about 13 percent of the region surveyed. The CO clouds are grouped into three major cloud complexes with 29 individual members. All clouds are associated with infrared emission at 100 microns, although there is no one-to-one correlation between the corresponding intensities. CO emission is detected in all bright and dark Lynds' nebulae cataloged in that region; however, not all CO clouds are visible on optical photographs as reflection or absorption features. The clouds are probably local. At an adopted distance of 240 pc, cloud sizes range from 0.1 to 30 pc and cloud masses from 1 to 1600 solar masses. The molecular cirrus clouds contribute between 0.4 and 0.8 solar masses/sq pc to the surface density of molecular gas in the galactic plane. Only 26 percent of the 'infrared-excess clouds' in the area surveyed actually show CO, and about 2/3 of the clouds detected in CO do not show an infrared excess.

  8. SkyProbeBV: dual-color absolute sky transparency monitor to optimize science operations

    NASA Astrophysics Data System (ADS)

    Cuillandre, Jean-Charles; Magnier, Eugene; Sabin, Dan; Mahoney, Billy

    2008-07-01

    Mauna Kea (4200 m elevation, Hawaii) is known for its pristine seeing conditions, but sky transparency can be an issue for science operations: 25% of the nights are not photometric, with cloud coverage mostly due to high-altitude thin cirrus. The Canada-France-Hawaii Telescope (CFHT) is upgrading its real-time sky transparency monitor in the optical domain (V-band) into a dual-color system by adding a B-band channel and redesigning the entire optical and mechanical assembly. Since 2000, the original single-channel SkyProbe has gathered one exposure every minute during each observing night using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region pointed by the telescope for science operations, and exposures long enough (30 seconds) to capture at least 100 stars of the Hipparcos Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data being gathered by the telescope's main science instrument. This system has proven crucial for decision making in the CFHT queued service observing (QSO), which today represents 95% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. If the absorption is too high, exposures can be repeated, or the observing can be done for a lower-ranked science program. The new dual-color system (simultaneous B and V bands) will allow a better characterization of the sky properties above Mauna Kea and should enable a better detection of the thinner cirrus (absorption down to 0.02 mag., i.e. 2%).
SkyProbe is operated within the Elixir pipeline, a collection of tools used for handling the CFHT CCD mosaics (CFH12K and MegaCam), from data pre-processing to astrometric and photometric calibration.
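
The "0.02 mag., i.e. 2%" equivalence above follows from the standard magnitude-to-transmission relation, T = 10^(-0.4 Δm). A minimal check (my own arithmetic, not from the paper):

```python
def cloud_absorption_fraction(delta_mag):
    """Fractional light loss for an extinction of delta_mag magnitudes.
    Transmission T = 10**(-0.4 * delta_mag); loss = 1 - T."""
    return 1.0 - 10.0 ** (-0.4 * delta_mag)

# the quoted detection limit of 0.02 mag corresponds to ~1.8% flux loss,
# i.e. roughly the "2%" figure in the abstract
print(cloud_absorption_fraction(0.02))
```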

  9. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound.

    PubMed

    Jayatilake, Dushyantha; Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that could cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive, and causes exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template matching approach. We analyzed the wavelet transformation-based spectral characteristics and the temporal characteristics of simultaneous synchronised VFSS and swallowing sound recordings of 25% barium mixed 3-ml water swallows of 70 subjects and the dry or saliva swallowing sound of 15 healthy subjects to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%-100%) and a recall of 93.9% (range: 72.7%-100%) for the 71 episodes of dry swallows.
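
The core of a template matching detector like the one described is sliding a reference waveform over the incoming signal and thresholding a similarity score. A simplified sketch using normalized cross-correlation on a synthetic 1-D "swallow sound" (the authors' actual template is derived from wavelet-domain characteristics of VFSS-synchronized recordings; everything below is illustrative):

```python
import numpy as np

def match_template(signal, template, threshold=0.7):
    """Slide the template over the signal; return start indices where the
    normalized cross-correlation exceeds the threshold."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    hits = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, t)) / n   # NCC in [-1, 1]
        if score > threshold:
            hits.append(i)
    return hits

# synthetic check: a windowed burst embedded twice in background noise
rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 6 * np.pi, 80)) * np.hanning(80)
signal = rng.normal(0, 0.2, size=1000)
signal[100:180] += template
signal[600:680] += template
hits = match_template(signal, template)
print(hits)   # clusters of indices near 100 and 600
```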

  10. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound

    PubMed Central

    Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that could cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive, and causes exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template matching approach. We analyzed the wavelet transformation-based spectral characteristics and the temporal characteristics of simultaneous synchronised VFSS and swallowing sound recordings of 25% barium mixed 3-ml water swallows of 70 subjects and the dry or saliva swallowing sound of 15 healthy subjects to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%–100%) and a recall of 93.9% (range: 72.7%–100%) for the 71 episodes of dry swallows. PMID:27170905

  11. Farm-specific economic value of automatic lameness detection systems in dairy cattle: From concepts to operational simulations.

    PubMed

    Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig

    2018-01-01

    Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits for purchase prices of automatic detection systems. The framework allowed identification of knowledge gaps obstructing more accurate estimation of economic value. These include insights into cost reductions due to early detection and treatment, and links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and developments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
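
The "upper limit for purchase price" idea reduces to a discounted cash-flow calculation: the most a farm should pay is the net benefit (avoided lameness losses minus running costs) summed over the system's lifespan at the chosen discount rate. A minimal sketch with entirely hypothetical numbers (not values from the paper):

```python
def max_purchase_price(annual_avoided_losses, annual_costs,
                       discount_rate, lifespan_years):
    """Upper limit on purchase price: net present value of the yearly
    net benefit over the system's lifespan."""
    return sum((annual_avoided_losses - annual_costs) / (1 + discount_rate) ** t
               for t in range(1, lifespan_years + 1))

# hypothetical farm: 3000/yr avoided losses, 500/yr running costs,
# 5% discount rate, 10-year system lifespan
price_ceiling = max_purchase_price(3000, 500, 0.05, 10)
print(round(price_ceiling, 2))   # → 19304.34
```

Note how the result is sensitive to the discount rate and lifespan, consistent with the sensitivity findings reported above.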

  12. Radiative transfer model for aerosols at infrared wavelengths for passive remote sensing applications: revisited.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Embury, Janon F

    2008-11-01

    We introduced a two-dimensional radiative transfer model for aerosols in the thermal infrared [Appl. Opt. 45, 6860-6875 (2006)]. In that paper we superimposed two orthogonal plane-parallel layers to compute the radiance due to a two-dimensional (2D) rectangular aerosol cloud. In this paper we revisit the model and correct an error in the interaction of the two layers. We derive new expressions relating to the signal content of the radiance from an aerosol cloud based on the concept of five directional thermal contrasts: four for the 2D diffuse radiance and one for direct radiance along the line of sight. The new expressions give additional insight on the radiative transfer processes within the cloud. Simulations for Bacillus subtilis var. niger (BG) bioaerosol and dustlike kaolin aerosol clouds are compared and contrasted for two geometries: an airborne sensor looking down and a ground-based sensor looking up. Simulation results suggest that aerosol cloud detection from an airborne platform may be more challenging than for a ground-based sensor and that the detection of an aerosol cloud in emission mode (negative direct thermal contrast) is not the same as the detection of an aerosol cloud in absorption mode (positive direct thermal contrast).

  13. Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for mountainous vegetation ecosystems studies.

    NASA Astrophysics Data System (ADS)

    Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.

    2016-12-01

    The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountain watersheds (Painter et al., 2015). To do this, ASO flies weekly over mountainous areas during the snowfall and snowmelt seasons, with additional flights in snow-off conditions to calculate Digital Terrain Models (DTMs). In this study, we focus on the reliability of ASO lidar data for characterizing 3D forest vegetation structure. The density of a single point cloud acquisition is nearly 1 pt/m2, which is not optimal for properly characterizing vegetation. However, ASO covers a given study site up to 14 times a year, which enables computing a high-resolution point cloud by merging single acquisitions. In this study, we present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, lidar datasets might have significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. A large number of methodologies address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets, such as building corners, structures like electric poles, DTM breaklines, or deformations. However, they are not suited to our experiment. First, single-acquisition point clouds have low density, which makes the extraction of primitive features difficult. Second, the landscape changes significantly between flights due to snowfall and snowmelt. 
Therefore, we developed a method to automatically register point clouds using tree apexes as keypoints, because these are features expected to change little during the winter season. We applied the method to 14 lidar datasets (12 snow-on and 2 snow-off) acquired over the Tuolumne River Basin (California) in 2014. To assess the reliability of the merged point cloud, we analyze the quality of vegetation-related products such as canopy height models (CHMs) and vertical vegetation profiles.
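
A bare-bones version of keypoint-based planimetric registration can be sketched as follows: given the (x, y) apex positions extracted from two acquisitions, pair each source apex with its nearest reference apex and take the median offset as the shift estimate. This is my own simplification of the idea, not the authors' algorithm (which presumably handles outliers and matching more carefully):

```python
import numpy as np

def estimate_shift(apexes_ref, apexes_src, max_match_dist=5.0):
    """Median (dx, dy) such that apexes_src + shift ≈ apexes_ref.
    Pairs farther apart than max_match_dist are discarded as mismatches."""
    diffs = apexes_ref[:, None, :] - apexes_src[None, :, :]   # (R, S, 2)
    d = np.linalg.norm(diffs, axis=-1)                        # (R, S)
    nearest = d.argmin(axis=0)                 # nearest reference per source apex
    offsets = apexes_ref[nearest] - apexes_src
    keep = np.linalg.norm(offsets, axis=1) <= max_match_dist
    return np.median(offsets[keep], axis=0)

# synthetic check: source apexes are the reference apexes shifted by (-1.2, 0.7)
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1000, size=(50, 2))      # 50 tree apexes
src = ref - np.array([1.2, -0.7])             # shifted acquisition
shift = estimate_shift(ref, src)
print(shift)   # ≈ [1.2, -0.7]
```

The median makes the estimate robust to the occasional apex that matches the wrong tree.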

  14. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    NASA Astrophysics Data System (ADS)

    Forster, Linda; Seefeldner, Meinhard; Wiegner, Matthias; Mayer, Bernhard

    2017-07-01

    Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays, is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays were produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.

  15. Temporally rendered automatic cloud extraction (TRACE) system

    NASA Astrophysics Data System (ADS)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and the 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.
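
Dynamic background subtraction, the first of TRACE's two discrimination methods, can be sketched with a running-average background model: each pixel's background estimate is updated only while it looks like background, and a pixel is flagged as cloud when it deviates from that estimate by more than a few standard deviations. This is a generic illustration of the technique, not TRACE's actual code, and the initial variance and thresholds are arbitrary:

```python
import numpy as np

def detect_cloud(frames, alpha=0.05, k=3.0):
    """Running-average background subtraction.
    A pixel is flagged as cloud when |frame - background| > k * sigma."""
    bg = frames[0].astype(float)
    var = np.full_like(bg, 25.0)          # illustrative initial variance
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        diff = f - bg
        mask = np.abs(diff) > k * np.sqrt(var)
        upd = ~mask                        # update model only where no cloud
        bg[upd] += alpha * diff[upd]
        var[upd] = (1 - alpha) * var[upd] + alpha * diff[upd] ** 2
        masks.append(mask)
    return masks

# synthetic test: static noisy background, then a bright obscurant square
rng = np.random.default_rng(1)
base = rng.normal(100, 2, size=(32, 32))
frames = [base + rng.normal(0, 2, size=(32, 32)) for _ in range(10)]
for f in frames[5:]:
    f[8:16, 8:16] += 60.0                  # "cloud" appears mid-sequence
masks = detect_cloud(np.array(frames))
print(masks[-1][8:16, 8:16].mean())        # fraction of cloud pixels flagged
```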

  16. Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.

    PubMed

    Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia

    2018-01-01

    It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting a negative bias in deviance detection that was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.

  17. Designing and Implementing a Retrospective Earthquake Detection Framework at the U.S. Geological Survey National Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W.; Benz, H.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival time picking, social-media-based event detection, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival time picking given a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. In addition, this same infrastructure provides an improved and convenient structure to support access to automatic detection data for both research and algorithmic development.

  18. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel value corresponds to the altitude measured by LiDAR, visualization of 2D/3D images in various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
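
The conversion described, from an (x, y, z) point list to an image whose pixel value is the LiDAR-measured altitude, amounts to rasterizing the points onto a grid and keeping one height per cell. A minimal sketch in Python rather than IDL (keeping the highest return per cell, one plausible choice; cell size and tie-breaking are assumptions):

```python
import numpy as np

def points_to_height_image(points, cell=1.0):
    """Convert an N x 3 array of (x, y, z) LiDAR returns into a 2D grid
    whose pixel value is the highest z in each cell (empty cells are NaN)."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / cell).astype(int) + 1
    img = np.full((rows, cols), np.nan)
    ix = ((points[:, 0] - origin[0]) / cell).astype(int)
    iy = ((points[:, 1] - origin[1]) / cell).astype(int)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(img[y, x]) or z > img[y, x]:
            img[y, x] = z
    return img

pts = np.array([[0.2, 0.3, 5.0],
                [0.7, 0.4, 9.0],   # same cell as above: higher return wins
                [2.5, 1.5, 3.0]])
img = points_to_height_image(pts, cell=1.0)
print(img.shape, img[0, 0], img[1, 2])
```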

  19. DSCOVR/EPIC observations of SO2 reveal dynamics of young volcanic eruption clouds

    NASA Astrophysics Data System (ADS)

    Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Bhartia, P. K.; Prata, F. J.

    2017-12-01

    Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) and infrared (IR) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these observations from low Earth orbit (LEO) is poor temporal resolution (once per day at low latitudes). Furthermore, most currently operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes, limiting our ability to elucidate processes in fresh, rapidly evolving volcanic eruption clouds. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, from sunrise to sunset, offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and IR sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), we demonstrate how the increased observation frequency afforded by DSCOVR/EPIC permits more timely volcanic eruption detection and novel analyses of the temporal evolution of volcanic clouds. Although EPIC has detected several mid- to high-latitude volcanic eruptions since launch, we focus on recent eruptions of Bogoslof volcano (Aleutian Islands, AK, USA). A series of EPIC exposures from May 28-29, 2017, uniquely captures the evolution of SO2 mass in a young Bogoslof eruption cloud, showing separation of SO2- and ice-rich regions of the cloud. 
We show how analyses of these sequences of EPIC SO2 data can elucidate poorly understood processes in transient eruption clouds, such as the relative roles of H2S oxidation and ice scavenging in modifying volcanic SO2 emissions. Detection of these relatively small events also proves EPIC's ability to provide timely detection of volcanic clouds in the upper troposphere and lower stratosphere.

  20. Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8

    NASA Astrophysics Data System (ADS)

    Joshi, P.

    2015-12-01

    The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observations of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. So, in this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients using the fitting models on annual training data. The time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit L between 0.5σ and σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. As a result of the multi-temporal operation and the ability to recreate the multi-temporal database of images using only the coefficients of the Fourier regression, our algorithm is largely storage and time efficient. 
The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
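
The core harmonic-regression-plus-control-chart idea can be sketched compactly: fit a Fourier series to a pixel's annual time series, then flag observations whose residual falls below the control limit (clouds depress indices like NDVI). This is an illustrative reconstruction with synthetic data and an arbitrary L = 0.8σ, not the authors' multi-index algorithm:

```python
import numpy as np

def harmonic_design(t, period=365.0, order=2):
    """Design matrix: intercept plus sin/cos pairs up to the given order."""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        w = 2 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.stack(cols, axis=1)

def flag_clouds(t, ndvi, L=0.8, order=2):
    """Fit the harmonic curve, then flag observations whose residual
    drops more than L*sigma below the fit."""
    X = harmonic_design(t, order=order)
    beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    resid = ndvi - X @ beta
    return resid < -L * resid.std()

# synthetic NDVI series (8-day cadence) with two cloud-contaminated dates
t = np.arange(0, 365, 8.0)
ndvi = 0.5 + 0.25 * np.sin(2 * np.pi * t / 365)
ndvi[10] -= 0.4   # cloud hit
ndvi[30] -= 0.5   # cloud hit
mask = flag_clouds(t, ndvi)
print(np.where(mask)[0])
```

A one-sided limit is used here because cloud contamination pushes NDVI down; for an index like HOT, where clouds push values up, the sign of the test would flip.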
