Sample records for cloud classification system

  1. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and the beat classification algorithm is optimized with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, the beat classification algorithm is parallelized with CUDA so that it can execute on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia Database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while the algorithm runs about 2.5 times faster than a CPU-only detection algorithm.
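
    The beat classification step described above is, at heart, a nearest-neighbour search over labelled training beats. The sketch below shows that k-NN step in plain NumPy, vectorized in the spirit of the paper's CUDA parallelization; the feature dimension, value of k and class labels are illustrative assumptions rather than the authors' implementation.

      import numpy as np

      def knn_classify(train_feats, train_labels, test_feats, k=3):
          """Classify each test beat by majority vote among its k nearest
          training beats (Euclidean distance). Illustrative sketch only."""
          # Pairwise squared distances in one vectorized step (the paper
          # parallelizes this search on virtualized GPUs with CUDA).
          d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=2)
          nearest = np.argsort(d2, axis=1)[:, :k]        # indices of the k nearest beats
          votes = train_labels[nearest]                  # labels of those beats
          return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

      # Hypothetical example: 200 training beats, 5 test beats, 16 features each,
      # two classes (0 = normal, 1 = arrhythmic).
      rng = np.random.default_rng(0)
      Xtr = rng.normal(size=(200, 16)); ytr = rng.integers(0, 2, size=200)
      Xte = rng.normal(size=(5, 16))
      print(knn_classify(Xtr, ytr, Xte, k=5))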

  2. An automated cirrus classification

    NASA Astrophysics Data System (ADS)

    Gryspeerdt, Edward; Quaas, Johannes; Sourdeval, Odran; Goren, Tom

    2017-04-01

    Cirrus clouds play an important role in determining the radiation budget of the Earth, but our understanding of the lifecycle of and controls on cirrus clouds remains incomplete. Cirrus clouds can have very different properties and development depending on their environment, particularly during their formation. However, the relevant factors often cannot be distinguished using commonly retrieved satellite data products (such as cloud optical depth). In particular, the initial cloud phase has been identified as an important factor in cloud development, but although back-trajectory based methods can provide information on the initial cloud phase, they are computationally expensive and depend on the cloud parametrisations used in re-analysis products. In this work, a classification system (Identification and Classification of Cirrus, IC-CIR) is introduced. Using re-analysis and satellite data, cirrus clouds are separated into four main types: frontal, convective, orographic and in-situ. The properties of these classes show that this classification is able to provide useful information on the properties and initial phase of cirrus clouds, information that could not be provided by instantaneous satellite-retrieved cloud properties alone. This classification is designed to be easily implemented in global climate models, helping to improve future comparisons between observations and models and reducing the uncertainty in cirrus cloud properties, leading to improved cloud parametrisations.

  3. Standoff detection of bioaerosols over wide area using a newly developed sensor combining a cloud mapper and a spectrometric LIF lidar

    NASA Astrophysics Data System (ADS)

    Buteau, Sylvie; Simard, Jean-Robert; Roy, Gilles; Lahaie, Pierre; Nadeau, Denis; Mathieu, Pierre

    2013-10-01

    A standoff sensor called BioSense was developed to demonstrate the capacity to map, track and classify bioaerosol clouds from a distant range and over a wide area. The concept of the system is based on a two-step dynamic surveillance: 1) cloud detection using an infrared (IR) scanning cloud mapper and 2) cloud classification based on a staring ultraviolet (UV) Laser Induced Fluorescence (LIF) interrogation. The system can be operated either in an automatic surveillance mode or with manual intervention. The automatic surveillance operation includes several steps: mission planning, sensor deployment, background monitoring, surveillance, cloud detection, classification and finally alarm generation based on the classification result. One of the main challenges is the classification step, which relies on a spectrally resolved UV LIF signature library. The construction of this library currently relies on in-chamber releases of various materials that are simultaneously characterized with the standoff sensor and referenced with point sensors such as the Aerodynamic Particle Sizer® (APS). The system was tested at three different locations in order to evaluate its capacity to operate in diverse types of surroundings and various environmental conditions. The system generally showed good performance even though troubleshooting of the system was not completed before initiating the Test and Evaluation (T&E) process. The standoff system performance appeared to be highly dependent on the type of challenge, on the climatic conditions and on the time of day. The real-time results, combined with the experience acquired during the 2012 T&E, allowed future improvements and avenues of investigation to be identified.

  4. Pattern recognition of satellite cloud imagery for improved weather prediction

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.

    1986-01-01

    The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.

  5. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral LiDAR system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver on the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are post-processed together with GNSS/IMU data, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne LiDAR sensor, the Optech Titan system is capable of collecting point cloud data in three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral LiDAR point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. An overall accuracy of over 90% is achieved using multispectral LiDAR point clouds for 3D land cover classification.

  6. Cloud Type Classification (cldtype) Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, Donna; Shi, Yan; Lim, K-S

    The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal resolution and vertical resolution for cldtype are 1 minute and 30 m, respectively, and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.
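
    The classification itself is a rule table over cloud base, top and thickness. The sketch below illustrates that kind of lookup with invented threshold values; the real site-specific thresholds are those in Tables 1 and 2 of the product documentation and are not reproduced here.

      def classify_layer(base_km, top_km):
          """Assign one cloud layer to a coarse type from its base, top and
          thickness. Threshold values are illustrative, not the ARM site values."""
          thickness = top_km - base_km
          if base_km >= 7.0:
              return "cirrus"
          if base_km >= 3.0:
              return "altocumulus" if thickness < 1.5 else "altostratus"
          # Low-based layers
          if thickness >= 6.0:
              return "deep_convective"
          if thickness >= 2.0:
              return "nimbostratus"
          return "stratus" if thickness < 0.5 else "cumulus"

      # One profile with several detected layers (base, top) in km, lowest first.
      layers = [(0.8, 1.1), (4.2, 5.0), (9.5, 10.2)]
      print([classify_layer(b, t) for b, t in layers])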

  7. An automated cirrus classification

    NASA Astrophysics Data System (ADS)

    Gryspeerdt, Edward; Quaas, Johannes; Goren, Tom; Klocke, Daniel; Brueck, Matthias

    2018-05-01

    Cirrus clouds play an important role in determining the radiation budget of the earth, but many of their properties remain uncertain, particularly their response to aerosol variations and to warming. Part of the reason for this uncertainty is the dependence of cirrus cloud properties on the cloud formation mechanism, which itself is strongly dependent on the local meteorological conditions. In this work, a classification system (Identification and Classification of Cirrus or IC-CIR) is introduced to identify cirrus clouds by the cloud formation mechanism. Using reanalysis and satellite data, cirrus clouds are separated into four main types: orographic, frontal, convective and synoptic. Through a comparison to convection-permitting model simulations and back-trajectory-based analysis, it is shown that these observation-based regimes can provide extra information on the cloud-scale updraughts and the frequency of occurrence of liquid-origin ice, with the convective regime having higher updraughts and a greater occurrence of liquid-origin ice compared to the synoptic regimes. Despite having different cloud formation mechanisms, the radiative properties of the regimes are not distinct, indicating that retrieved cloud properties alone are insufficient to completely describe them. This classification is designed to be easily implemented in GCMs, helping improve future model-observation comparisons and leading to improved parametrisations of cirrus cloud processes.

  8. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection.

    PubMed

    Ahn, Junho; Han, Richard

    2016-05-23

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users' daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period.

  9. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection

    PubMed Central

    Ahn, Junho; Han, Richard

    2016-01-01

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users’ daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period. PMID:27223292

  10. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.

    PubMed

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-12-16

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observations are also susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, making it hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.
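
    The sparse representation-based classification (SRC) core of the method codes a test sample over a dictionary of training samples and assigns the class with the smallest class-wise reconstruction residual. The sketch below shows plain SRC with orthogonal matching pursuit on synthetic data; it omits the paper's adaptive fuzzy membership weighting, which would additionally reweight the dictionary atoms.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_predict(D, labels, y, n_nonzero=10):
          """Plain SRC: sparse-code y over dictionary D (columns = training samples),
          then return the class whose atoms reconstruct y with the least residual."""
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
          omp.fit(D, y)
          x = omp.coef_
          residuals = {}
          for c in np.unique(labels):
              xc = np.where(labels == c, x, 0.0)       # keep only class-c coefficients
              residuals[c] = np.linalg.norm(y - D @ xc)
          return min(residuals, key=residuals.get)

      # Hypothetical toy data: 60 training patches from 3 cloud classes, 100-dim features.
      rng = np.random.default_rng(1)
      D = rng.normal(size=(100, 60)); labels = np.repeat([0, 1, 2], 20)
      y = D[:, 5] + 0.05 * rng.normal(size=100)         # close to a class-0 atom
      print(src_predict(D, labels, y))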

  11. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    PubMed Central

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observations are also susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, making it hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  12. The problem of regime summaries of the data from radar observations. [for cloud system identification

    NASA Technical Reports Server (NTRS)

    Divinskaya, B. S.; Salman, Y. M.

    1975-01-01

    Peculiarities of the radar information about clouds are examined in comparison with visual data. An objective radar classification is presented and the relation of it to the meteorological classification is shown. The advisability of storage and summarization of the primary radar data for regime purposes is substantiated.

  13. An Improved Cloud Classification Algorithm for China’s FY-2C Multi-Channel Images Using Artificial Neural Network

    PubMed Central

    Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang

    2009-01-01

    The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China's first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely used Artificial Neural Network (ANN) methods are analyzed, together with a comparison to two other methods, Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from three FY-2C channels (IR1, 10.3–11.3 μm; IR2, 11.5–12.5 μm; and WV, 6.3–7.6 μm). The results show that: (1) ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and the Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud patch-level classification, by more accurately identifying cloud types such as cumulonimbus, cirrus and high-latitude clouds. Findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products. PMID:22346714
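
    A self-organizing map is trained by repeatedly pulling the best-matching unit and its neighbours toward each input vector. The toy implementation below shows that update rule on random vectors standing in for the multi-channel brightness features; the grid size, learning rate and neighbourhood schedule are arbitrary choices, not the configuration used in the study.

      import numpy as np

      def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
          """Train a small self-organizing map and return its weight grid."""
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.normal(size=(h, w, data.shape[1]))
          ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          for t in range(epochs):
              lr = lr0 * np.exp(-t / epochs)              # decaying learning rate
              sigma = sigma0 * np.exp(-t / epochs)        # shrinking neighbourhood
              for x in rng.permutation(data):
                  d = ((weights - x) ** 2).sum(axis=2)    # distance to every unit
                  bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
                  g = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
                  weights += lr * g[..., None] * (x - weights)     # pull units toward x
          return weights

      # Hypothetical stand-in for per-pixel features from three satellite channels.
      rng = np.random.default_rng(2)
      samples = rng.normal(size=(300, 3))
      som = train_som(samples)
      print(som.shape)   # (6, 6, 3): each unit becomes a prototype cloud class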

  14. An Improved Cloud Classification Algorithm for China's FY-2C Multi-Channel Images Using Artificial Neural Network.

    PubMed

    Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang

    2009-01-01

    The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China's first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely used Artificial Neural Network (ANN) methods are analyzed, together with a comparison to two other methods, Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from three FY-2C channels (IR1, 10.3-11.3 μm; IR2, 11.5-12.5 μm; and WV, 6.3-7.6 μm). The results show that: (1) ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and the Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud patch-level classification, by more accurately identifying cloud types such as cumulonimbus, cirrus and high-latitude clouds. Findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products.

  15. PROCAMS - A second generation multispectral-multitemporal data processing system for agricultural mensuration

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Nalepka, R. F.

    1976-01-01

    PROCAMS (Prototype Classification and Mensuration System) has been designed for the classification and mensuration of agricultural crops (specifically small grains including wheat, rye, oats, and barley) through the use of data provided by Landsat. The system includes signature extension as a major feature and incorporates multitemporal as well as early season unitemporal approaches for using multiple training sites. Also addressed are partial cloud cover and cloud shadows, bad data points and lines, as well as changing sun angle and atmospheric state variations.

  16. Cloud based intelligent system for delivering health care as a service.

    PubMed

    Kaur, Pankaj Deep; Chana, Inderveer

    2014-01-01

    The promising potential of cloud computing and its convergence with technologies such as mobile computing, wireless networks and sensor technologies allows for the creation and delivery of newer types of cloud services. In this paper, we advocate the use of cloud computing for the creation and management of cloud-based health care services. As a representative case study, we design a Cloud Based Intelligent Health Care Service (CBIHCS) that performs real-time monitoring of user health data for diagnosis of chronic illness such as diabetes. Advanced body sensor components are utilized to gather user-specific health data and store it in cloud-based storage repositories for subsequent analysis and classification. In addition, infrastructure-level mechanisms are proposed to provide dynamic resource elasticity for CBIHCS. Experimental results demonstrate that a classification accuracy of 92.59% is achieved with our prototype system and that the predicted patterns of CPU usage offer better opportunities for adaptive resource elasticity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Global Single and Multiple Cloud Classification with a Fuzzy Logic Expert System

    NASA Technical Reports Server (NTRS)

    Welch, Ronald M.; Tovinkere, Vasanth; Titlow, James; Baum, Bryan A.

    1996-01-01

    An unresolved problem in remote sensing concerns the analysis of satellite imagery containing both single and multiple cloud layers. While cloud parameterizations are very important both in global climate models and in studies of the Earth's radiation budget, most cloud retrieval schemes, such as the bispectral method used by the International Satellite Cloud Climatology Project (ISCCP), have no way of determining whether overlapping cloud layers exist in any group of satellite pixels. Coakley (1983) used a spatial coherence method to determine whether a region contained more than one cloud layer. Baum et al. (1995) developed a scheme for detection and analysis of daytime multiple cloud layers using merged AVHRR (Advanced Very High Resolution Radiometer) and HIRS (High-resolution Infrared Radiometer Sounder) data collected during the First ISCCP Regional Experiment (FIRE) Cirrus 2 field campaign. Baum et al. (1995) explored the use of a cloud classification technique based on AVHRR data. This study examines the feasibility of applying the cloud classifier to global satellite imagery.

  18. Ice crystals classification using airborne measurements in mixing phase

    NASA Astrophysics Data System (ADS)

    Sorin Vajaiac, Nicolae; Boscornea, Andreea

    2017-04-01

    This paper presents a case study of ice crystal classification from airborne measurements in mixed-phase clouds. Ice crystal shadows are recorded with the CIP (Cloud Imaging Probe) component of the CAPS (Cloud, Aerosol, and Precipitation Spectrometer) system. The analyzed flight was performed in the south-western part of Romania (between Pietrosani, Ramnicu Valcea, Craiova and Targu Jiu), with a Beechcraft C90 GTX specially equipped with a CAPS system. The temperature during the flight reached a minimum of -35 °C. Such low temperatures allow the formation of ice crystals and influence their shape. For the ice crystal classification presented here, dedicated software, OASIS (Optical Array Shadow Imaging Software), developed by DMT (Droplet Measurement Technologies), was used. The obtained results are, as expected, influenced by the atmospheric and microphysical parameters. The recorded particles were classified into four groups: edge, irregular, round and small.

  19. Automated Visibility & Cloud Cover Measurements with a Solid State Imaging System

    DTIC Science & Technology

    1989-03-01

    GL-TR-89-0061; SIO Ref. 89-7; MPL-U-26/89. Automated Visibility & Cloud Cover Measurements With A Solid State Imaging System; personal author: Richard W. Johnson. The report covers ground-based imaging systems, their optics and control algorithms, and their initial deployment and preliminary application, which are discussed separately.

  20. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploit three features - two height components and one reflectance value - and achieve an overall accuracy of 73%, which is encouraging for further refinement of our approach.
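
    As a rough illustration of classifying points from two height components and one reflectance value, the sketch below trains a generic supervised classifier on a three-column feature matrix. The random-forest choice and the synthetic data are assumptions of this sketch; the paper does not specify this particular setup.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 2000
      # Columns: height above ground, height relative to neighbourhood, reflectance.
      X = np.column_stack([
          rng.uniform(0, 20, n),
          rng.normal(0, 2, n),
          rng.uniform(0, 1, n),
      ])
      # Invented labels (0 = ground, 1 = vegetation, 2 = building) for the sketch.
      y = rng.integers(0, 3, n)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
      print("overall accuracy:", clf.score(Xte, yte))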

  1. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classification of model-free motion data. A nearest-neighbour classifier based on comparison by a Dynamic Time Warping transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of subsequent skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is considered. A motion capture database containing data from 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
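
    Dynamic time warping aligns two motion sequences frame by frame, and when each frame is a small point cloud of joint positions the per-frame cost can be a point-set distance. The sketch below uses a symmetric mean nearest-neighbour distance as a stand-in for the paper's cloud point distance measure; the toy gait sequences are invented.

      import numpy as np

      def point_set_distance(a, b):
          """Symmetric mean nearest-neighbour distance between two point sets
          (illustrative stand-in for the paper's cloud point distance)."""
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
          return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

      def dtw(seq_a, seq_b, dist):
          """Classic dynamic-programming DTW over two sequences of point sets."""
          n, m = len(seq_a), len(seq_b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  c = dist(seq_a[i - 1], seq_b[j - 1])
                  D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      # Two hypothetical gait sequences: 30 frames x 20 joints x 3 coordinates.
      rng = np.random.default_rng(4)
      gait_a = rng.normal(size=(30, 20, 3))
      gait_b = gait_a + 0.1 * rng.normal(size=(30, 20, 3))
      print(dtw(gait_a, gait_b, point_set_distance))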

  2. Classification of cloud fields based on textural characteristics

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1987-01-01

    The present study reexamines the applicability of texture-based features for automatic cloud classification using very high spatial resolution (57 m) Landsat multispectral scanner digital data. It is concluded that cloud classification can be accomplished using only a single visible channel.

  3. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often a critical issue. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications, object detection and classification in real time is needed. The paper presents a proposal for a fast, real-time-capable algorithm for person detection, classification and tracking in panoramic point clouds.

  4. A conceptual weather-type classification procedure for the Philadelphia, Pennsylvania, area

    USGS Publications Warehouse

    McCabe, Gregory J.

    1990-01-01

    A simple method of weather-type classification, based on a conceptual model of pressure systems that pass through the Philadelphia, Pennsylvania, area, has been developed. The only inputs required for the procedure are daily mean wind direction and cloud cover, which are used to index the relative position of pressure systems and fronts to Philadelphia. Daily mean wind-direction and cloud-cover data recorded at Philadelphia, Pennsylvania, from January 1954 through August 1988 were used to categorize daily weather conditions. The conceptual weather types reflect changes in daily air and dew-point temperatures, and changes in monthly mean temperature and monthly and annual precipitation. The weather-type classification produced by using the conceptual model was similar to a classification produced by using a multivariate statistical classification procedure. Even though the conceptual weather types are derived from a small amount of data, they appear to account for the variability of daily weather patterns sufficiently to describe distinct weather conditions for use in environmental analyses of weather-sensitive processes.

  5. Cloud cover typing from environmental satellite imagery. Discriminating cloud structure with Fast Fourier Transforms (FFT)

    NASA Technical Reports Server (NTRS)

    Logan, T. L.; Huning, J. R.; Glackin, D. L.

    1983-01-01

    The use of two-dimensional Fast Fourier Transforms (FFTs) subjected to pattern recognition technology for the identification and classification of low-altitude stratus cloud structure from Geostationary Operational Environmental Satellite (GOES) imagery was examined. The development of a scene-independent pattern recognition methodology, unconstrained by conventional cloud morphological classifications, was emphasized. A technique for extracting cloud shape, direction, and size attributes from GOES visual imagery was developed. These attributes were combined with two statistical attributes (cloud mean brightness, cloud standard deviation), and interrogated using unsupervised clustering and maximum likelihood classification techniques. Results indicate that: (1) the key cloud discrimination attributes are mean brightness, direction, shape, and minimum size; (2) cloud structure can be differentiated at given pixel scales; (3) cloud type may be identifiable at coarser scales; (4) there are positive indications of scene independence which would permit development of a cloud signature bank; (5) edge enhancement of GOES imagery does not appreciably improve cloud classification over the use of raw data; and (6) the GOES imagery must be apodized before generation of FFTs.
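
    The attribute extraction can be illustrated by taking the 2D FFT of an image patch and summarizing its power spectrum together with simple brightness statistics. The radial energy split below is an invented simplification of the shape, direction and size attributes, not the study's actual feature definitions.

      import numpy as np

      def fft_texture_features(patch):
          """Return simple spectral/statistical attributes of a cloud image patch."""
          spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
          h, w = patch.shape
          yy, xx = np.indices(patch.shape)
          r = np.hypot(yy - h / 2, xx - w / 2)            # radial spatial frequency
          low = spectrum[r < min(h, w) / 8].sum()         # coarse-structure energy
          high = spectrum[r >= min(h, w) / 8].sum()       # fine-structure energy
          return {
              "mean_brightness": float(patch.mean()),
              "brightness_std": float(patch.std()),
              "low_freq_fraction": float(low / (low + high)),
          }

      # Hypothetical 64x64 visible-channel image patch.
      rng = np.random.default_rng(5)
      print(fft_texture_features(rng.random((64, 64))))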

  6. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification, and the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method based on learning stable LBPs is proposed, using the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture pattern (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
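
    The idea of stable patterns is to rank rotation-invariant LBP codes by their occurrence frequencies across training images and keep the consistently frequent ones. The sketch below does exactly that with scikit-image's rotation-invariant LBP, using random images and an arbitrary cut-off in place of the paper's cloud database and selection rule.

      import numpy as np
      from skimage.feature import local_binary_pattern

      P, R = 8, 1                                     # 8 neighbours, radius 1

      def lbp_histogram(img):
          """Normalized histogram of rotation-invariant ('ror') LBP codes."""
          codes = local_binary_pattern(img, P, R, method="ror")
          hist, _ = np.histogram(codes, bins=np.arange(2 ** P + 1), density=True)
          return hist

      # Hypothetical training set of ground-based cloud images (8-bit grayscale).
      rng = np.random.default_rng(6)
      train_imgs = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(20)]
      freqs = np.array([lbp_histogram(im) for im in train_imgs])

      # Average the per-image frequency ranks and keep the most stable codes.
      ranks = np.argsort(np.argsort(-freqs, axis=1), axis=1)   # 0 = most frequent
      stable = np.argsort(ranks.mean(axis=0))[:10]             # top-10 stable codes
      print("stable LBP codes:", stable)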

  7. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    USGS Publications Warehouse

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  8. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

    Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performance of automated screening systems.

  9. Road traffic sign detection and classification from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng

    2016-03-01

    Traffic signs are important roadway assets that provide valuable information about the road, helping drivers behave more safely and easily. With the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of the point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on the geometric shape and the pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
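
    The first stage keeps only highly reflective points and then groups them into candidate sign clusters. A minimal version of that filtering-and-grouping step is sketched below; the intensity threshold, the DBSCAN clustering and the synthetic scene are assumptions of this sketch rather than the paper's parameters.

      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(7)
      # Hypothetical mobile LiDAR points: x, y, z and a normalized intensity.
      xyz = rng.uniform(0, 100, size=(50_000, 3))
      intensity = rng.random(50_000)
      # Plant a small retro-reflective "sign" for the sketch.
      xyz[:200] = rng.normal([10.0, 5.0, 2.5], 0.3, size=(200, 3))
      intensity[:200] = rng.uniform(0.9, 1.0, 200)

      candidates = xyz[intensity > 0.85]               # keep highly reflective points
      labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(candidates)
      for lab in set(labels) - {-1}:                   # -1 is DBSCAN noise
          cluster = candidates[labels == lab]
          print("candidate sign at", cluster.mean(axis=0).round(1),
                "with", len(cluster), "points")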

  10. Tree Classification with Fused Mobile Laser Scanning and Hyperspectral Data

    PubMed Central

    Puttonen, Eetu; Jaakkola, Anttoni; Litkey, Paula; Hyyppä, Juha

    2011-01-01

    Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification succeeded with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile collected fused data outperformed single-sensor data in both classification tests and by a significant margin. PMID:22163894

  11. Tree classification with fused mobile laser scanning and hyperspectral data.

    PubMed

    Puttonen, Eetu; Jaakkola, Anttoni; Litkey, Paula; Hyyppä, Juha

    2011-01-01

    Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification succeeded with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile collected fused data outperformed single-sensor data in both classification tests and by a significant margin.

  12. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images have shortcomings: they are highly dependent on light conditions, and the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, the disadvantage is a lack of multispectral information, which remains a challenge in ground-feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate multispectral point clouds. Then, a three-dimensional affine coordinate transformation is used to compare the data increments. Finally, given thresholds on the height and color information are used for classification.

  13. Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment

    NASA Astrophysics Data System (ADS)

    Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-04-01

    Historically it has been difficult to create high-resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depths and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling this gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high-resolution DEM: a geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area, but in this work we have used them to classify the complete Knudedyb tidal inlet system. References: Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements: This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and prediction of morphodynamics in a natural coastal system in response to climate change" (Steno Grant no. 10-081102) and by the Geocenter Denmark through the project "Closing the gap! - Coherent land-water environmental mapping (LAWA)" (Grant no. 4-2015).

  14. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with basic functions only, limiting their usage at a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling timely intervention and more efficient disease management.

  15. Automated cloud classification with a fuzzy logic expert system

    NASA Technical Reports Server (NTRS)

    Tovinkere, Vasanth; Baum, Bryan A.

    1993-01-01

    An unresolved problem in current cloud retrieval algorithms concerns the analysis of scenes containing overlapping cloud layers. Cloud parameterizations are very important both in global climate models and in studies of the Earth's radiation budget. Most cloud retrieval schemes, such as the bispectral method used by the International Satellite Cloud Climatology Project (ISCCP), have no way of determining whether overlapping cloud layers exist in any group of satellite pixels. One promising method uses fuzzy logic to determine whether mixed cloud and/or surface types exist within a group of pixels, such as cirrus, land, and water, or cirrus and stratus. When two or more class types are present, fuzzy logic uses membership values to assign the group of pixels partially to the different class types. The strength of fuzzy logic lies in its ability to work with patterns that may include more than one class, facilitating greater information extraction from satellite radiometric data. The development of the fuzzy logic rule-based expert system involves training the fuzzy classifier with spectral and textural features calculated from accurately labeled 32x32 regions of Advanced Very High Resolution Radiometer (AVHRR) 1.1-km data. The spectral data consist of AVHRR channels 1 (0.55-0.68 μm), 2 (0.725-1.1 μm), 3 (3.55-3.93 μm), 4 (10.5-11.5 μm), and 5 (11.5-12.5 μm), which include visible, near-infrared, and infrared window regions. The textural features are based on the gray level difference vector (GLDV) method. A sophisticated new Interactive Visual Image Classification System (IVICS) is used to label samples chosen from scenes collected during the FIRE IFO II. The training samples are drawn from predefined classes: ocean, land, unbroken stratiform, broken stratiform, and cirrus. The November 28, 1991 NOAA overpasses contain complex multilevel cloud situations ideal for training and validating the fuzzy logic expert system.

  16. Photometry and Classification of Stars around the Reflection Nebula NGC 7023 in Cepheus. II. Interstellar Extinction and Cloud Distances

    NASA Astrophysics Data System (ADS)

    Zdanavičius, K.; Zdanavičius, J.; Straižys, V.; Maskoliūnas, M.

    Interstellar extinction is investigated in a 1.5 square degree area in the direction of the reflection nebula NGC 7023 at ℓ = 104.1°, b = +14.2°. The study is based on photometric classification and the determination of interstellar extinctions and distances of 480 stars down to V = 16.5 mag from photometry in the Vilnius seven-color system published in Paper I (2008). The investigated area is divided into five smaller subareas with slightly different dependence of the extinction on distance. The distribution of reddened stars is in accordance with the presence of two dust clouds at 282 pc and 715 pc; however, in some directions the dust distribution can be continuous or more clouds can be present.

  17. Comparison of GOES Cloud Classification Algorithms Employing Explicit and Implicit Physics

    NASA Technical Reports Server (NTRS)

    Bankert, Richard L.; Mitrescu, Cristian; Miller, Steven D.; Wade, Robert H.

    2009-01-01

    Cloud-type classification based on multispectral satellite imagery data has been widely researched and demonstrated to be useful for distinguishing a variety of classes using a wide range of methods. The research described here is a comparison of the classifier output from two very different algorithms applied to Geostationary Operational Environmental Satellite (GOES) data over the course of one year. The first algorithm employs spectral channel thresholding and additional physically based tests. The second algorithm was developed through a supervised learning method with characteristic features of expertly labeled image samples used as training data for a 1-nearest-neighbor classification. The latter's ability to identify classes is also based in physics, but those relationships are embedded implicitly within the algorithm. A pixel-to-pixel comparison analysis was done for hourly daytime scenes within a region in the northeastern Pacific Ocean. Considerable agreement was found in this analysis, with many of the mismatches or disagreements providing insight to the strengths and limitations of each classifier. Depending upon user needs, a rule-based or other postprocessing system that combines the output from the two algorithms could provide the most reliable cloud-type classification.

  18. An Examination of the Nature of Global MODIS Cloud Regimes

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin; Kato, Seiji; Huffman, George J.

    2014-01-01

    We introduce global cloud regimes (previously also referred to as "weather states") derived from cloud retrievals that use measurements by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Aqua and Terra satellites. The regimes are obtained by applying clustering analysis on joint histograms of retrieved cloud top pressure and cloud optical thickness. By employing a compositing approach on data sets from satellites and other sources, we examine regime structural and thermodynamical characteristics. We establish that the MODIS cloud regimes tend to form in distinct dynamical and thermodynamical environments and have diverse profiles of cloud fraction and water content. When compositing radiative fluxes from the Clouds and the Earth's Radiant Energy System instrument and surface precipitation from the Global Precipitation Climatology Project, we find that regimes with a radiative warming effect on the atmosphere also produce the largest implied latent heat. Taken as a whole, the results of the study corroborate the usefulness of the cloud regime concept, reaffirm the fundamental nature of the regimes as appropriate building blocks for cloud system classification, clarify their association with standard cloud types, and underscore their distinct radiative and hydrological signatures.
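
    The regimes are obtained by clustering joint histograms of cloud-top pressure and optical thickness. The sketch below applies a k-means clustering step of that kind to randomly generated histograms; the histogram grid and the number of regimes are arbitrary choices, not those of the MODIS products.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(8)
      # Hypothetical joint CTP-COT histograms: 7 pressure bins x 6 optical-depth bins,
      # one histogram per grid cell, each normalized to total cloud fraction.
      n_cells, shape = 5000, (7, 6)
      hists = rng.random((n_cells, shape[0] * shape[1]))
      hists /= hists.sum(axis=1, keepdims=True)

      n_regimes = 10
      km = KMeans(n_clusters=n_regimes, n_init=10, random_state=0).fit(hists)
      regime_of_cell = km.labels_                                  # regime per grid cell
      centroids = km.cluster_centers_.reshape(n_regimes, *shape)   # mean histogram per regime
      print(np.bincount(regime_of_cell))                           # how often each regime occurs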

  19. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both defining an expressive feature set and extracting topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.

  20. Spatial characteristics of the tropical cloud systems: comparison between model simulation and satellite observations

    NASA Astrophysics Data System (ADS)

    Zhang, Guang J.; Zurovac-Jevtic, Dance; Boer, Erwin R.

    1999-10-01

    A Lagrangian cloud classification algorithm is applied to the cloud fields in the tropical Pacific simulated by a high-resolution regional atmospheric model. The purpose of this work is to assess the model's ability to reproduce the observed spatial characteristics of the tropical cloud systems. The cloud systems are broadly grouped into three categories: deep clouds, mid-level clouds and low clouds. The deep clouds are further divided into mesoscale convective systems and non-mesoscale convective systems. It is shown that the model is able to simulate the total cloud cover for each category reasonably well. However, when the cloud cover is broken down into contributions from cloud systems of different sizes, the simulated cloud size distribution is biased toward large cloud systems, with the contribution from relatively small cloud systems significantly under-represented in the model for both deep and mid-level clouds. The number distribution and area contribution to the cloud cover from mesoscale convective systems are very well simulated compared to the satellite observations, as are low clouds. The dependence of the cloud physical properties on cloud scale is examined. It is found that cloud liquid water path, rainfall, and ocean surface sensible and latent heat fluxes have a clear dependence on cloud type and scale. This is of particular interest to studies of cloud effects on the surface energy budget and the hydrological cycle. The diurnal variation of the cloud population and area is also examined. The model exhibits a varying degree of success in simulating the diurnal variation of cloud number and area. The observed early-morning maximum in cloud cover of deep convective cloud systems is qualitatively simulated. However, the afternoon secondary maximum is missing in the model simulation. The diurnal variation of the tropospheric temperature is well reproduced by the model, while the simulation of the diurnal variation of the moisture field is poor. The implications of this comparison between model simulation and observations for cloud parameterization are discussed.

  1. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    PubMed

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

    The current development of cloud computing is completely changing the paradigm of data knowledge extraction in huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we here propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) with low computational burden, the so-called weighted fast compression distance, is created; it provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform was classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases exceeding the classification provided by the majority class. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge on patient diagnosis.
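
    The weighted fast compression distance itself is not spelled out in the abstract; as a hedged illustration of the family of compression-based similarity measures it belongs to, the sketch below implements the generic normalized compression distance with zlib as the compressor (an arbitrary choice for the example):

      import zlib

      def compressed_len(data: bytes) -> int:
          # Length of the zlib-compressed byte string
          return len(zlib.compress(data, 9))

      def ncd(x: bytes, y: bytes) -> float:
          # Normalized compression distance: small values indicate similar signals
          cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      # Toy stand-ins for two quantized electrogram segments
      a = bytes(range(0, 200, 2)) * 10
      b = bytes(range(1, 201, 2)) * 10
      print(ncd(a, a), ncd(a, b))   # near 0 for identical inputs, larger otherwise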

  2. An Imager Gaussian Process Machine Learning Methodology for Cloud Thermodynamic Phase classification

    NASA Astrophysics Data System (ADS)

    Marchant, B.; Platnick, S. E.; Meyer, K.

    2017-12-01

    The determination of cloud thermodynamic phase from the MODIS and VIIRS instruments is an important first step in cloud optical retrievals, since ice and liquid clouds have different optical properties. To continue improving the cloud thermodynamic phase classification algorithm, a machine-learning approach based on Gaussian processes has been developed. The newly proposed methodology provides cloud phase uncertainty quantification and improves the algorithm's portability between MODIS and VIIRS. We will present new results from comparisons between MODIS and CALIOP v4, as well as for VIIRS.
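
    As a hedged sketch of the Gaussian-process idea (not the operational MODIS/VIIRS algorithm; the features, kernel and training data below are invented for illustration), scikit-learn's GaussianProcessClassifier can be trained on collocated "truth" labels and its predicted probability used as a per-pixel phase uncertainty:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      # Toy features: [brightness-temperature difference, cloud-top temperature (K)]
      X_liquid = rng.normal([ 1.0, 270.0], [0.5, 5.0], size=(200, 2))
      X_ice    = rng.normal([-1.0, 230.0], [0.5, 5.0], size=(200, 2))
      X = np.vstack([X_liquid, X_ice])
      y = np.array([0] * 200 + [1] * 200)        # 0 = liquid, 1 = ice

      gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[1.0, 10.0]))
      gpc.fit(X, y)

      # The class probability doubles as an uncertainty estimate for the phase decision
      p_ice = gpc.predict_proba(np.array([[0.0, 250.0]]))[0, 1]
      print(f"P(ice) = {p_ice:.2f}")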

  3. Cloud-scale genomic signals processing classification analysis for gene expression microarray data.

    PubMed

    Harvey, Benjamin; Soo-Yeon Ji

    2014-01-01

    As the microarray data available to scientists continue to increase in size and complexity, it has become overwhelmingly important to find multiple ways to draw inference, through analysis of DNA/mRNA sequence data, that is useful to scientists. Though there have been many attempts to bring forth biological inference by means of wavelet preprocessing and classification, there has not been a research effort that focuses on cloud-scale classification analysis of microarray data using wavelet thresholding in a cloud environment to identify significantly expressed features. This paper proposes a novel methodology that uses wavelet-based denoising to initialize a threshold for the determination of significantly expressed genes for classification. Additionally, this research was implemented within a cloud-based distributed processing environment. Cloud computing and wavelet thresholding were used for the classification of 14 tumor classes from the Global Cancer Map (GCM). The results proved to be more accurate than using a predefined p-value for differential expression classification. This methodology analyzes wavelet-based threshold features of gene expression in a cloud environment and classifies the expression of samples by analyzing gene patterns, which inform us of biological processes. Moreover, it enables researchers to face the present and forthcoming challenges that may arise in the analysis of large microarray datasets in functional genomics.
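
    A minimal sketch of the wavelet-thresholding building block, using the PyWavelets package, is shown below; the wavelet choice, universal threshold and toy profile are assumptions for illustration and do not reproduce the paper's cloud-scale pipeline:

      import numpy as np
      import pywt

      def wavelet_threshold(signal, wavelet="db4", level=3):
          # Soft-threshold the detail coefficients with the universal threshold
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest level
          thr = sigma * np.sqrt(2 * np.log(len(signal)))
          denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[:len(signal)]

      # Toy expression profile: smooth trend plus noise
      x = np.linspace(0, 4 * np.pi, 256)
      noisy = np.sin(x) + 0.3 * np.random.randn(256)
      clean = wavelet_threshold(noisy)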

  4. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and fed to a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, for the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features, i.e. a feature vector. The feature vector is then input into the feature normalization module to be normalized and balanced before being fed to the neural net classifier for classification. The neural net can be trained with actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training is resumed until the network has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. A back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.
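
    The singular-value feature idea can be sketched as follows (class names, network size and toy clusters are illustrative assumptions, not the manuscript's configuration): each 3D cluster is reduced to the singular values of its centered coordinates, normalized, and passed to a small neural network classifier.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler

      def singular_value_features(cluster):
          # Singular values of the centered 3D cluster act as a coarse shape descriptor
          centered = cluster - cluster.mean(axis=0)
          return np.linalg.svd(centered, compute_uv=False)

      rng = np.random.default_rng(1)
      poles = [rng.normal(0, [0.1, 0.1, 2.0], size=(100, 3)) for _ in range(50)]  # elongated clusters
      blobs = [rng.normal(0, [1.0, 1.0, 1.0], size=(100, 3)) for _ in range(50)]  # isotropic clusters

      X = np.array([singular_value_features(c) for c in poles + blobs])
      y = np.array([0] * 50 + [1] * 50)

      X = StandardScaler().fit_transform(X)          # feature normalization step
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
      print(clf.predict(X[:3]))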

  5. Cloud cover analysis with Arctic Advanced Very High Resolution Radiometer data. II - Classification with spectral and textural measures

    NASA Technical Reports Server (NTRS)

    Key, J.

    1990-01-01

    The spectral and textural characteristics of polar clouds and surfaces for a 7-day summer series of AVHRR data in two Arctic locations are examined, and the results used in the development of a cloud classification procedure for polar satellite data. Since spatial coherence and texture sensitivity tests indicate that a joint spectral-textural analysis based on the same cell size is inappropriate, cloud detection with AVHRR data and surface identification with passive microwave data are first done on the pixel level as described by Key and Barry (1989). Next, cloud patterns within 250-sq-km regions are described, then the spectral and local textural characteristics of cloud patterns in the image are determined and each cloud pixel is classified by statistical methods. Results indicate that both spectral and textural features can be utilized in the classification of cloudy pixels, although spectral features are most useful for the discrimination between cloud classes.

  6. Examining the NZESM Cloud representation with Self Organizing Maps

    NASA Astrophysics Data System (ADS)

    Schuddeboom, Alex; McDonald, Adrian; Parsons, Simon; Morgenstern, Olaf; Harvey, Mike

    2017-04-01

    Several different cloud regimes are identified from MODIS satellite data, and the representation of these regimes within the New Zealand Earth System Model (NZESM) is examined. For the development of our cloud classification we apply a neural network algorithm known as self-organizing maps (SOMs) to MODIS cloud-top pressure / cloud optical thickness joint histograms. To evaluate the representation of cloud within the NZESM, the frequency and geographical distribution of the regimes are compared between the NZESM and satellite data. This approach has the advantage of not only identifying differences, but also potentially giving additional information about a discrepancy, such as in which regions or phases of cloud the differences are most prominent. To allow a more direct comparison between datasets, the COSP satellite simulation software is applied to NZESM output. COSP works by simulating, within the GCM, the observational processes linked to a satellite, so that data can be generated in a way that shares the particular observational biases of specific satellites. By taking the COSP joint histograms and comparing them to our existing classifications, we can easily search for discrepancies between the observational data and the simulations without having to be cautious of biases introduced by the satellite. Preliminary results, based on data for 2008, show a significant decrease in overall cloud fraction in the NZESM compared to the MODIS satellite data. To better understand the nature of this discrepancy, the cloud fractions related to different cloud heights and phases were also analysed.
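
    A minimal sketch of training a self-organizing map on flattened cloud-top-pressure / optical-thickness joint histograms is given below using the third-party MiniSom package; the map size, histogram dimensions and random inputs are assumptions, not the MODIS/NZESM analysis itself.

      import numpy as np
      from minisom import MiniSom   # pip install minisom

      rng = np.random.default_rng(0)
      # Toy stand-ins for 7 x 6-bin joint histograms, flattened and normalized
      histograms = rng.random((5000, 42))
      histograms /= histograms.sum(axis=1, keepdims=True)

      som = MiniSom(x=3, y=4, input_len=42, sigma=1.0, learning_rate=0.5, random_seed=0)
      som.random_weights_init(histograms)
      som.train_random(histograms, num_iteration=10000)

      # The best-matching map node of each histogram defines its cloud regime
      regimes = np.array([som.winner(h) for h in histograms])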

  7. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne laser scanning systems with light detection and ranging (LiDAR) technology are among the fast and accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity data (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated to generate LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10 % in the classification results can be achieved by using the proposed approach.

  8. Investigating the differences of cirrus cloud properties in nucleation, growth and sublimation regions based on airborne water vapor lidar measurements

    NASA Astrophysics Data System (ADS)

    Urbanek, Benedikt; Groß, Silke; Wirth, Martin

    2017-04-01

    Cirrus clouds impose high uncertainties on weather and climate prediction, as knowledge of important processes is still incomplete. For instance, it remains unclear how cloud optical, microphysical, and radiative properties change as the cirrus evolves. To gain a better understanding of cirrus clouds, their optical and microphysical properties and their changes with cirrus cloud evolution, the ML-CIRRUS campaign was conducted in March and April 2014. Measurements with a combined in-situ and remote sensing payload were performed with the German research aircraft HALO, based in Oberpfaffenhofen. Sixteen research flights with altogether 88 flight hours were performed over the North Atlantic and western and central Europe to probe different cirrus cloud regimes and cirrus clouds at different stages of evolution. One of the key remote sensing instruments during ML-CIRRUS was the airborne differential absorption and high spectral resolution lidar system WALES. It measures the two-dimensional distribution of water vapor inside and outside of cirrus clouds as well as the optical properties of the clouds. Based on these airborne lidar measurements, a novel classification scheme to derive the stage of cirrus cloud evolution was developed. It identifies regions of ice nucleation, particle growth by deposition of water vapor, and ice sublimation. This method is used to investigate differences in the distribution and magnitude of optical properties as well as in the distribution of water vapor and relative humidity depending on the stage of evolution of the cloud. We will present the lidar-based classification scheme and its application to a wave-driven cirrus cloud case, and we will show first results on the dependence of optical cloud properties and relative humidity distributions on the determined stage of evolution.

  9. A quantitative analysis of IRAS maps of molecular clouds

    NASA Technical Reports Server (NTRS)

    Wiseman, Jennifer J.; Adams, Fred C.

    1994-01-01

    We present an analysis of IRAS maps of five molecular clouds: Orion, Ophiuchus, Perseus, Taurus, and Lupus. For the classification and description of these astrophysical maps, we use a newly developed technique which considers all maps of a given type to be elements of a pseudometric space. For each physical characteristic of interest, this formal system assigns a distance function (a pseudometric) to the space of all maps: this procedure allows us to measure quantitatively the difference between any two maps and to order the space of all maps. We thus obtain a quantitative classification scheme for molecular clouds. In this present study we use the IRAS continuum maps at 100 and 60 micrometer(s) to produce column density (or optical depth) maps for the five molecular cloud regions given above. For this sample of clouds, we compute the 'output' functions which measure the distribution of density, the distribution of topological components, the self-gravity, and the filamentary nature of the clouds. The results of this work provide a quantitative description of the structure in these molecular cloud regions. We then order the clouds according to the overall environmental 'complexity' of these star-forming regions. Finally, we compare our results with the observed populations of young stellar objects in these clouds and discuss the possible environmental effects on the star-formation process. Our results are consistent with the recently stated conjecture that more massive stars tend to form in more 'complex' environments.

  10. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on very high point density LIDAR point clouds, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
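
    The "whole point cloud into a single image" step can be sketched as a simple elevation rasterization (the cell size and the choice of a minimum-elevation channel are assumptions here, not necessarily the authors' exact conversion):

      import numpy as np

      def cloud_to_min_elevation_image(points, cell=1.0, nodata=np.nan):
          # Rasterize a point cloud into one image holding the minimum elevation per cell
          xy, z = points[:, :2], points[:, 2]
          origin = xy.min(axis=0)
          cols, rows = np.ceil((xy.max(axis=0) - origin) / cell).astype(int) + 1
          image = np.full((rows, cols), np.inf)
          c, r = ((xy - origin) / cell).astype(int).T
          np.minimum.at(image, (r, c), z)        # lowest return per cell (likely ground)
          image[np.isinf(image)] = nodata        # empty cells
          return image

      pts = np.random.rand(10000, 3) * [100.0, 100.0, 20.0]   # toy 100 m x 100 m tile
      img = cloud_to_min_elevation_image(pts, cell=1.0)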

  11. Cloud classification from satellite data using a fuzzy sets algorithm: A polar example

    NASA Technical Reports Server (NTRS)

    Key, J. R.; Maslanik, J. A.; Barry, R. G.

    1988-01-01

    Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
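
    For reference, the membership update at the core of fuzzy c-means (each observation belongs to every cluster with a weight that falls off with distance to the cluster center) can be written compactly in NumPy; this is a generic FCM loop, not the AVHRR processing of the paper:

      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
          # Returns cluster centers and the membership matrix U (rows sum to 1)
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
              U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
          return centers, U

      X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + [5, 5]])
      centers, U = fuzzy_c_means(X, c=2)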

  12. Classification of Clouds and Deep Convection from GEOS-5 Using Satellite Observations

    NASA Technical Reports Server (NTRS)

    Putman, William; Suarez, Max

    2010-01-01

    With the increased resolution of global atmospheric models and the push toward global cloud resolving models, model output has become strikingly similar to satellite observations. As we progress with our adaptation of the Goddard Earth Observing System Model, Version 5 (GEOS-5) as a high-resolution cloud-system-resolving model, evaluation of cloud properties and deep convection requires in-depth analysis beyond a visual comparison. Outgoing long-wave radiation (OLR) provides a sufficient comparison with infrared (IR) satellite imagery to isolate areas of deep convection. We have adopted a binning technique to generate a series of histograms of OLR which classify the presence and fraction of clear sky versus deep convection in the tropics; these can be compared with a similar analysis of IR imagery from composite Geostationary Operational Environmental Satellite (GOES) observations. We will present initial results that have been used to evaluate the amount of deep convective parameterization required within the model as we move toward cloud-system-resolving resolutions of 10 to 1 km globally.
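
    As a toy illustration of the OLR-binning idea (the bin edges below are arbitrary assumptions, not GEOS-5 thresholds), a histogram over an OLR field directly yields the fractions of deep convection, other cloud and clear sky:

      import numpy as np

      # Synthetic tropical OLR field in W m^-2
      olr = np.random.normal(260.0, 40.0, size=(180, 720)).clip(80.0, 340.0)

      # Illustrative edges: deep convection below 180 W m^-2, clear sky above 280 W m^-2
      edges = [80.0, 180.0, 280.0, 340.0]
      counts, _ = np.histogram(olr, bins=edges)
      fractions = counts / counts.sum()
      print(f"deep convection {fractions[0]:.1%}, other cloud {fractions[1]:.1%}, "
            f"clear sky {fractions[2]:.1%}")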

  13. AIRS Subpixel Cloud Characterization Using MODIS Cloud Products.

    NASA Astrophysics Data System (ADS)

    Li, Jun; Menzel, W. Paul; Sun, Fengying; Schmit, Timothy J.; Gurka, James

    2004-08-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS's) Aqua satellite enable improved global monitoring of the distribution of clouds. MODIS is able to provide, at high spatial resolution (1-5 km), a cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), and cloud optical thickness (COT). AIRS is able to provide CTP, ECA, CPS, and COT at coarser spatial resolution (13.5 km at nadir) but with much better accuracy using its high-spectral-resolution measurements. The combined MODIS-AIRS system offers the opportunity for improved cloud products over those possible from either system alone. The key steps for synergistic use of imager and sounder radiance measurements are 1) collocation in space and time and 2) imager cloud amount, type, and phase determination within the sounder pixel. The MODIS and AIRS measurements from the EOS Aqua satellite provide the opportunity to study the synergistic use of advanced imager and sounder measurements. As the first step, the MODIS classification procedure is applied to identify various surface and cloud types within an AIRS footprint. Cloud-layer information (lower, midlevel, or high clouds) and phase information (water, ice, or mixed-phase clouds) within the AIRS footprint are sorted and characterized using MODIS 1-km-spatial-resolution data. The combined MODIS and AIRS data for various scenes are analyzed to study the utility of the synergistic use of high-spatial-resolution imager products and high-spectral-resolution sounder radiance measurements. There is relevance to the optimal use of data from the Advanced Baseline Imager (ABI) and Hyperspectral Environmental Suite (HES) systems, which are to fly on the Geostationary Operational Environmental Satellite (GOES)-R.


  14. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Noise is further reduced by removing small clusters of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, is used for the classification of linear markings, arrow markings and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.

  15. Estimating Cloud Cover

    ERIC Educational Resources Information Center

    Moseley, Christine

    2007-01-01

    The purpose of this activity was to help students understand the percentage of cloud cover and make more accurate cloud cover observations. Students estimated the percentage of cloud cover represented by simulated clouds and assigned a cloud cover classification to those simulations. (Contains 2 notes and 3 tables.)

  16. Cloud cover determination in polar regions from satellite imagery

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Maslanik, J. A.; Key, J. R.

    1987-01-01

    The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways are explored in which this information can be applied to cloud classification, either as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas describing the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.

  17. Using Radar, Lidar, and Radiometer measurements to Classify Cloud Type and Study Middle-Level Cloud Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    2010-06-29

    The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle-level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will also try to develop a new retrieval algorithm to study the more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle-level ice clouds are less studied because of limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another part of the project is to generate a long-term cloud type classification product for the multiple ACRF sites. The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to develop our cloud classification algorithm into a VAP.

  18. Conference on Satellite Meteorology and Oceanography, 6th, Atlanta, GA, Jan. 5-10, 1992, Preprints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The present volume on satellite meteorology and oceanography discusses cloud retrieval from collocated IR sounder data and imaging systems, satellite retrievals of marine stratiform cloud systems, multispectral analysis of satellite observations of smoke and dust, and image and graphical analysis of principal components of satellite sounding channels. Attention is given to an evaluation of results from classification retrieval methods, the use of TOVS radiances, estimation of path radiance on the basis of remotely sensed data, and a reexamination of SST as a predictor for tropical storm intensity. Topics addressed include optimal smoothing of GOES VAS for upper-atmosphere thermal waves, obtaining cloud motion vectors from polar orbiting satellites, the use of cloud relative animation in the analysis of satellite data, and investigations of a polar low using geostationary satellite data.

  19. Classification of forest land attributes using multi-source remotely sensed data

    NASA Astrophysics Data System (ADS)

    Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri

    2016-02-01

    The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and (2) identify the best performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agriculture Organization of the United Nations) classes, main type, site type, peat land type and drainage status. Of special interest was testing different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of LU/LC classes was highly accurate (kappa values 0.90 and 0.91), and the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. In the best classification models of forest site types, both spectral metrics from satellite data and point cloud metrics from ALS were used. In the classification of peat land types, in turn, ALS point cloud metrics played the most important role. Results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.

  20. Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components of the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.

  1. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.

  2. Cloud Impacts on Pavement Temperature in Energy Balance Models

    NASA Astrophysics Data System (ADS)

    Walker, C. L.

    2013-12-01

    Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed relating shortwave radiation variability with road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.

  3. Accurate mobile malware detection and classification in the cloud.

    PubMed

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant player in the smartphone operating system market, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16 %) and an acceptable false positive rate (1.30 %); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94 %. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service.

  4. Augmenting Satellite Precipitation Estimation with Lightning Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahrooghy, Majid; Anantharaj, Valentine G; Younan, Nicolas H.

    2013-01-01

    We have used lightning information to augment the Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network - Cloud Classification System (PERSIANN-CCS). Co-located lightning data are used to segregate cloud patches, segmented from GOES-12 infrared data, into either electrified (EL) or non-electrified (NEL) patches. A set of features is extracted separately for the EL and NEL cloud patches. The features for the EL cloud patches include new features based on the lightning information. The cloud patches are classified and clustered using self-organizing maps (SOM). Then brightness temperature and rain rate (T-R) relationships are derived for the different clusters. Rain rates are estimated for the cloud patches based on their representative T-R relationship. The Equitable Threat Score (ETS) for daily precipitation estimates is improved by almost 12% for the winter season. In the summer, no significant improvements in ETS are noted.

  5. An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli

    2015-10-01

    Changes in vegetation cover, building construction, road networks and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increased use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results; it makes predictions in a way that addresses the uncertainty of the real world. In this paper, we attempt to identify man-made and natural objects in urban areas including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from RGB images. For classification, we use the Laplacian approximation for GP binary classification on the new combined feature space. Multiclass classification has been implemented using a one-vs-all binary classification strategy. Results of applying support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined instead of using each sensor separately. We also found that the GP approach handles the uncertainty in the classification results without compromising accuracy compared to the SVM, which is considered a state-of-the-art classification method.
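
    A hedged sketch of the feature-fusion step follows: LiDAR-derived and color features are concatenated and fed to scikit-learn's GaussianProcessClassifier, whose multiclass mode is one-vs-rest; the feature names, class labels and random data are placeholders, not the paper's actual feature set.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      n = 300
      lidar_feats = rng.random((n, 2))        # e.g. height above ground, planarity
      color_feats = rng.random((n, 3))        # e.g. mean R, G, B of the segment
      X = np.hstack([lidar_feats, color_feats])
      y = rng.integers(0, 3, size=n)          # toy labels: 0 building, 1 tree, 2 road

      gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                      multi_class="one_vs_rest")
      gpc.fit(X, y)
      probs = gpc.predict_proba(X[:5])        # probabilistic output for each class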

  6. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm, SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow), to identify and classify clouds and cloud shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.

  7. Identifying Meteorological Controls on Open and Closed Mesoscale Cellular Convection Associated with Marine Cold Air Outbreaks

    NASA Astrophysics Data System (ADS)

    McCoy, Isabel L.; Wood, Robert; Fletcher, Jennifer K.

    2017-11-01

    Mesoscale cellular convective (MCC) clouds occur in large-scale patterns over the ocean and have important radiative effects on the climate system. An examination of time-varying meteorological conditions associated with satellite-observed open and closed MCC clouds is conducted to illustrate the influence of large-scale meteorological conditions. Marine cold air outbreaks (MCAO) influence the development of open MCC clouds and the transition from closed to open MCC clouds. MCC neural network classifications on Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2008 are collocated with Clouds and the Earth's Radiant Energy System (CERES) data and ERA-Interim reanalysis to determine the radiative effects of MCC clouds and their thermodynamic environments. Closed MCC clouds are found to have much higher albedo on average than open MCC clouds for the same cloud fraction. Three meteorological control metrics are tested: sea-air temperature difference (ΔT), estimated inversion strength (EIS), and a MCAO index (M). These predictive metrics illustrate the importance of atmospheric surface forcing and static stability for open and closed MCC cloud formation. Predictive sigmoidal relations are found between M and MCC cloud frequency globally and regionally: negative for closed MCC cloud and positive for open MCC cloud. The open MCC cloud seasonal cycle is well correlated with M, while the seasonality of closed MCC clouds is well correlated with M in the midlatitudes and EIS in the tropics and subtropics. M is found to best distinguish open and closed MCC clouds on average over shorter time scales. The possibility of a MCC cloud feedback is discussed.

  8. Characterizing relative humidity with respect to ice in midlatitude cirrus clouds as a function of atmospheric state

    NASA Astrophysics Data System (ADS)

    Dzambo, Andrew M.; Turner, David D.

    2016-10-01

    Midlatitude cirrus cloud macrophysical and microphysical properties have been shown in previous studies to vary seasonally and in various large-scale dynamical regimes, but relative humidity with respect to ice (RHI) within cirrus clouds has not been studied extensively in this context. Using a combination of radiosonde and millimeter-wavelength cloud radar data, we identify 1076 cirrus clouds spanning a 7 year period from 2004 to 2011. These data are separated into five classes using a previously published algorithm that is based largely on synoptic conditions. Using these data and classification scheme, we find that RHI in cirrus clouds varies seasonally. Variations in cirrus cloud RHI exist within the prescribed classifications; however, most of the variations are within the measurement uncertainty. Additionally, with the exception of nonsummer class cirrus, these variations are not statistically significant. We also find that cirrus cloud occurrence is not necessarily correlated with higher observed values of RHI. The structure of RHI in cirrus clouds varies more in thicker clouds, which follows previous studies showing that macrophysical and microphysical variability increases in thicker cirrus clouds.

  9. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. This study aims at automatic 3D building model generation from airborne LiDAR data. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve the classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained for the study area verified that automatic 3D building models can be generated successfully from raw LiDAR point cloud data.
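
    The abstract does not list the hierarchical rules themselves; the sketch below only conveys the flavor of a point-based rule hierarchy (ground by a height threshold above the lowest returns, then buildings versus vegetation by local planarity), with every threshold being an illustrative assumption:

      import numpy as np
      from scipy.spatial import cKDTree

      def rule_based_labels(points, height_thr=2.5, planarity_thr=0.7):
          # 0 = ground, 1 = vegetation, 2 = building
          z = points[:, 2]
          labels = np.full(len(points), 1)
          ground_level = np.quantile(z, 0.05)
          labels[z < ground_level + 0.3] = 0                 # rule 1: lowest returns -> ground

          # rule 2: elevated, locally planar neighborhoods -> building roofs
          tree = cKDTree(points)
          _, idx = tree.query(points, k=15)
          neigh = points[idx] - points[idx].mean(axis=1, keepdims=True)
          cov = np.einsum('nki,nkj->nij', neigh, neigh) / 15
          evals = np.linalg.eigvalsh(cov)[:, ::-1]           # l1 >= l2 >= l3
          planarity = (evals[:, 1] - evals[:, 2]) / np.maximum(evals[:, 0], 1e-12)
          elevated = z > ground_level + height_thr
          labels[elevated & (planarity > planarity_thr)] = 2
          return labels

      labels = rule_based_labels(np.random.rand(5000, 3) * [100, 100, 15])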

  10. Investigation of cloud/water vapor motion winds from geostationary satellite

    NASA Technical Reports Server (NTRS)

    Nieman, Steve; Velden, Chris; Hayden, Kit; Menzel, Paul

    1993-01-01

    Work has been primarily focussed on three tasks: (1) comparison of wind fields produced at MSFC with the CO2 autowind/autoeditor system newly installed in NESDIS operations; (2) evaluation of techniques for improved tracer selection through use of cloud classification predictors; and (3) development of height assignment algorithm with water vapor channel radiances. The contract goal is to improve the CIMSS wind system by developing new techniques and assimilating better existing techniques. The work reported here was done in collaboration with the NESDIS scientists working on the operational winds software, so that NASA funded research can benefit NESDIS operational algorithms.

  11. Robust point cloud classification based on multi-level semantic relationships for urban scenes

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo

    2017-07-01

    The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to overcome the decreased discriminative power of features for classification. However, previous works on the adoption of contextual information are either too restrictive or operate only within a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile, incrementally propagates the classification cues from individual points to the object level, and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of the classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.

  12. A 2dF survey of the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Evans, Christopher J.; Howarth, Ian D.; Irwin, Michael J.; Burnley, Adam W.; Harries, Timothy J.

    2004-09-01

    We present a catalogue of new spectral types for hot, luminous stars in the Small Magellanic Cloud (SMC). The catalogue contains 4161 objects, giving an order-of-magnitude increase in the number of SMC stars with published spectroscopic classifications. The targets are primarily B- and A-type stars (2862 and 853 objects respectively), with one Wolf-Rayet, 139 O-type and 306 FG stars, sampling the main sequence to ~mid-B. The selection and classification criteria are described, and objects of particular interest are discussed, including UV-selected targets from the Ultraviolet Imaging Telescope (UIT) experiment, Be and B[e] stars, `anomalous A supergiants' and composite-spectrum systems. We examine the incidence of Balmer-line emission, and the relationship between Hγ equivalent width and absolute magnitude for BA stars.

  13. Hybrid Automatic Building Interpretation System

    NASA Astrophysics Data System (ADS)

    Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.

    2011-09-01

    HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work largely automatically. The hybrid method uses different sources, intending to exploit the advantages of each particular source. 3D point clouds usually provide good height and surface data, whereas aerial images with high spatial resolution provide important information for edges and detail information for roof objects like dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS works with a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. After that it continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is done. In addition, some roof objects like dormers and chimneys are also extracted based on aerial images and added to the models. In this paper, the methods used are described and some results are presented.

  14. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.

  15. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is an improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.

  16. Integrated Change Detection and Classification in Urban Areas Based on Airborne Laser Scanning Point Clouds.

    PubMed

    Tran, Thi Huong Giang; Ressl, Camillo; Pfeifer, Norbert

    2018-02-03

    This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged for computing features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific for the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify the change. All these features are merged in the points and then training samples are acquired to create the model for supervised classification, which is then applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs of eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.

  17. a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  18. A Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirement of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but they are discrete and unorganized. To classify ground objects from the point cloud, we first construct horizontal grids and vertical layers to organize the data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves with similar features are assigned to the same class, and the points corresponding to those curves are classified accordingly. The whole process is simple but effective, and the approach does not require assistance from other data sources. In this study, the point cloud is classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set to density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31%. The result can help us quickly understand the distribution of various ground objects.
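
    The grid-and-curve workflow outlined above can be sketched roughly as follows, assuming numpy and scikit-learn and using synthetic points; the 3 m / 1 m spacings and the 11 PCA dimensions follow the abstract, but everything else (data, cluster-to-class assignment) is illustrative only.

      # Sketch: bin points into 3 m horizontal grids and 1 m vertical layers,
      # build per-grid vertical density curves, reduce them with PCA and
      # cluster with K-means. Synthetic data, illustrative only.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      pts = rng.uniform([0, 0, 0], [90, 90, 30], size=(50000, 3))  # x, y, z in metres

      gx, gz = 3.0, 1.0                      # horizontal grid and vertical layer spacing
      ix = np.floor(pts[:, 0] / gx).astype(int)
      iy = np.floor(pts[:, 1] / gx).astype(int)
      iz = np.floor(pts[:, 2] / gz).astype(int)
      n_layers = iz.max() + 1

      curves = {}
      for x, y, z in zip(ix, iy, iz):
          curves.setdefault((x, y), np.zeros(n_layers))[z] += 1.0  # density per layer

      grid_keys = list(curves)
      C = np.array([curves[k] / curves[k].sum() for k in grid_keys])  # normalised curves

      C_low = PCA(n_components=11).fit_transform(C)      # 11 dimensions, as in the abstract
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(C_low)
      print(dict(zip(grid_keys[:5], labels[:5])))         # cluster label per grid cell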

  19. Waves on Ice

    Atmospheric Science Data Center

    2013-04-16

    article title: Waves on White: Ice or Clouds? ... like a wavy cloud pattern was actually a wavy pattern on the ice surface. One of MISR's cloud classification products, the Angular Signature ...

  20. Infrared Cloud Imager Development for Atmospheric Optical Communication Characterization, and Measurements at the JPL Table Mountain Facility

    NASA Astrophysics Data System (ADS)

    Nugent, P. W.; Shaw, J. A.; Piazzolla, S.

    2013-02-01

    The continuous demand for high data return in deep space and near-Earth satellite missions has led NASA and international institutions to consider alternative technologies for high-data-rate communications. One solution is the establishment of wide-bandwidth Earth-space optical communication links, which require (among other things) a nearly obstruction-free atmospheric path. Considering the atmospheric channel, the most common and most apparent impairments on Earth-space optical communication paths arise from clouds. Therefore, the characterization of the statistical behavior of cloud coverage for optical communication ground station candidate sites is of vital importance. In this article, we describe the development and deployment of a ground-based, long-wavelength infrared cloud imaging system able to monitor and characterize the cloud coverage. This system is based on a commercially available camera with a 62-deg diagonal field of view. A novel internal-shutter-based calibration technique allows radiometric calibration of the camera, which operates without a thermoelectric cooler. This cloud imaging system provides continuous day-night cloud detection with constant sensitivity. The cloud imaging system also includes data-processing algorithms that calculate and remove atmospheric emission to isolate cloud signatures, and enable classification of clouds according to their optical attenuation. Measurements of long-wavelength infrared cloud radiance are used to retrieve the optical attenuation (cloud optical depth due to absorption and scattering) in the wavelength range of interest from visible to near-infrared, where the cloud attenuation is quite constant. This article addresses the specifics of the operation, calibration, and data processing of the imaging system that was deployed at the NASA/JPL Table Mountain Facility (TMF) in California. Data are reported from July 2008 to July 2010. These data describe seasonal variability in cloud cover at the TMF site, with cloud amount (percentage of cloudy pixels) peaking at just over 51 percent during February, of which more than 60 percent had optical attenuation exceeding 12 dB at wavelengths in the range from the visible to the near-infrared. The lowest cloud amount was found during August, averaging 19.6 percent, and these clouds were mostly optically thin, with low attenuation.

  1. Modis Collection 6 Shortwave-Derived Cloud Phase Classification Algorithm and Comparisons with CALIOP

    NASA Technical Reports Server (NTRS)

    Marchant, Benjamin; Platnick, Steven; Meyer, Kerry; Arnold, George Thomas; Riedi, Jerome

    2016-01-01

    Cloud thermodynamic phase (e.g., ice, liquid) classification is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate-Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial errors in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm and CALIOP. A wholesale improvement is seen for C6 compared to C5.

  2. Cloud, Aerosol, and Volcanic Ash Retrievals Using ATSR and SLSTR with ORAC

    NASA Astrophysics Data System (ADS)

    McGarragh, Gregory; Poulsen, Caroline; Povey, Adam; Thomas, Gareth; Christensen, Matt; Sus, Oliver; Schlundt, Cornelia; Stapelberg, Stefan; Stengel, Martin; Grainger, Don

    2015-12-01

    The Optimal Retrieval of Aerosol and Cloud (ORAC) is a generalized optimal estimation system that retrieves cloud, aerosol and volcanic ash parameters using satellite imager measurements in the visible to infrared. Use of the same algorithm for different sensors and parameters leads to consistency that facilitates inter-comparison and interaction studies. ORAC currently supports ATSR, AVHRR, MODIS and SEVIRI. In this proceeding we discuss the ORAC retrieval algorithm applied to ATSR data including the retrieval methodology, the forward model, uncertainty characterization and discrimination/classification techniques. Application of ORAC to SLSTR data is discussed including the additional features that SLSTR provides relative to the ATSR heritage. The ORAC level 2 and level 3 results are discussed and an application of level 3 results to the study of cloud/aerosol interactions is presented.

  3. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Szoeke, Simon P.

    The investigator and a DOE-supported student [1] performed vertical air velocity and microphysical fall velocity retrievals for VOCALS and CAP-MBL homogeneous clouds, [2] calculated in-cloud and cloud-top dissipation and its diurnal cycle for VOCALS, and [3] compared CAP-MBL Doppler cloud radar scenes with the automated classification of Remillard et al. (2012).

  4. Aerosols and polar stratospheric clouds measurements during the EASOE campaign

    NASA Technical Reports Server (NTRS)

    Haner, D.; Godin, S.; Megie, G.; David, C.; Mitev, V.

    1992-01-01

    Preliminary results of observations performed using two different lidar systems during EASOE (the European Arctic Stratospheric Ozone Experiment), which took place in the winter of 1991-1992 in northern hemisphere high-latitude regions, are presented. The first system is a ground-based multiwavelength lidar intended to measure the ozone vertical distribution in the 5 km to 40 km altitude range. It was located in Sodankyla (67 degrees N, 27 degrees E) as part of the ELSA experiment. The objectives of the ELSA cooperative project are to study the relation between polar stratospheric cloud events and ozone depletion with high vertical resolution and temporal continuity, and the evolution of the ozone distribution in relation to the position of the polar vortex. The second system is an airborne backscatter lidar (Leandre) which allows the study of the 3-D structure and optical properties of polar stratospheric clouds. The Leandre instrument is a dual-polarization lidar system emitting at 532 nm, which allows the type of clouds observed to be determined according to the usual classification of polar stratospheric clouds. More than 60 hours of flight were performed in Dec. 1991, and Jan. and Feb. 1992, from Kiruna, Sweden. The operation of the Leandre instrument has led to the observation of the short-scale variability of the Pinatubo volcanic cloud in the high-latitude regions and of several episodes of polar stratospheric clouds. A preliminary analysis of the data is presented.

  5. Classification of Patient Care Complexity: Cloud Technology.

    PubMed

    de Oliveira Riboldi, Caren; Macedo, Andrea Barcellos Teixeira; Mergen, Thiane; Dias, Vera Lúcia Mendes; da Costa, Diovane Ghignatti; Malvezzi, Maria Luiza Falsarella; Magalhães, Ana Maria Muller; Silveira, Denise Tolfo

    2016-01-01

    Presentation of the computerized structure used to implement, in a university hospital in the south of Brazil, the Perroca Patient Classification System, which categorizes patients according to care complexity. This solution also aims to corroborate a recent study at the hospital, which showed that the increasing workload is directly related to the institutional quality indicators. The tools used were high-productivity Google applications, interconnecting the domain knowledge of the nursing professionals and the information technology professionals.

  6. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification of the building, vegetation and road classes.

  7. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, together with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. Feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation are reviewed. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  8. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool that automates the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  9. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives either through onboard GPS (Global Positioning System) geotagging, or by tying the models to GCPs (Ground Control Points) acquired in the field. Currently, UAS (Unmanned Aerial System) derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and then applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was conducted to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing the validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distances between the CANUPO and manual skeleton results were around 0.67 meters for both, with a standard deviation of 1.73.
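
    The registration step at the heart of this workflow can be illustrated with a single rigid-body fit between matched points; the study itself relied on the ICP implementation in CloudCompare, so the following numpy sketch (a Kabsch/SVD fit on invented matched pairs) is only a stand-in for one ICP alignment step.

      # Sketch: estimate the rigid transform (R, t) that best maps matched
      # UAS skeleton points onto their LiDAR counterparts (Kabsch / SVD fit).
      # A full ICP repeats this after re-matching nearest neighbours.
      import numpy as np

      def rigid_fit(src, dst):
          """Least-squares rotation R and translation t so that dst ~ src @ R.T + t."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = c_dst - R @ c_src
          return R, t

      # Invented matched pairs (e.g. building corners picked in both clouds)
      uas = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, 0.0], [0.0, 5.0, 2.0]])
      true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      lidar = uas @ true_R.T + np.array([100.0, 200.0, 50.0])

      R, t = rigid_fit(uas, lidar)
      print(np.allclose(uas @ R.T + t, lidar))   # True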

  10. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    PubMed Central

    Dai, Jin; Liu, Xin

    2014-01-01

    The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concepts extracted from texts of the same category are jumped up into a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers over different feature selection sets fully demonstrates that CCJU-TC not only adapts well to different text features, but also outperforms traditional classifiers. PMID:24711737

  11. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
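
    The multi-scale neighbourhood features this kind of method depends on can be sketched as follows, assuming scipy and numpy; the radii and the covariance-eigenvalue descriptors (linearity, planarity, scattering) are common choices in this literature and are not necessarily the exact feature set used by the authors.

      # Sketch: eigenvalue-based shape features of a point's neighbourhood,
      # computed at several radii with a k-d tree. Illustrative feature set.
      import numpy as np
      from scipy.spatial import cKDTree

      def neighbourhood_features(points, radii=(0.25, 0.5, 1.0)):
          tree = cKDTree(points)
          feats = np.zeros((len(points), 3 * len(radii)))
          for j, r in enumerate(radii):
              for i, p in enumerate(points):
                  idx = tree.query_ball_point(p, r)
                  if len(idx) < 3:
                      continue
                  cov = np.cov(points[idx].T)
                  w = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
                  l1, l2, l3 = np.maximum(w, 1e-12)
                  feats[i, 3 * j:3 * j + 3] = [(l1 - l2) / l1,    # linearity
                                               (l2 - l3) / l1,    # planarity
                                               l3 / l1]           # scattering
          return feats

      pts = np.random.default_rng(2).uniform(0, 5, size=(2000, 3))
      F = neighbourhood_features(pts)
      print(F.shape)   # (2000, 9): three features at each of three radii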

  12. Cloud-Scale Genomic Signals Processing for Robust Large-Scale Cancer Genomic Microarray Data Analysis.

    PubMed

    Harvey, Benjamin Simeon; Ji, Soo-Yeon

    2017-01-01

    As the microarray data available to scientists continue to increase in size and complexity, it has become overwhelmingly important to find ways of drawing oncological inferences for the bioinformatics community from the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data. Though there have been many attempts to provide biological interpretation by means of wavelet preprocessing and classification, no research effort has focused on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold that retains significantly expressed genes through the denoising process for robust classification of cancer patients. Additionally, the overall study was implemented within a CSDP environment. Cloud computing and wavelet-based thresholding for denoising were used for the classification of samples within the Global Cancer Map, the Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results showed that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased the computational performance and enabled the generation of higher quality LSCG microarray datasets, which led to more accurate classification results.
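
    The wavelet-thresholding idea can be illustrated on a single node as follows, assuming the PyWavelets package and a synthetic expression profile; the wavelet, decomposition level and threshold rule are illustrative, and the cloud-scale distribution of the computation is not reproduced here.

      # Sketch: 1-D wavelet decomposition, soft thresholding of detail
      # coefficients, and reconstruction of a denoised profile.
      import numpy as np
      import pywt

      rng = np.random.default_rng(3)
      signal = np.repeat(rng.normal(0, 1, 64), 16)          # blocky "expression" profile
      noisy = signal + rng.normal(0, 0.4, signal.size)

      coeffs = pywt.wavedec(noisy, "db4", level=4)           # approximation + details
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # robust noise estimate
      thresh = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
      denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                       for c in coeffs[1:]]
      denoised = pywt.waverec(denoised_coeffs, "db4")

      print(round(float(np.mean((noisy - signal) ** 2)), 4),
            round(float(np.mean((denoised[:signal.size] - signal) ** 2)), 4))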

  13. Cloud field classification based on textural features

    NASA Technical Reports Server (NTRS)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and textural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution, which are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrence of the grey level pair (I,J) at consecutive, thresholded local extremes separated by a given pixel distance d. Textural measures are then computed from this matrix in much the same manner as in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near-IR visible channel. The classification algorithm used is the well-known stepwise discriminant analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes of features. Preliminary results based on the GLDV textural features alone look promising.
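
    The GLDV statistics described above can be sketched as follows, assuming numpy and an 8-bit grey-level image; the particular statistics shown (mean, contrast, entropy, angular second moment) are typical choices, and the thresholded MaxMin cooccurrence matrix is not reproduced here.

      # Sketch: grey level difference vector (GLDV) features for a horizontal
      # pixel displacement d, from the histogram of absolute grey-level
      # differences. Illustrative statistics only.
      import numpy as np

      def gldv_features(img, d=1, levels=256):
          diff = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int)).ravel()
          hist = np.bincount(diff, minlength=levels).astype(float)
          p = hist / hist.sum()                              # P(D = k)
          k = np.arange(levels)
          nz = p > 0
          return {
              "mean": float((k * p).sum()),
              "contrast": float((k ** 2 * p).sum()),
              "entropy": float(-(p[nz] * np.log2(p[nz])).sum()),
              "asm": float((p ** 2).sum()),                  # angular second moment
          }

      img = np.random.default_rng(4).integers(0, 256, size=(64, 64), dtype=np.uint8)
      print(gldv_features(img, d=1))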

  14. Photometry and Classification of Stars in the Direction of the Dark Cloud TGU 619 in Cepheus. I. A Catalog of Magnitudes, Color Indices and Spectral Types of 1304 Stars

    NASA Astrophysics Data System (ADS)

    Zdanavičius, K.; Zdanavičius, J.; Straižys, V.; Maskoliūnas, M.

    The catalog contains magnitudes and color indices of 1304 stars down to ~16.6 mag in V, measured in the seven-color Vilnius photometric system in an area of 1.5 square degrees centered at Galactic coordinates 102.4°, +15.5°, containing the dark cloud TGU 619 in the Cepheus Flare. For most of the stars, spectral and luminosity classes determined from the photometric data are given.

  15. Applications of UAS-SfM for coastal vulnerability assessment: Geomorphic feature extraction and land cover classification from fine-scale elevation and imagery data

    NASA Astrophysics Data System (ADS)

    Sturdivant, E. J.; Lentz, E. E.; Thieler, E. R.; Remsen, D.; Miner, S.

    2016-12-01

    Characterizing the vulnerability of coastal systems to storm events, chronic change and sea-level rise can be improved with high-resolution data that capture timely snapshots of biogeomorphology. Imagery acquired with unmanned aerial systems (UAS) coupled with structure from motion (SfM) photogrammetry can produce high-resolution topographic and visual reflectance datasets that rival or exceed lidar and orthoimagery. Here we compare SfM-derived data to lidar and visual imagery for their utility in a) geomorphic feature extraction and b) land cover classification for coastal habitat assessment. At a beach and wetland site on Cape Cod, Massachusetts, we used UAS to capture photographs over a 15-hectare coastal area with a resulting pixel resolution of 2.5 cm. We used standard SfM processing in Agisoft PhotoScan to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM). The SfM-derived products have a horizontal uncertainty of +/- 2.8 cm. Using the point cloud in an extraction routine developed for lidar data, we determined the position of shorelines, dune crests, and dune toes. We used the output imagery and DEM to map land cover with a pixel-based supervised classification. The dense and highly precise SfM point cloud enabled extraction of geomorphic features with greater detail than with lidar. The feature positions are reported with near-continuous coverage and sub-meter accuracy. The orthomosaic image produced with SfM provides visual reflectance with higher resolution than that available from aerial flight surveys, which enables visual identification of small features and thus aids the training and validation of the automated classification. We find that the high resolution and correspondingly high density of UAS data require some simple modifications to existing measurement techniques and processing workflows, and that the types of data and the quality provided are equivalent to, and in some cases surpass, those of data collected using other methods.

  16. Epilepsy analytic system with cloud computing.

    PubMed

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytics systems have played an important role in clinical diagnosis for several decades. Analyzing these big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely the wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training time is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.

  17. Lidar-based individual tree species classification using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. To date, however, tree species are still specified manually by skilled workers. Previous work on automatic tree species classification has mainly focused on aerial or satellite images, and few classification techniques using ground-based sensor data have been reported. Several candidate sensors can be considered for classification, such as RGB or multi/hyper-spectral cameras. Above all candidates, we use terrestrial lidar because it can obtain high-resolution point clouds even in dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each species and does not change its appearance under seasonal variation or aging. In this paper, we propose a new method for automatic individual tree species classification based on terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that describes the characteristics of each species well from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.

  18. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, which makes classification, extraction, evaluation and analysis of a broad range of object features available. High resolution imagery, Lidar datasets and parcel maps can be widely used for classification as information carriers. Therefore, refinement of object classification is made possible for urban land cover. The paper presents an approach to object based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM respectively. Secondly, image objects are created via multi-resolution image segmentation, integrating the scale parameter and the colour and shape properties with a compactness criterion. The image is thus subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a knowledge-based decision tree rule set. The objects are classified into six classes: water bodies, low vegetation/grass, tree, low building, high building and road. Finally, in order to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and the Kappa coefficient. The study area is the Vaihingen/Enz test site, and the test dataset comes from the ISPRS WG III/4 benchmark project. The classification results show high overall accuracy for most types of urban land cover. The overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high resolution imagery and Lidar ancillary data for classification of urban land cover.
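
    The accuracy-assessment step can be illustrated with a short computation of overall accuracy and the Kappa coefficient from a confusion matrix; the following numpy sketch uses an invented three-class matrix rather than the Vaihingen results.

      # Sketch: overall accuracy and Cohen's Kappa from a confusion matrix
      # (rows = reference classes, columns = predicted classes). Counts invented.
      import numpy as np

      cm = np.array([[50,  3,  2],
                     [ 4, 60,  6],
                     [ 1,  5, 69]], dtype=float)

      n = cm.sum()
      overall_accuracy = np.trace(cm) / n
      expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
      kappa = (overall_accuracy - expected) / (1.0 - expected)
      print(round(overall_accuracy, 3), round(kappa, 3))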

  19. QoS-aware health monitoring system using cloud-based WBANs.

    PubMed

    Almashaqbeh, Ghada; Hayajneh, Thaier; Vasilakos, Athanasios V; Mohd, Bassam J

    2014-10-01

    Wireless Body Area Networks (WBANs) are amongst the best options for remote health monitoring. However, as standalone systems WBANs have many limitations due to the large amount of processed data, the mobility of monitored users, and the network coverage area. Integrating WBANs with cloud computing provides effective solutions to these problems and improves the performance of WBAN-based systems. Accordingly, in this paper we propose a cloud-based real-time remote health monitoring system for tracking the health status of non-hospitalized patients while they practice their daily activities. Compared with existing cloud-based WBAN frameworks, we divide the cloud into a local one, which includes the monitored users and local medical staff, and a global one that includes the outside world. The performance of the proposed framework is optimized by reducing congestion, interference, and data delivery delay while supporting users' mobility. Several novel techniques and algorithms are proposed to accomplish our objective. First, the concept of data classification and aggregation is utilized to avoid clogging the network with unnecessary data traffic. Second, a dynamic channel assignment policy is developed to distribute the WBANs associated with the users over the available frequency channels to manage interference. Third, a delay-aware routing metric is proposed to be used by the local cloud in its multi-hop communication to speed up the reporting of health-related data. Fourth, the delay-aware metric is further utilized by the association protocols used by the WBANs to connect with the local cloud. Finally, the system with all the proposed techniques and algorithms is evaluated using extensive ns-2 simulations. The simulation results show superior performance of the proposed architecture in optimizing the end-to-end delay, handling increased interference levels, maximizing the network capacity, and tracking users' mobility.

  20. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The advantage of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast octree-based region growing for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.

  1. Modeling and parameterization of horizontally inhomogeneous cloud radiative properties

    NASA Technical Reports Server (NTRS)

    Welch, R. M.

    1995-01-01

    One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds under special consideration for FIRE exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.

  2. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection, using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
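
    One of the speed-up strategies mentioned, random Fourier features, can be sketched in a few lines; the following numpy illustration uses a ridge-regression read-out as a stand-in for the full GP machinery, and the kernel width, feature count and data are invented.

      # Sketch: random Fourier features approximating an RBF kernel, followed by
      # a linear ridge fit, as a scalable surrogate for exact kernel regression.
      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.uniform(-3, 3, size=(5000, 4))                   # e.g. sounder radiances
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 5000)

      gamma, D = 0.5, 300                                      # RBF width, feature count
      W = rng.normal(0, np.sqrt(2 * gamma), size=(X.shape[1], D))
      b = rng.uniform(0, 2 * np.pi, D)
      Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)                 # E[z(x) z(x')] ~ k(x, x')

      lam = 1e-3
      alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
      pred = Z @ alpha
      print(round(float(np.mean((pred - y) ** 2)), 4))         # training mean squared error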

  3. An AVHRR Cloud Classification Database Typed by Experts

    DTIC Science & Technology

    1993-10-01

    analysis. Naval Research Laboratory, Monterey, CA. 110 pp. Gallaudet, Timothy C. and James J. Simpson, 1991: Automated cloud screening of AVHRR imagery ... (1987) and Saunders and Kriebel (1988a,b) have used threshold techniques to classify clouds. Gallaudet and Simpson (1991) have used split-and-merge

  4. Clouds and Climate Change. Understanding Global Change: Earth Science and Human Impacts. Global Change Instruction Program.

    ERIC Educational Resources Information Center

    Shaw, Glenn E.

    The Global Change Instruction Program was designed by college professors to fill a need for interdisciplinary materials on the emerging science of global change. This instructional module introduces the basic features and classifications of clouds and cloud cover, and explains how clouds form, what they are made of, what roles they play in…

  5. SEMANTIC3D.NET: A New Large-Scale Point Cloud Classification Benchmark

    NASA Astrophysics Data System (ADS)

    Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.

    2017-05-01

    This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to a lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds, with a much higher overall number of labelled points, than those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.

  6. Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.

    NASA Astrophysics Data System (ADS)

    Miller, Shawn William

    The relation between cloud type and rain rate has been investigated here from different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm which combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement in the combined algorithm approach when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute averaged radar rain rates at 8km resolution produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or in the eastern Gulf of Mexico.

  7. Detection and Classification of Pole-Like Objects from Mobile Mapping Data

    NASA Astrophysics Data System (ADS)

    Fukano, K.; Masuda, H.

    2015-08-01

    Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.

  8. The effects of cloud inhomogeneities upon radiative fluxes, and the supply of a cloud truth validation dataset

    NASA Technical Reports Server (NTRS)

    Welch, Ronald M.

    1993-01-01

    A series of cloud and sea ice retrieval algorithms are being developed in support of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Science Team objectives. These retrievals include the following: cloud fractional area, cloud optical thickness, cloud phase (water or ice), cloud particle effective radius, cloud top heights, cloud base height, cloud top temperature, cloud emissivity, cloud 3-D structure, cloud field scales of organization, sea ice fractional area, sea ice temperature, sea ice albedo, and sea surface temperature. Due to the problems of accurately retrieving cloud properties over bright surfaces, an advanced cloud classification method was developed which is based upon spectral and textural features and artificial intelligence classifiers.

  9. A Comparative Study of YSO Classification Techniques using WISE Observations of the KR 120 Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Kang, Sung-Ju; Kerton, C. R.

    2014-01-01

    KR 120 (Sh2-187) is a small Galactic HII region located at a distance of 1.4 kpc that shows evidence for triggered star formation in the surrounding molecular cloud. We present an analysis of the young stellar object (YSO) population of the molecular cloud as determined using a variety of classification techniques. YSO candidates are selected from the WISE all-sky catalog and classified as Class I, Class II and Flat based on 1) spectral index, 2) color-color or color-magnitude plots, and 3) spectral energy distribution (SED) fits to radiative transfer models. We examine the discrepancies in YSO classification between the various techniques and explore how these discrepancies lead to uncertainty in scientifically interesting quantities such as the ratio of Class I to Class II sources and the surface density of YSOs at various stages of evolution.
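
    The spectral-index criterion (technique 1 above) can be sketched directly; the following numpy illustration assumes the WISE band centres near 3.4, 4.6, 12 and 22 microns and the commonly used Greene et al. (1994)-style class boundaries, which may differ from the exact cuts adopted in this study.

      # Sketch: least-squares spectral index alpha = d log(lambda*F_lambda) / d log(lambda)
      # over the four WISE bands, with illustrative class boundaries.
      import numpy as np

      wavelengths_um = np.array([3.4, 4.6, 12.0, 22.0])       # WISE W1-W4 band centres

      def spectral_index(flux_jy):
          """Slope of log(lambda*F_lambda) = log(nu*F_nu) versus log(lambda);
          constant factors cancel in the slope, so F_nu/lambda is sufficient."""
          x = np.log10(wavelengths_um)
          y = np.log10(np.asarray(flux_jy) / wavelengths_um)
          alpha, _ = np.polyfit(x, y, 1)
          return alpha

      def classify(alpha):
          if alpha >= 0.3:
              return "Class I"
          if alpha >= -0.3:
              return "Flat"
          if alpha >= -1.6:
              return "Class II"
          return "Class III"

      fluxes = [0.02, 0.05, 0.30, 1.10]                       # illustrative fluxes in Jy
      a = spectral_index(fluxes)
      print(round(a, 2), classify(a))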

  10. Using High-Resolution Satellite Observations for Evaluation of Cloud and Precipitation Statistics from Cloud-Resolving Model Simulations. Part I: South China Sea Monsoon Experiment

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Hou, A.; Lau, W. K.; Shie, C.; Tao, W.; Lin, X.; Chou, M.; Olson, W. S.; Grecu, M.

    2006-05-01

    The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model during the South China Sea Monsoon Experiment (SCSMEX) are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements and with Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) radiation and cloud retrievals. It is found that the GCE is capable of simulating major convective system development and reproducing the total surface rainfall amount as compared with rainfall estimated from the soundings. Mesoscale organization is adequately simulated except when the environmental wind shear is very weak. The partitions between convective and stratiform rain are also close to the TMI and PR classifications. However, the model-simulated rain spectrum is quite different from either the TMI or PR measurements. The model produces more heavy rain and more light rain (less than 0.1 mm/hr) than the observations. The model also produces heavier vertical hydrometeor profiles of rain and graupel when compared with TMI retrievals and PR radar reflectivity. Comparison of GCE-simulated OLR and cloud properties with CERES measurements shows that the model has a much larger domain-averaged OLR, due to a smaller total cloud fraction and a more skewed distribution of OLR and cloud top than the CERES observations, indicating that the model's cloud field is not widespread, consistent with the model's precipitation activity. These results will be used as guidance for improving the model's microphysics.

  11. Precipitation regimes over central Greenland inferred from 5 years of ICECAPS observations

    NASA Astrophysics Data System (ADS)

    Pettersen, Claire; Bennartz, Ralf; Merrelli, Aronne J.; Shupe, Matthew D.; Turner, David D.; Walden, Von P.

    2018-04-01

    A novel method for classifying Arctic precipitation using ground based remote sensors is presented. Using differences in the spectral variation of microwave absorption and scattering properties of cloud liquid water and ice, this method can distinguish between different types of snowfall events depending on the presence or absence of condensed liquid water in the clouds that generate the precipitation. The classification reveals two distinct, primary regimes of precipitation over the Greenland Ice Sheet (GIS): one originating from fully glaciated ice clouds and the other from mixed-phase clouds. Five years of co-located, multi-instrument data from the Integrated Characterization of Energy, Clouds, Atmospheric state, and Precipitation at Summit (ICECAPS) are used to examine cloud and meteorological properties and patterns associated with each precipitation regime. The occurrence and accumulation of the precipitation regimes are identified and quantified. Cloud and precipitation observations from additional ICECAPS instruments illustrate distinct characteristics for each regime. Additionally, reanalysis products and back-trajectory analysis show different synoptic-scale forcings associated with each regime. Precipitation over the central GIS exhibits unique microphysical characteristics due to the high surface elevations as well as connections to specific large-scale flow patterns. Snowfall originating from the ice clouds is coupled to deep, frontal cloud systems advecting up and over the southeast Greenland coast to the central GIS. These events appear to be associated with individual storm systems generated by low pressure over Baffin Bay and Greenland lee cyclogenesis. Snowfall originating from mixed-phase clouds is shallower and has characteristics typical of supercooled cloud liquid water layers, and slowly propagates from the south and southwest of Greenland along a quiescent flow above the GIS.

  12. Classification of Dual-Wavelength Airborne Laser Scanning Point Cloud Based on the Radiometric Properties of the Objects

    NASA Astrophysics Data System (ADS)

    Pilarska, M.

    2018-05-01

    Airborne laser scanning (ALS) is a well-known and widely used technology. One of its primary advantages is fast and accurate data acquisition. In recent years ALS has been continuously developed. One of the latest achievements is multispectral ALS, which acquires data simultaneously at more than one laser wavelength. In this article the results of dual-wavelength ALS data classification are presented. The data were acquired with the RIEGL VQ-1560i sensor, which is equipped with two laser scanners operating at different wavelengths: 532 nm and 1064 nm. Two classification approaches are presented in the article: a classification based on geometric relationships between points, and a classification that relies mostly on the radiometric properties of the registered objects. The overall accuracy of the geometric classification was 86%, whereas for the radiometric classification it was 81%. As a result, it can be assumed that the radiometric features provided by multispectral ALS have the potential to be successfully used in ALS point cloud classification.

  13. A service brokering and recommendation mechanism for better selecting cloud services.

    PubMed

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation of computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA), offers rational recommendations based on user preferences and practical cloud provisioning, and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).

  14. Thermal bioaerosol cloud tracking with Bayesian classification

    NASA Astrophysics Data System (ADS)

    Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.

    2017-05-01

    The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Grounds. Standoff detection at a range of 700m was achieved for as little as 500g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
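
    The nearly-constant-velocity tracking step can be illustrated with a small Kalman filter; the following numpy sketch uses a 2-D image-plane state (position and velocity) and invented noise settings rather than the parameters of the fielded system.

      # Sketch: Kalman filter with a nearly constant velocity motion model for a
      # tracked cloud centroid in image coordinates. Noise levels are illustrative.
      import numpy as np

      dt = 1.0
      F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
                    [0, 1, 0, dt],
                    [0, 0, 1,  0],
                    [0, 0, 0,  1]], dtype=float)
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)  # we observe position only
      Q = 0.01 * np.eye(4)                        # process noise
      R = 0.5 * np.eye(2)                         # measurement noise

      x = np.array([0.0, 0.0, 1.0, 0.5])          # initial state
      P = np.eye(4)

      rng = np.random.default_rng(6)
      for k in range(20):
          z = np.array([k * 1.0, k * 0.5]) + rng.normal(0, 0.7, 2)  # noisy centroid
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(4) - K @ H) @ P

      print(np.round(x, 2))   # estimated position and velocity after 20 frames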

  15. Strategies for cloud-top phase determination: differentiation between thin cirrus clouds and snow in manual (ground truth) analyses

    NASA Astrophysics Data System (ADS)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1996-12-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switch to a nighttime-only transmission with the launch of future NOAA satellites.

  16. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLU) replace the traditional sigmoid as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons through dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the introduced ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
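
    A minimal sketch of a ReLU network with dropout of the kind described above, written here in PyTorch; the feature dimension, layer sizes, dropout rate, and class count are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

# Minimal per-point classifier in the spirit of the ReLu-NN described above.
# Feature dimension, layer sizes, dropout rate, and class count are assumed.
n_features, n_classes = 32, 8

model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),                             # ReLU instead of sigmoid for faster convergence
    nn.Dropout(p=0.5),                     # dropout against over-fitting on sparse features
    nn.Linear(64, n_classes),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a random batch (stand-in for real per-point descriptors).
X = torch.randn(256, n_features)
y = torch.randint(0, n_classes, (256,))
optimizer.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
optimizer.step()
print(float(loss))
```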

  17. Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.

    2018-04-01

    The presented experiment investigates the potential of Multispectral Laser Scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate an MLS sensor by combining two different Lidar sensors providing three different wavelengths. The available data were acquired in the summer of 2016 at the same date in a leaf-on condition with an average point density of 37 points/m2. For the purpose of classification, we segmented the combined 3D point clouds consisting of three different spectral channels into 3D clusters using a Normalized Cut segmentation approach. Then, we extracted four groups of features from the 3D point cloud space. Once a variety of features had been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logistic regression with L1 regularization. Our study is conducted using 586 ground-measured single trees from 20 sample plots in the Bavarian Forest National Park, in Germany. Due to the lack of reference data for some rare species, we focused on four classes of species. The results show an improvement of 4-10 percentage points for the tree species classification by using MLS data in comparison to a single-wavelength based approach. A cross-validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from three different spectral channels are used. Our results clearly indicate that the use of MLS point clouds has great potential to improve detailed forest species mapping.
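
    A minimal sketch of the classification stage described above, assuming scikit-learn: L1-regularized multinomial logistic regression scored with 15-fold cross-validation. The placeholder features, labels, and the regularization strength C are not from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for the selected per-tree features:
# 586 trees, 40 candidate features, 4 species classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(586, 40))
y = rng.integers(0, 4, size=586)

# L1-penalized multinomial logistic regression; C=1.0 is an assumed default.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000),
)
scores = cross_val_score(clf, X, y, cv=15)   # 15-fold CV as reported in the abstract
print(scores.mean())
```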

  18. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing - 1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band overlaps with the bands used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.
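
    The decision rules of the on-board cryosphere classifier are not given in this abstract; the sketch below only illustrates the general idea of per-pixel snow/water/ice/land rules on the six listed bands, using conventional index-style thresholds (e.g., an NDSI-like snow test) that are assumptions, not the flight code.

```python
import numpy as np

# Illustrative per-pixel rules for a snow/water/ice/land classifier on the six
# Hyperion bands listed above (427, 559, 661, 864, 1245, 1649 nm). The indices
# and thresholds are conventional placeholders, not the ASE algorithm.

def classify_pixel(r427, r559, r661, r864, r1245, r1649):
    ndsi = (r559 - r1649) / (r559 + r1649 + 1e-6)   # snow/ice is bright in VIS, dark in SWIR
    ndvi = (r864 - r661) / (r864 + r661 + 1e-6)
    if ndsi > 0.4 and r864 > 0.11:
        return "snow"
    if r864 < 0.05 and ndvi < 0.0:
        return "water"                              # dark in NIR, non-vegetated
    if ndsi > 0.2:
        return "ice"
    return "land"

# Example on two synthetic reflectance vectors (427 ... 1649 nm).
pixels = np.array([
    [0.80, 0.75, 0.70, 0.65, 0.20, 0.10],           # bright, SWIR-dark: snow-like
    [0.05, 0.04, 0.03, 0.02, 0.01, 0.01],           # dark everywhere: water-like
])
print([classify_pixel(*p) for p in pixels])
```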

  19. Detection and Retrieval of Multi-Layered Cloud Properties Using Satellite Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jian-Ping; Nguyen, Louis; Khaiyer, Mandana M.

    2005-01-01

    Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.

  20. Detection and retrieval of multi-layered cloud properties using satellite data

    NASA Astrophysics Data System (ADS)

    Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jianping; Nguyen, Louis; Khaiyer, Mandana M.

    2005-10-01

    Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.

  1. Cloud and aerosol studies using combined CPL and MAS data

    NASA Astrophysics Data System (ADS)

    Vaughan, Mark A.; Rodier, Sharon; Hu, Yongxiang; McGill, Matthew J.; Holz, Robert E.

    2004-11-01

    Current uncertainties in the role of aerosols and clouds in the Earth's climate system limit our abilities to model the climate system and predict climate change. These limitations are due primarily to difficulties in adequately measuring aerosols and clouds on a global scale. The A-train satellites (Aqua, CALIPSO, CloudSat, PARASOL, and Aura) will provide an unprecedented opportunity to address these uncertainties. The various active and passive sensors of the A-train will use a variety of measurement techniques to provide comprehensive observations of the multi-dimensional properties of clouds and aerosols. However, fully achieving the potential of this ensemble requires a robust data analysis framework to optimally and efficiently map these individual measurements into a comprehensive set of cloud and aerosol physical properties. In this work we introduce the Multi-Instrument Data Analysis and Synthesis (MIDAS) project, whose goal is to develop a suite of physically sound and computationally efficient algorithms that will combine active and passive remote sensing data in order to produce improved assessments of aerosol and cloud radiative and microphysical properties. These algorithms include (a) the development of an intelligent feature detection algorithm that combines inputs from both active and passive sensors, and (b) the identification of recognizable multi-instrument signatures related to aerosol and cloud type derived from clusters of image pixels and the associated vertical profile information. Classification of these signatures will lead to the automated identification of aerosol and cloud types. Testing of these new algorithms is done using currently existing and readily available active and passive measurements from the Cloud Physics Lidar and the MODIS Airborne Simulator, which simulate, respectively, the CALIPSO and MODIS A-train instruments.

  2. The effects of cloud inhomogeneities upon radiative fluxes, and the supply of a cloud truth validation dataset

    NASA Technical Reports Server (NTRS)

    Welch, Ronald M.

    1996-01-01

    The ASTER polar cloud mask algorithm is currently under development. Several classification techniques have been developed and implemented. The merits and accuracy of each are being examined. The classification techniques under investigation include fuzzy logic, hierarchical neural network, and a pairwise histogram comparison scheme based on sample histograms called the Paired Histogram Method. Scene adaptive methods also are being investigated as a means to improve classifier performance. The feature, arctan of Band 4 and Band 5, and the Band 2 vs. Band 4 feature space are key to separating frozen water (e.g., ice/snow, slush/wet ice, etc.) from cloud over frozen water, and land from cloud over land, respectively. A total of 82 Landsat TM circumpolar scenes are being used as a basis for algorithm development and testing. Numerous spectral features are being tested and include the 7 basic Landsat TM bands, in addition to ratios, differences, arctans, and normalized differences of each combination of bands. A technique for deriving cloud base and top height is developed. It uses 2-D cross correlation between a cloud edge and its corresponding shadow to determine the displacement of the cloud from its shadow. The height is then determined from this displacement, the solar zenith angle, and the sensor viewing angle.
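
    A simplified version of the cloud-height-from-shadow geometry described above, assuming a flat surface and that the sun and sensor lie in the same vertical plane; the sign convention and the example numbers are illustrative only.

```python
import math

# Cloud height from the horizontal cloud-to-shadow displacement d, assuming a
# flat surface and sun/sensor in the same vertical plane. For a nadir-viewing
# sensor this reduces to h = d / tan(solar zenith). The sign convention for a
# tilted view is an illustrative assumption, not the ASTER algorithm itself.

def cloud_height(displacement_m, solar_zenith_deg, view_zenith_deg=0.0):
    """Cloud height (m) from the measured cloud-shadow displacement (m)."""
    tan_sun = math.tan(math.radians(solar_zenith_deg))
    tan_view = math.tan(math.radians(view_zenith_deg))
    return displacement_m / (tan_sun + tan_view)

# Example: 3 km displacement at 60 degree solar zenith, near-nadir sensor.
print(round(cloud_height(3000.0, 60.0), 1))          # ~1732 m
```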

  3. An Intercomparison Between Radar Reflectivity and the IR Cloud Classification Technique for the TOGA-COARE Area

    NASA Technical Reports Server (NTRS)

    Carvalho, L. M. V.; Rickenbach, T.

    1999-01-01

    Satellite infrared (IR) and visible (VIS) images from the Tropical Ocean Global Atmosphere - Coupled Ocean Atmosphere Response Experiment (TOGA-COARE) experiment are investigated through the use of Clustering Analysis. The clusters are obtained from the values of IR and VIS counts and the local variance for both channels. The clustering procedure is based on the standardized histogram of each variable obtained from 179 pairs of images. A new approach to classify high clouds using only IR and the clustering technique is proposed. This method allows the separation of the enhanced convection in two main classes: convective tops, more closely related to the most active core of the storm, and convective systems, which produce regions of merged, thick anvil clouds. The resulting classification of different portions of cloudiness is compared to the radar reflectivity field for intensive events. Convective Systems and Convective Tops are followed during their life cycle using the IR clustering method. The areal coverage of precipitation and features related to convective and stratiform rain is obtained from the radar for each stage of the evolving Mesoscale Convective Systems (MCS). In order to compare the IR clustering method with a simple threshold technique, two IR thresholds (Tir) were used to identify different portions of cloudiness, Tir=240K which roughly defines the extent of all cloudiness associated with the MCS, and Tir=220K which indicates the presence of deep convection. It is shown that the IR clustering technique can be used as a simple alternative to identify the actual portion of convective and stratiform rainfall.

  4. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure FWF ALS echoes are segmented by a seeded region growing procedure. All echoes sorted descending by their surface roughness are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original measurements directly, i.e. the acquired points. Gridding of the data, a process which is inherently coupled to loss of data and precision, is not necessary. The 3D properties provide particularly good separability of building and terrain points, even where these are occluded by vegetation. PMID:27873771

  5. Analysis On Land Cover In Municipality Of Malang With Landsat 8 Image Through Unsupervised Classification

    NASA Astrophysics Data System (ADS)

    Nahari, R. V.; Alfita, R.

    2018-01-01

    Remote sensing technology has been widely used in geographic information systems in order to obtain data more quickly, accurately and affordably. One of the advantages of using remote sensing imagery (satellite imagery) is the ability to analyze land cover and land use. Satellite image data used in this study were images from the Landsat 8 satellite combined with data from the Municipality of Malang government. The satellite image was taken in July 2016. Furthermore, the method used in this study was unsupervised classification. Based on the analysis of the satellite images and field observations, 29% of the land in the Municipality of Malang was plantation, 22% of the area was rice field, 12% was residential area, 10% was land with shrubs, and 2% was water (lake/reservoir). The shortcoming of the method was that 25% of the land in the area was unidentified because it was covered by cloud. It is expected that future research will involve cloud removal processing to minimize the unidentified area.

  6. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    PubMed Central

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937

  7. Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data

    NASA Astrophysics Data System (ADS)

    Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.

    2014-12-01

    At present, there are no globally available Earth observation (EO) derived crop map products. This issue is being addressed within the Sentinel-2 for Agriculture initiative where a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq. km) is the presence of clouds and shadows that results in missing values in the data sets. In this abstract, a new approach to classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes an input sample's missing components with the corresponding neuron's weight coefficients. After missing data restoration, a supervised classification is performed for multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of neural networks is done with the average committee technique, i.e., the class probabilities are averaged over the classifiers and the class with the highest average posterior probability is selected for the given input sample. The proposed approach is applied to large-scale crop classification using multi-temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that the ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Sci., vol. 44, no. 5, pp. 67-80, 2012.
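
    A minimal sketch of the average-committee ensembling step described above, assuming scikit-learn MLPs: class probabilities are averaged over the committee and the class with the highest mean posterior is selected. The data, committee size, and layer sizes are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for restored spectral-temporal features:
# 500 training pixels, 24 features, 5 crop classes.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 24))
y_train = rng.integers(0, 5, size=500)
X_new = rng.normal(size=(10, 24))

# Train a small committee of MLPs with different random initialisations.
committee = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# Average committee: mean class probability across members, then argmax.
mean_proba = np.mean([m.predict_proba(X_new) for m in committee], axis=0)
labels = committee[0].classes_[np.argmax(mean_proba, axis=1)]
print(labels)
```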

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Min; Kollias, Pavlos; Feng, Zhe

    The motivation for this research is to develop a precipitation classification and rain rate estimation method using cloud radar-only measurements for Atmospheric Radiation Measurement (ARM) long-term cloud observation analysis, which are crucial and unique for studying cloud lifecycle and precipitation features under different weather and climate regimes. Based on simultaneous and collocated observations of the Ka-band ARM zenith radar (KAZR), two precipitation radars (NCAR S-PolKa and Texas A&M University SMART-R), and surface precipitation during the DYNAMO/AMIE field campaign, a new cloud radar-only based precipitation classification and rain rate estimation method has been developed and evaluated. The resulting precipitation classification is equivalent to that from the collocated SMART-R and S-PolKa observations. Both cloud and precipitation radars detected about 5% precipitation occurrence during this period. The convective (stratiform) precipitation fraction is about 18% (82%). The 2-day collocated disdrometer observations show an increased number concentration of large raindrops in convective rain compared to the dominant concentration of small raindrops in stratiform rain. The composite distributions of KAZR reflectivity and Doppler velocity also show two distinct structures for convective and stratiform rain. These indicate that the method produces physically consistent results for the two types of rain. The cloud radar-only rainfall estimation is developed based on the gradient of accumulated radar reflectivity below 1 km, the near-surface Ze, and collocated surface rainfall (R) measurements. The parameterization is compared with the Z-R exponential relation. The relative difference between estimated and surface-measured rainfall rates shows that the two-parameter relation can improve rainfall estimation.
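
    For reference, the baseline exponential Z-R relation mentioned above can be inverted as R = (Z/a)^(1/b); the sketch below uses the common Marshall-Palmer coefficients as illustrative defaults, not the two-parameter relation developed in this work.

```python
import numpy as np

# Baseline exponential Z-R relation: Z = a * R**b, with Z in linear units
# (mm^6 m^-3) converted from dBZ, so R = (Z / a)**(1 / b). The a, b values
# below are the common Marshall-Palmer coefficients, used only as defaults.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate (mm/h) from radar reflectivity (dBZ) via Z = a R^b."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z_linear / a) ** (1.0 / b)

print(round(float(rain_rate_from_dbz(30.0)), 2))   # ~2.73 mm/h at 30 dBZ
```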

  9. Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System

    NASA Astrophysics Data System (ADS)

    Hong, Yang

    Precipitation estimation from satellite information (visible, IR, or microwave) is becoming increasingly imperative because of its high spatial/temporal resolution and broad coverage unparalleled by ground-based data. After decades of effort on rainfall estimation using IR imagery as the basis, the limitations/uncertainties of the existing techniques have been identified as: (1) pixel-based local-scale feature extraction; (2) an IR temperature threshold to define rain/no-rain clouds; (3) an indirect relationship between rain rate and cloud-top temperature; (4) lumped techniques to model the high variability of cloud-precipitation processes; (5) coarse scales of rainfall products. As a continuation of these studies, a new version of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network (PERSIANN), called the Cloud Classification System (CCS), is developed in this dissertation to cope with these limitations. CCS includes three consecutive components: (1) a hybrid segmentation algorithm, namely Hierarchically Topographical Thresholding and Stepwise Seeded Region Growing (HTH-SSRG), to segment satellite IR images into separated cloud patches; (2) a 3D feature extraction procedure to retrieve both pixel-based local-scale and patch-based large-scale features of cloud patches at various heights; (3) an ANN model, the Self-Organizing Nonlinear Output (SONO) network, to classify cloud patches into similarity-based clusters using a Self-Organizing Feature Map (SOFM), and then calibrate hundreds of multi-parameter nonlinear functions to identify the relationship between each cloud type and its underlying precipitation characteristics using the Probability Matching Method and Multi-Start Downhill Simplex optimization techniques. The model was calibrated over the southwestern United States (100°--130°W and 25°--45°N) first and then adaptively adjusted to the study region of the North American Monsoon Experiment (65°--135°W and 10°--50°N) using observations from Geostationary Operational Environmental Satellite (GOES) IR imagery, the Next Generation Radar (NEXRAD) rainfall network, and Tropical Rainfall Measuring Mission (TRMM) microwave rain rate estimates. CCS functions as a distributed model that first identifies cloud patches and then dispatches the best-matching cloud-precipitation function for each cloud patch to estimate the instantaneous rain rate at high spatial resolution (4 km) and the full temporal resolution of GOES IR images (every 30 minutes). Evaluated over a range of spatial and temporal scales, the performance of CCS consistently compared favorably with the GOES Precipitation Index (GPI), Universal Adjusted GPI (UAGPI), PERSIANN, and Auto-Estimator (AE) algorithms. In particular, the large number of nonlinear functions and optimum IR-rain rate thresholds of the CCS model are highly variable, reflecting the complexity of the dominant cloud-precipitation processes from cloud patch to cloud patch over various regions. As a result, CCS can more successfully capture variability in rain rate at small scales than existing algorithms and potentially provides a rainfall product from GOES IR-NEXRAD-TRMM TMI (SSM/I) at 0.12° x 0.12° and 3-hour resolution with relatively low standard error (≈3.0 mm/hr) and high correlation coefficient (≈0.65).

  10. Seasonal Surface Spectral Emissivity Derived from Terra MODIS Data

    NASA Technical Reports Server (NTRS)

    Sun-Mack, Sunny; Chen, Yan; Minnis, Patrick; Young, DavidF.; Smith, William J., Jr.

    2004-01-01

    The CERES (Clouds and the Earth's Radiant Energy System) Project is measuring broadband shortwave and longwave radiances and deriving cloud properties from various imagers to produce a combined global radiation and cloud property data set. In this paper, simultaneous data from Terra MODIS (Moderate Resolution Imaging Spectroradiometer) taken at 3.7, 8.5, 11.0, and 12.0 µm are used to derive the skin temperature and the surface emissivities at the same wavelengths. The methodology uses separate measurements of clear sky temperature in each channel determined by scene classification during the daytime and at night. The relationships between the various channels at night are used during the day when solar reflectance affects the 3.7-µm radiances. A set of simultaneous equations is then solved to derive the emissivities. Global monthly emissivity maps are derived from Terra MODIS data while numerical weather analyses provide soundings for correcting the observed radiances for atmospheric absorption. These maps are used by CERES and other cloud retrieval algorithms.

  11. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
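
    A minimal sketch of the weighted k-NN step described above, assuming scikit-learn: 45 textural features per sky image and inverse-distance neighbour weighting. The feature values, k, and class names are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 300 labelled sky images, each with 45 textural features,
# and four illustrative cloud classes. The real features come from the
# statistical and spatial-frequency analysis described in the abstract.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(300, 45))
y_train = rng.choice(["CB", "TCU", "stratiform", "clear"], size=300)
X_test = rng.normal(size=(5, 45))

# Weighted k-NN: neighbours vote with inverse-distance weights; k=7 assumed.
knn = KNeighborsClassifier(n_neighbors=7, weights="distance")
knn.fit(X_train, y_train)
print(knn.predict(X_test))
```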

  12. Cloud field classification based upon high spatial resolution textural features. I - Gray level co-occurrence matrix approach

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1988-01-01

    Stratocumulus, cumulus, and cirrus clouds were identified on the basis of cloud textural features which were derived from a single high-resolution Landsat MSS NIR channel using a stepwise linear discriminant analysis. It is shown that, using this method, it is possible to distinguish high cirrus clouds from low clouds with high accuracy on the basis of spatial brightness patterns. The largest probability of misclassification is associated with confusion between the stratocumulus breakup regions and the fair-weather cumulus.
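
    A small example of gray-level co-occurrence matrix (GLCM) texture features of the kind used in this study, assuming a recent scikit-image (graycomatrix/graycoprops); the patch, distances, angles, and feature list are illustrative choices, not the configuration of the 1988 analysis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in 8-bit NIR image patch; in practice this would be a Landsat MSS chip.
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix at distance 1 for two directions, then summary features.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # feature vector that a discriminant analysis could use
```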

  13. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  14. Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas

    NASA Astrophysics Data System (ADS)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2016-06-01

    We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improves the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, respectively, compared to a purely point-based classification.

  15. Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data

    NASA Astrophysics Data System (ADS)

    Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.

    2016-06-01

    Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
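
    A minimal sketch of the per-point NDVI correction step described above: each LiDAR point inherits red and NIR reflectance from the co-registered imagery, and building-labelled points with high NDVI are re-labelled as trees. The 0.3 threshold is an assumption, not the authors' value.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); a small epsilon guards against division by zero.
def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def correct_labels(labels, nir, red, ndvi_threshold=0.3):
    """Relabel 'building' points whose spectral signature looks vegetated."""
    labels = labels.copy()
    vegetated = ndvi(nir, red) > ndvi_threshold   # threshold is an assumption
    labels[(labels == "building") & vegetated] = "tree"
    return labels

# Example: four LiDAR points with per-point NIR/red reflectance from imagery.
labels = np.array(["building", "building", "tree", "ground"])
nir = np.array([0.45, 0.20, 0.50, 0.25])
red = np.array([0.10, 0.18, 0.08, 0.22])
print(correct_labels(labels, nir, red))           # first point flips to "tree"
```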

  16. The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smith, William L.; Ebert, Elizabeth

    1990-01-01

    The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. This data embodied a Polar Cloud Pilot Data Set which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos and brightness temperatures in each grid box. The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloudfree snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed and the results are presented and discussed.

  17. Opalescent and cloudy fruit juices: formation and particle stability.

    PubMed

    Beveridge, Tom

    2002-07-01

    Cloudy fruit juices, particularly from tropical fruit, are becoming a fast-growing part of the fruit juice sector. The classification of cloud as coarse and fine clouds by centrifugation and composition of cloud from apple, pineapple, orange, guava, and lemon juice are described. Fine particulate is shown to be the true stable cloud and to contain considerable protein, carbohydrate, and lipid components. Often, tannin is present as well. The fine cloud probably arises from cell membranes and appears not to be simply cell debris. Factors relating to the stability of fruit juice cloud, including particle sizes, size distribution, and density, are described and discussed. Factors promoting stable cloud in juice are presented.

  18. Soil, water, and vegetation conditions in south Texas

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J.; Everitt, J. H.; Gerbermann, A. H. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Software development for a computer-aided crop and soil survey system is nearing completion. Computer-aided variety classification accuracies using LANDSAT-1 MSS data for a 600 hectare citrus farm were 83% for Redblush grapefruit and 91% for oranges. These accuracies indicate that there is good potential for computer-aided inventories of grapefruit and orange citrus orchards with LANDSAT-type MSS data. Mean digital values of clouds differed statistically from those for crop, soil, and water entities, and those for cloud shadows were sufficiently lower than those for sunlit crop and soil to be distinguishable. The standard errors of estimate for the calibration of the computer compatible tape coordinate system (pixel and record) to the earth coordinate system (longitude and latitude) for 6 LANDSAT scenes ranged from 0.72 to 1.50 pixels and from 0.58 to 1.75 records.

  19. Cloud classification in polar regions using AVHRR textural and spectral signatures

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Weger, R. C.; Christopher, S. A.; Kuo, K. S.; Carsey, F. D.

    1990-01-01

    Arctic clouds and ice-covered surfaces are classified on the basis of textural and spectral features obtained with AVHRR 1.1-km spatial resolution imagery over the Beaufort Sea during May-October, 1989. Scenes were acquired about every 5 days, for a total of 38 cases. A list comprising 20 arctic-surface and cloud classes is compiled using spectral measures defined by Garand (1988).

  20. CMSAF products Cloud Fraction Coverage and Cloud Type used for solar global irradiance estimation

    NASA Astrophysics Data System (ADS)

    Badescu, Viorel; Dumitrescu, Alexandru

    2016-08-01

    Two products provided by the climate monitoring satellite application facility (CMSAF) are the instantaneous Cloud Fractional Coverage (iCFC) and the instantaneous Cloud Type (iCTY) products. Previous studies based on the iCFC product show that the simple solar radiation models belonging to the cloudiness index class n CFC = 0.1-1.0 have rRMSE values ranging between 68 and 71 %. The products iCFC and iCTY are used here to develop simple models providing hourly estimates for solar global irradiance. Measurements performed at five weather stations of Romania (South-Eastern Europe) are used. Two three-class characterizations of the state-of-the-sky, based on the iCTY product, are defined. In case of the first new sky state classification, which is roughly related with cloud altitude, the solar radiation models proposed here perform worst for the iCTY class 4-15, with rRMSE values ranging between 46 and 57 %. The spreading error of the simple models is lower than that of the MAGIC model for the iCTY classes 1-4 and 15-19, but larger for iCTY classes 4-15. In case of the second new sky state classification, which takes into account in a weighted manner the chance for the sun to be covered by different types of clouds, the solar radiation models proposed here perform worst for the cloudiness index class n CTY = 0.7-0.1, with rRMSE values ranging between 51 and 66 %. Therefore, the two new sky state classifications based on the iCTY product are useful in increasing the accuracy of solar radiation models.

  1. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration studies presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEASURES grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be executed "near" the multi-sensor data. Decade-long, multi-sensor studies can be performed without pre-staging data, with the researcher paying only his own Cloud compute bill.

  2. Mapping forest tree species over large areas with partially cloudy Landsat imagery

    NASA Astrophysics Data System (ADS)

    Turlej, K.; Radeloff, V.

    2017-12-01

    Forests provide numerous services to natural systems and humankind, but which services forests provide depends greatly on their tree species composition. That makes it important not only to track changes in forest extent, something that remote sensing excels in, but also to map tree species. The main goal of our work was to map tree species with Landsat imagery, and to identify how to maximize mapping accuracy by including partially cloudy imagery. Our study area covered one Landsat footprint (26/28) in Northern Wisconsin, USA, with temperate and boreal forests. We selected this area because it contains numerous tree species and variable forest composition, providing an ideal study area to test the limits of Landsat data. We quantified how species-level classification accuracy was affected by a) the number of acquisitions, b) the seasonal distribution of observations, and c) the amount of cloud contamination. We classified a single-year stack of Landsat-7 and -8 image data with a decision tree algorithm to generate a map of dominant tree species at the pixel and stand level. We obtained three important results. First, we achieved producer's accuracies in the range 70-80% and user's accuracies in the range 80-90% for the most abundant tree species in our study area. Second, classification accuracy improved with more acquisitions, when observations were available from all seasons, and was best when images with up to 40% cloud cover were included. Finally, classifications for pure stands were 10 to 30 percentage points better than those for mixed stands. We conclude that including partially cloudy Landsat imagery allows mapping of forest tree species with accuracies that were previously only possible for rare years with many cloud-free observations. Our approach thus provides important information for both forest management and science.

  3. Multi-Sensor Investigation of a Regional High-Arctic Cloudy Event

    NASA Astrophysics Data System (ADS)

    Ivanescu, L.; O'Neill, N. T.; Blanchet, J. P.; Baibakov, K.; Chaubey, J. P.; Perro, C. W.; Duck, T. J.

    2014-12-01

    A regional high-Arctic cloud event observed in March, 2011 at the PEARL Observatory, near the Eureka Weather Station (80°N, 86°W), was investigated with a view to better understanding cloud formation mechanisms during the Polar night. We analysed the temporal cloud evolution with a suite of nighttime, ground-based remote sensing (RS) instruments, supplemented by radiosonde profiles and surface weather measurements. The RS suite included Raman lidar, cloud radar, a star-photometer and microwave-radiometers. In order to estimate the spatial extent and vertical variability of the cloud mass, we employed satellite-based lidar (CALIPSO) and radar (CloudSat) profiles in the regional neighbourhood of Eureka (at a latitude of 80°N, Eureka benefits from a high frequency of CALIPSO and CloudSat overpasses). The ground-based and satellite-based observations provide quantitative measurements of extensive (bulk) properties (cloud and aerosol optical depths), and intensive (per particle properties) such as aerosol and cloud particle size as well as shape, density and aggregation phase of the cloud particulates. All observations were then compared with the upper atmosphere NCEP/NCAR reanalyses in order to understand better the synoptic context of the cloud mass dynamics as a function of key meteorological parameters such as upper air temperature and water vapor circulation. Preliminary results indicated the presence of a particular type of thin ice cloud (TIC-2) associated with a deep and stable atmospheric low. A classification into small and large ice crystal size (< 40 μm and > 40 μm, respectively), identifies the clouds as TIC-1 or TIC-2. This classification is hypothesized to be associated with the nature of the aerosols (non-anthropogenic versus anthropogenic) serving as ice nuclei in their formation. Such a distinction has important implications on the initiation of precipitation, removal rate of the cloud particles and, in consequence, the radiative forcing properties on a regional basis.

  4. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNN), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diversity of data formats and inhomogeneous computational environments behind a user friendly web-based interface, BisQue is built around the idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with dropout- and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  5. Physical modeling of 3D and 4D laser imaging

    NASA Astrophysics Data System (ADS)

    Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard

    2010-04-01

    Laser imaging offers potential for observation, for 3D terrain-mapping and classification as well as for target identification, including behind vegetation, camouflage or glass windows, by day and night, and under all-weather conditions. First generation systems deliver 3D point clouds. Threshold detection is largely affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the distances measured, and by partial occultation, leading to multiple echoes. Second generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and the point cloud better matches reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.

  6. SECURE INTERNET OF THINGS-BASED CLOUD FRAMEWORK TO CONTROL ZIKA VIRUS OUTBREAK.

    PubMed

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2017-01-01

    Zika virus (ZikaV) is currently one of the most important emerging viruses in the world; it has caused outbreaks and epidemics and has also been associated with severe clinical manifestations and congenital malformations. Traditional approaches to combat the ZikaV outbreak are not effective for detection and control. The aim of this study is to propose a cloud-based system to prevent and control the spread of Zika virus disease through the integration of mobile phones and the Internet of Things (IoT). A Naive Bayesian Network (NBN) is used to diagnose possibly infected users, and the Google Maps Web service is used to provide geographic positioning system (GPS)-based risk assessment to prevent the outbreak. Each ZikaV-infected user, mosquito-dense site, and breeding site is represented on the Google map, which helps government healthcare authorities to control such risk-prone areas effectively and efficiently. The performance and accuracy of the proposed system are evaluated using a dataset of 2 million users. Our system provides high accuracy for the initial diagnosis of different users according to their symptoms and appropriate GPS-based risk assessment. The proposed cloud-based system contributed to the accurate NBN-based classification of infected users and accurate identification of risk-prone areas using Google Maps.
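
    A hedged sketch of the symptom-based Naive Bayesian classification step only; the symptom names and the tiny training set below are illustrative, not taken from the paper.

        # Illustrative Naive Bayes classification of users by reported symptoms.
        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        # columns: fever, rash, conjunctivitis, joint_pain (binary presence/absence)
        X_train = np.array([[1, 1, 1, 1],
                            [1, 1, 0, 1],
                            [0, 0, 0, 1],
                            [0, 1, 0, 0],
                            [1, 0, 0, 0]])
        y_train = np.array([1, 1, 0, 0, 0])   # 1 = possibly ZikaV infected, 0 = unlikely

        clf = BernoulliNB().fit(X_train, y_train)
        new_user = np.array([[1, 1, 1, 0]])   # symptoms reported via the mobile app
        print(clf.predict(new_user), clf.predict_proba(new_user))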

  7. Preliminary Findings of Inflight Icing Field Test to Support Icing Remote Sensing Technology Assessment

    NASA Technical Reports Server (NTRS)

    King, Michael; Reehorst, Andrew; Serke, Dave

    2015-01-01

    NASA and the National Center for Atmospheric Research have developed an icing remote sensing technology that has demonstrated skill at detecting and classifying icing hazards in a vertical column above an instrumented ground station. This technology has recently been extended to provide volumetric coverage surrounding an airport. Building on the existing vertical pointing system, the new method for providing volumetric coverage will utilize a vertical pointing cloud radar, a multifrequency microwave radiometer with azimuth and elevation pointing, and a NEXRAD radar. The new terminal area icing remote sensing system processes the data streams from these instruments to derive temperature, liquid water content, and cloud droplet size for each examined point in space. These data are then combined to ultimately provide icing hazard classification along defined approach paths into an airport.

  8. Characterizing Sorghum Panicles using 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from imagery captured using a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect panicles and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
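
    An illustrative sketch of the panicle-detection idea described above: filter points by reflectance and height, cluster the survivors into candidate panicles, and report per-cluster dimensions. The thresholds, file layout and use of DBSCAN are assumptions, not the authors' procedure.

        # Sketch: attribute filtering + clustering of panicle candidates in a plot cloud.
        import numpy as np
        from sklearn.cluster import DBSCAN

        points = np.loadtxt("plot_points.txt")      # hypothetical columns: x, y, z, reflectance
        x, y, z, refl = points.T

        candidate = (refl > np.percentile(refl, 80)) & (z > np.percentile(z, 70))
        xyz = points[candidate, :3]

        labels = DBSCAN(eps=0.05, min_samples=30).fit_predict(xyz)    # ~5 cm neighborhood
        for k in set(labels) - {-1}:
            c = xyz[labels == k]
            length = c[:, 2].max() - c[:, 2].min()                    # vertical extent ~ panicle length
            width = np.linalg.norm(c[:, :2].max(axis=0) - c[:, :2].min(axis=0))
            print(f"panicle {k}: length={length:.3f} m, width={width:.3f} m, points={len(c)}")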

  9. The MJO Transition from Shallow to Deep Convection in CloudSat/CALIPSO Data and GISS GCM Simulations

    NASA Technical Reports Server (NTRS)

    DelGenio, Anthony G.; Chen, Yonghua; Kim, Daehyun; Yao, Mao-Sung

    2013-01-01

    The relationship between convective penetration depth and tropospheric humidity is central to recent theories of the Madden-Julian oscillation (MJO). It has been suggested that general circulation models (GCMs) poorly simulate the MJO because they fail to gradually moisten the troposphere by shallow convection and simulate a slow transition to deep convection. CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data are analyzed to document the variability of convection depth and its relation to water vapor during the MJO transition from shallow to deep convection and to constrain GCM cumulus parameterizations. Composites of cloud occurrence for 10 MJO events show the following anticipated MJO cloud structure: shallow and congestus clouds in advance of the peak, deep clouds near the peak, and upper-level anvils after the peak. Cirrus clouds are also frequent in advance of the peak. The Advanced Microwave Scanning Radiometer for Earth Observing System (EOS) (AMSR-E) column water vapor (CWV) increases by ~5 mm during the shallow-to-deep transition phase, consistent with the idea of moisture preconditioning. Echo-top height of clouds rooted in the boundary layer increases sharply with CWV, with large variability in depth when CWV is between ~46 and 68 mm. International Satellite Cloud Climatology Project cloud classifications reproduce these climatological relationships but correctly identify congestus-dominated scenes only about half the time. A version of the Goddard Institute for Space Studies Model E2 (GISS-E2) GCM with strengthened entrainment and rain evaporation that produces MJO-like variability also reproduces the shallow-deep convection transition, including the large variability of cloud-top height at intermediate CWV values. The variability is due to small grid-scale relative humidity and lapse rate anomalies for similar values of CWV.

  10. Diagnosing Cloud Biases in the GFDL AM3 Model With Atmospheric Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Stuart; Marchand, Roger; Ackerman, Thomas

    In this paper, we define a set of 21 atmospheric states, or recurring weather patterns, for a region surrounding the Atmospheric Radiation Measurement Program's Southern Great Plains site using an iterative clustering technique. The states are defined using dynamic and thermodynamic variables from reanalysis, tested for statistical significance with cloud radar data from the Southern Great Plains site, and are determined every 6 h for 14 years, creating a time series of atmospheric state. The states represent the various stages of the progression of synoptic systems through the region (e.g., warm fronts, warm sectors, cold fronts, cold northerly advection, and high-pressure anticyclones) with a subset of states representing summertime conditions with varying degrees of convective activity. We use the states to classify output from the NOAA/Geophysical Fluid Dynamics Laboratory AM3 model to test the model's simulation of the frequency of occurrence of the states and of the cloud occurrence during each state. The model roughly simulates the frequency of occurrence of the states but exhibits systematic cloud occurrence biases. Comparison of observed and model-simulated International Satellite Cloud Climatology Project histograms of cloud top pressure and optical thickness shows that the model lacks high thin cloud under all conditions, but biases in thick cloud are state-dependent. Frontal conditions in the model do not produce enough thick cloud, while fair-weather conditions produce too much. Finally, we find that increasing the horizontal resolution of the model improves the representation of thick clouds under all conditions but has little effect on high thin clouds. However, increasing resolution also changes the distribution of states, causing an increase in total cloud occurrence bias.

  11. Diagnosing Cloud Biases in the GFDL AM3 Model With Atmospheric Classification

    NASA Astrophysics Data System (ADS)

    Evans, Stuart; Marchand, Roger; Ackerman, Thomas; Donner, Leo; Golaz, Jean-Christophe; Seman, Charles

    2017-12-01

    We define a set of 21 atmospheric states, or recurring weather patterns, for a region surrounding the Atmospheric Radiation Measurement Program's Southern Great Plains site using an iterative clustering technique. The states are defined using dynamic and thermodynamic variables from reanalysis, tested for statistical significance with cloud radar data from the Southern Great Plains site, and are determined every 6 h for 14 years, creating a time series of atmospheric state. The states represent the various stages of the progression of synoptic systems through the region (e.g., warm fronts, warm sectors, cold fronts, cold northerly advection, and high-pressure anticyclones) with a subset of states representing summertime conditions with varying degrees of convective activity. We use the states to classify output from the NOAA/Geophysical Fluid Dynamics Laboratory AM3 model to test the model's simulation of the frequency of occurrence of the states and of the cloud occurrence during each state. The model roughly simulates the frequency of occurrence of the states but exhibits systematic cloud occurrence biases. Comparison of observed and model-simulated International Satellite Cloud Climatology Project histograms of cloud top pressure and optical thickness shows that the model lacks high thin cloud under all conditions, but biases in thick cloud are state-dependent. Frontal conditions in the model do not produce enough thick cloud, while fair-weather conditions produce too much. We find that increasing the horizontal resolution of the model improves the representation of thick clouds under all conditions but has little effect on high thin clouds. However, increasing resolution also changes the distribution of states, causing an increase in total cloud occurrence bias.
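
    A minimal sketch of the state-definition step described above: cluster standardized dynamic and thermodynamic reanalysis variables into recurring atmospheric states. The 21-state count follows the abstract; the clustering algorithm shown (plain k-means), the variable set and file names are assumptions.

        # Sketch: define recurring atmospheric states by clustering reanalysis features.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # hypothetical array: one row per 6-hourly time step, columns e.g.
        # 500-hPa height, 700-hPa temperature, winds, humidity ...
        reanalysis = np.load("sgp_reanalysis_features.npy")

        X = StandardScaler().fit_transform(reanalysis)
        states = KMeans(n_clusters=21, n_init=50, random_state=0).fit_predict(X)

        # 'states' is the 6-hourly time series of atmospheric state used to
        # composite cloud radar data and to classify model output.
        print(np.bincount(states))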

  12. Diagnosing Cloud Biases in the GFDL AM3 Model With Atmospheric Classification

    DOE PAGES

    Evans, Stuart; Marchand, Roger; Ackerman, Thomas; ...

    2017-11-16

    In this paper, we define a set of 21 atmospheric states, or recurring weather patterns, for a region surrounding the Atmospheric Radiation Measurement Program's Southern Great Plains site using an iterative clustering technique. The states are defined using dynamic and thermodynamic variables from reanalysis, tested for statistical significance with cloud radar data from the Southern Great Plains site, and are determined every 6 h for 14 years, creating a time series of atmospheric state. The states represent the various stages of the progression of synoptic systems through the region (e.g., warm fronts, warm sectors, cold fronts, cold northerly advection, and high-pressure anticyclones) with a subset of states representing summertime conditions with varying degrees of convective activity. We use the states to classify output from the NOAA/Geophysical Fluid Dynamics Laboratory AM3 model to test the model's simulation of the frequency of occurrence of the states and of the cloud occurrence during each state. The model roughly simulates the frequency of occurrence of the states but exhibits systematic cloud occurrence biases. Comparison of observed and model-simulated International Satellite Cloud Climatology Project histograms of cloud top pressure and optical thickness shows that the model lacks high thin cloud under all conditions, but biases in thick cloud are state-dependent. Frontal conditions in the model do not produce enough thick cloud, while fair-weather conditions produce too much. Finally, we find that increasing the horizontal resolution of the model improves the representation of thick clouds under all conditions but has little effect on high thin clouds. However, increasing resolution also changes the distribution of states, causing an increase in total cloud occurrence bias.

  13. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery

    USGS Publications Warehouse

    Helmer, E.H.; Kennaway, T.A.; Pedreros, D.H.; Clark, M.L.; Marcano-Vega, H.; Tieszen, L.L.; Ruzycki, T.R.; Schill, S.R.; Carrington, C.M.S.

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius, testing a more detailed classification than earlier work in the latter three islands. Second, we estimate the extents of land cover and protected forest by formation for five islands and ask how land cover has changed over the second half of the 20th century. The image interpretation approach combines image mosaics and ancillary geographic data, classifying the resulting set of raster data with decision tree software. Cloud-free image mosaics for one or two seasons were created by applying regression tree normalization to scene dates that could fill cloudy areas in a base scene. Such mosaics are also known as cloud-filled, cloud-minimized or cloud-cleared imagery, mosaics, or composites. The approach accurately distinguished several classes that more standard methods would confuse; the seamless mosaics aided reference data collection; and the multiseason imagery allowed us to separate drought deciduous forests and woodlands from semi-deciduous ones. Cultivated land areas declined 60 to 100 percent from about 1945 to 2000 on several islands. Meanwhile, forest cover has increased 50 to 950 percent. This trend will likely continue where sugar cane cultivation has dominated. As on the island of Puerto Rico, most higher-elevation forest formations are protected in formal or informal reserves. Similarly, lowland forests, which are the drier forest types on these islands, are not well represented in reserves. Former cultivated lands in lowland areas could provide lands for new reserves of drier forest types. The land-use history of these islands may provide insight for planners in countries currently considering lowland forest clearing for agriculture. Copyright 2008 College of Arts and Sciences.
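
    A hedged sketch of the per-pixel decision tree classification step: stack the cloud-cleared mosaic bands with ancillary layers and classify each pixel from labeled training pixels. The study used dedicated decision tree software; the scikit-learn classifier, file names and depth setting below are illustrative assumptions.

        # Sketch: decision tree land-cover classification of a multiseason mosaic stack.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        stack = np.load("mosaic_and_ancillary_stack.npy")   # shape (bands, rows, cols)
        training_mask = np.load("training_labels.npy")      # shape (rows, cols); 0 = unlabeled

        bands, rows, cols = stack.shape
        X_all = stack.reshape(bands, -1).T                   # one feature row per pixel
        labeled = training_mask.ravel() > 0

        clf = DecisionTreeClassifier(max_depth=12, random_state=0)
        clf.fit(X_all[labeled], training_mask.ravel()[labeled])

        land_cover = clf.predict(X_all).reshape(rows, cols)  # per-pixel forest-formation / land-cover map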

  14. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; hide

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  15. Small negative cloud-to-ground lightning reports at the NASA Kennedy Space Center and Air Force Eastern Range

    NASA Astrophysics Data System (ADS)

    Wilson, Jennifer G.; Cummins, Kenneth L.; Krider, E. Philip

    2009-12-01

    The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) use data from two cloud-to-ground (CG) lightning detection networks, the Cloud-to-Ground Lightning Surveillance System (CGLSS) and the U.S. National Lightning Detection Network™ (NLDN), and a volumetric lightning mapping array, the Lightning Detection and Ranging (LDAR) system, to monitor and characterize lightning that is potentially hazardous to launch or ground operations. Data obtained from these systems during June-August 2006 have been examined to check the classification of small, negative CGLSS reports that have an estimated peak current, ∣Ip∣ less than 7 kA, and to determine the smallest values of Ip that are produced by first strokes, by subsequent strokes that create a new ground contact (NGC), and by subsequent strokes that remain in a preexisting channel (PEC). The results show that within 20 km of the KSC-ER, 21% of the low-amplitude negative CGLSS reports were produced by first strokes, with a minimum Ip of -2.9 kA; 31% were by NGCs, with a minimum Ip of -2.0 kA; and 14% were by PECs, with a minimum Ip of -2.2 kA. The remaining 34% were produced by cloud pulses or lightning events that we were not able to classify.

  16. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Second, cloud coverage is estimated from the Formosat-2 image by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would therefore increase the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. For the pre-processing analysis, the cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel reexamination, and a cross-band filter method. The Box-Counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of the manual examination.
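
    A simplified sketch combining only two of the named pre-processing steps (two-class K-means plus an Otsu cross-check) into a cloud-fraction estimate; the single-band simplification, band choice and file name are assumptions, and the Sobel, reexamination, cross-band and Box-Counting steps are omitted.

        # Sketch: brightness-based cloud statistics from K-means with an Otsu cross-check.
        import numpy as np
        from sklearn.cluster import KMeans
        from skimage.filters import threshold_otsu

        band = np.load("formosat2_band.npy").astype(float)   # hypothetical radiance band

        # 1) two-class K-means on brightness: clouds tend to fall in the brighter cluster
        flat = band.reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(flat)
        cluster_means = [flat[labels == k].mean() for k in (0, 1)]
        bright_cluster = labels.reshape(band.shape) == int(np.argmax(cluster_means))

        # 2) Otsu threshold as an independent cross-check on the candidate mask
        otsu_mask = band > threshold_otsu(band)

        cloud_mask = bright_cluster & otsu_mask
        print("estimated cloud coverage: %.1f%%" % (100 * cloud_mask.mean()))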

  17. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    PubMed Central

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-01-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter. PMID:27983669

  18. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.

    PubMed

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-12-15

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter.
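
    A minimal sketch of the second step described above (semantic classification of the raw SfM point cloud into grape bunches versus canopy) using per-point RGB features; the random forest classifier, file layout and two-class simplification are assumptions, not the paper's algorithm.

        # Sketch: per-point semantic classification of a textured grapevine-row cloud.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        cloud = np.loadtxt("vine_row_cloud.txt")      # hypothetical columns: x, y, z, r, g, b
        rgb = cloud[:, 3:6] / 255.0

        labeled = np.loadtxt("labeled_points.txt")    # hypothetical columns: point index, class (0 canopy, 1 grape bunch)
        idx, cls = labeled[:, 0].astype(int), labeled[:, 1].astype(int)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(rgb[idx], cls)
        semantic_class = clf.predict(rgb)             # class per point over the whole row
        print("grape-bunch points: %d of %d" % ((semantic_class == 1).sum(), len(semantic_class)))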

  19. CIMIDx: Prototype for a Cloud-Based System to Support Intelligent Medical Image Diagnosis With Efficiency.

    PubMed

    Bhavani, Selvaraj Rani; Senthilkumar, Jagatheesan; Chilambuchelvan, Arul Gnanaprakasam; Manjula, Dhanabalachandran; Krishnamoorthy, Ramasamy; Kannan, Arputharaj

    2015-03-27

    The Internet has greatly enhanced health care, helping patients stay up-to-date on medical issues and general knowledge. Many cancer patients use the Internet for cancer diagnosis and related information. Recently, cloud computing has emerged as a new way of delivering health services but currently, there is no generic and fully automated cloud-based self-management intervention for breast cancer patients, as practical guidelines are lacking. We investigated the prevalence and predictors of cloud use for medical diagnosis among women with breast cancer to gain insight into meaningful usage parameters to evaluate the use of generic, fully automated cloud-based self-intervention, by assessing how breast cancer survivors use a generic self-management model. This goal was implemented and evaluated with a new prototype called "CIMIDx", based on representative association rules that support the diagnosis of medical images (mammograms). The proposed Cloud-Based System to Support Intelligent Medical Image Diagnosis (CIMIDx) prototype includes two modules. The first is the design and development of the CIMIDx training and test cloud services. Deployed in the cloud, the prototype can be used for diagnosis and screening mammography by assessing the cancers detected, tumor sizes, histology, and stage of classification accuracy. To analyze the prototype's classification accuracy, we conducted an experiment with data provided by clients. Second, by monitoring cloud server requests, the CIMIDx usage statistics were recorded for the cloud-based self-intervention groups. We conducted an evaluation of the CIMIDx cloud service usage, in which browsing functionalities were evaluated from the end-user's perspective. We performed several experiments to validate the CIMIDx prototype for breast health issues. The first set of experiments evaluated the diagnostic performance of the CIMIDx framework. We collected medical information from 150 breast cancer survivors from hospitals and health centers. The CIMIDx prototype achieved high sensitivity of up to 99.29%, and accuracy of up to 98%. The second set of experiments evaluated CIMIDx use for breast health issues, using t tests and Pearson chi-square tests to assess differences, and binary logistic regression to estimate the odds ratio (OR) for the predictors' use of CIMIDx. For the prototype usage statistics for the same 150 breast cancer survivors, we interviewed 114 (76.0%), through self-report questionnaires from CIMIDx blogs. The frequency of log-ins/person ranged from 0 to 30, and total duration/person from 0 to 1500 minutes (25 hours). The 114 participants continued logging in to all phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). The overall performance of the prototype fell in the good category for reported usefulness (P=.77), overall satisfaction (P=.31), ease of navigation (P=.89), and user friendliness (P=.31). Positive evaluations given by 100 participants via a Web-based questionnaire supported our hypothesis. The present study shows that women felt favorably about the use of a generic fully automated cloud-based self-management prototype. The study also demonstrated that the CIMIDx prototype resulted in the detection of more cancers in screening and diagnosing patients, with an increased accuracy rate.

  20. Automatic Road Sign Inventory Using Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.

    2016-06-01

    The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this purpose, RGB imagery is used: the 3D points are projected into the corresponding images and the RGB data within the bounding box defined by the projected points are analysed. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner and can be applied to improve the maintenance planning of the road network or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.
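
    A hedged sketch of the projection step described above: map detected 3D sign points into a synchronized RGB frame with a pinhole model and crop the bounding box for semantic analysis. The camera intrinsics and extrinsics below are placeholders, not LYNX calibration values.

        # Sketch: project detected sign points into an RGB frame and derive a bounding box.
        import numpy as np

        K = np.array([[1500.0, 0.0, 960.0],       # assumed intrinsic matrix
                      [0.0, 1500.0, 540.0],
                      [0.0, 0.0, 1.0]])
        R = np.eye(3)                              # assumed camera rotation (world -> camera)
        t = np.array([0.0, 0.0, 0.0])              # assumed camera translation

        sign_points = np.load("detected_sign_points.npy")   # hypothetical (N, 3) points of one sign

        cam = (R @ sign_points.T).T + t            # transform into the camera frame
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                # perspective division -> pixel coordinates

        u_min, v_min = uv.min(axis=0).astype(int)
        u_max, v_max = uv.max(axis=0).astype(int)
        print("analyse RGB pixels in bounding box:", (u_min, v_min, u_max, v_max))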

  1. Evolution in Cloud Population Statistics of the MJO: From AMIE Field Observations to Global-Cloud Permitting Models Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    This is a multi-institutional, collaborative project using a three-tier modeling approach to bridge field observations and global cloud-permitting models, with emphases on cloud population structural evolution through various large-scale environments. Our contribution was in data analysis: the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. We contributed to two areas of data analysis: the development of a synergistic cloud and precipitation classification that identifies different cloud types (e.g., shallow cumulus, cirrus) and precipitation types (shallow, deep, convective, stratiform) using profiling ARM observations, and the development of a quantitative precipitation rate retrieval algorithm using profiling ARM observations. Similar efforts have been developed in the past for precipitation (weather radars), but not for the millimeter-wavelength (cloud) radar deployed at the ARM sites.

  2. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are next separated into individual trees. Five hours of processing on the 12-node computing cluster result in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
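
    An illustrative sketch of the PCA-driven local dimensionality idea: for each point, the eigenvalues of the neighborhood covariance indicate whether points scatter in one, two or three directions, and fully 3D scatter is the cue for the tree class. The neighborhood size, the scatter ratio and its threshold are assumptions, and this serial loop ignores the distributed Spark setting.

        # Sketch: per-point local PCA to flag 3D-scattering (tree-like) points.
        import numpy as np
        from scipy.spatial import cKDTree

        pts = np.load("street_tile.npy")              # hypothetical (N, 3) tile of the mapping cloud
        tree = cKDTree(pts)

        def scatter_ratio(p, k=30):
            _, idx = tree.query(p, k=k)
            nbh = pts[idx] - pts[idx].mean(axis=0)
            eigvals = np.linalg.eigvalsh(nbh.T @ nbh / k)   # ascending eigenvalues of the covariance
            return eigvals[0] / eigvals[2]                  # close to 1 means fully 3D scatter

        is_tree_like = np.array([scatter_ratio(p) > 0.25 for p in pts])
        print("points flagged as 3D-scattering (tree candidates):", is_tree_like.sum())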

  3. Aerosol properties, source identification, and cloud processing in orographic clouds measured by single particle mass spectrometry on a central European mountain site during HCCT-2010

    NASA Astrophysics Data System (ADS)

    Roth, A.; Schneider, J.; Klimach, T.; Mertes, S.; van Pinxteren, D.; Herrmann, H.; Borrmann, S.

    2016-01-01

    Cloud residues and out-of-cloud aerosol particles with diameters between 150 and 900 nm were analysed by online single particle aerosol mass spectrometry during the 6-week study Hill Cap Cloud Thuringia (HCCT)-2010 in September-October 2010. The measurement location was the mountain Schmücke (937 m a.s.l.) in central Germany. More than 160 000 bipolar mass spectra from out-of-cloud aerosol particles and more than 13 000 bipolar mass spectra from cloud residual particles were obtained and were classified using a fuzzy c-means clustering algorithm. Analysis of the uncertainty of the sorting algorithm was conducted on a subset of the data by comparing the clustering output with particle-by-particle inspection and classification by the operator. This analysis yielded a false classification probability between 13 and 48 %. Additionally, particle types were identified by specific marker ions. The results from the ambient aerosol analysis show that 63 % of the analysed particles belong to clusters having a diurnal variation, suggesting that local or regional sources dominate the aerosol, especially for particles containing soot and biomass burning particles. In the cloud residues, the relative percentage of large soot-containing particles and particles containing amines was found to be increased compared to the out-of-cloud aerosol, while, in general, organic particles were less abundant in the cloud residues. In the case of amines, this can be explained by the high solubility of the amines, while the large soot-containing particles were found to be internally mixed with inorganics, which explains their activation as cloud condensation nuclei. Furthermore, the results show that during cloud processing, both sulfate and nitrate are added to the residual particles, thereby changing the mixing state and increasing the fraction of particles with nitrate and/or sulfate. This is expected to lead to higher hygroscopicity after cloud evaporation, and therefore to an increase of the particles' ability to act as cloud condensation nuclei after their cloud passage.
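
    A compact, generic fuzzy c-means implementation in NumPy, mirroring the clustering idea applied above to the bipolar mass spectra; the cluster count, fuzzifier m, iteration count and the random stand-in spectra are assumptions, and the study's configuration may differ.

        # Sketch: fuzzy c-means clustering of (stand-in) single-particle mass spectra.
        import numpy as np

        def fuzzy_c_means(X, c=8, m=2.0, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))           # membership matrix (N, c)
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]      # membership-weighted centers
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))
                U /= U.sum(axis=1, keepdims=True)                 # renormalize memberships
            return centers, U

        spectra = np.random.rand(1000, 300)     # stand-in for normalized particle mass spectra
        centers, memberships = fuzzy_c_means(spectra)
        hard_labels = memberships.argmax(axis=1)
        print(np.bincount(hard_labels))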

  4. Infrared cloud imaging in support of Earth-space optical communication.

    PubMed

    Nugent, Paul W; Shaw, Joseph A; Piazzolla, Sabino

    2009-05-11

    The increasing need for high data return from near-Earth and deep-space missions is driving a demand for the establishment of Earth-space optical communication links. These links will require a nearly obstruction-free path to the communication platform, so there is a need to measure spatial and temporal statistics of clouds at potential ground-station sites. A technique is described that uses a ground-based thermal infrared imager to provide continuous day-night cloud detection and classification according to the cloud optical depth and potential communication channel attenuation. The benefit of retrieving cloud optical depth and corresponding attenuation is illustrated through measurements that identify cloudy times when optical communication may still be possible through thin clouds.

  5. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    NASA Astrophysics Data System (ADS)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal features along with spectral and spatial ones. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved in our experiments state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.
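
    A deliberately simplified sketch of the sequence-encoder idea: an LSTM encodes a per-pixel time series of band values (clouds and all) and a linear layer maps the final hidden state to crop classes. The paper uses convolutional recurrent layers over image sequences; this pixel-wise LSTM, the layer sizes and the class count are simplifying assumptions.

        # Sketch: recurrent encoding of an unfiltered per-pixel Sentinel-2 time series.
        import torch
        import torch.nn as nn

        class PixelSequenceEncoder(nn.Module):
            def __init__(self, n_bands=10, hidden=64, n_classes=17):
                super().__init__()
                self.rnn = nn.LSTM(input_size=n_bands, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                 # x: (batch, time steps, bands)
                _, (h_n, _) = self.rnn(x)
                return self.head(h_n[-1])         # class logits from the last hidden state

        model = PixelSequenceEncoder()
        toa_series = torch.randn(8, 30, 10)       # 8 pixels, 30 acquisitions, 10 TOA bands
        print(model(toa_series).shape)            # torch.Size([8, 17])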

  6. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such instruments can yield cloud occurrence and cloud base heights for a paired system. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - was developed at Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other is placed on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to simultaneously gather pictures every 5 min. Continuous operation of the cameras has been implemented since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that determines the area of overlap of the two pictures: when two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures have the best overlap. In this study, pictures of overcast sky proved difficult to process for cloud base height and were excluded from processing. Initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014 show measured cloud base heights ranging from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during these times. However, the increase in convection in the atmosphere can be deduced from higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar energy in the atmosphere after this time. The results show the potential of these instruments to determine cloud base heights over prolonged time intervals. The continuous operation of these instruments is implemented to capture the seasonal variation of cloud base heights in this part of the world and to add to the much-needed dataset for future climate studies at Manila Observatory.
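
    A simplified sketch of the merit-function idea described above: slide one co-gridded sky image over the other, find the column shift with the best overlap, and convert that pixel parallax to a cloud base height. The 489 m baseline comes from the abstract; the angular pixel scale, the near-zenith geometry and the one-dimensional search are simplifying assumptions.

        # Sketch: cloud base height from the best-overlap shift between paired sky images.
        import numpy as np

        img_a = np.load("sky_cam_a.npy").astype(float)   # hypothetical co-gridded sky images
        img_b = np.load("sky_cam_b.npy").astype(float)

        baseline_m = 489.0
        rad_per_pixel = np.deg2rad(0.1)                  # assumed angular resolution per pixel

        def merit(shift):
            # mean absolute difference over the overlapping columns for a given shift
            a = img_a[:, shift:]
            b = img_b[:, :img_b.shape[1] - shift]
            return np.mean(np.abs(a - b))

        shifts = np.arange(1, 200)
        best_shift = shifts[np.argmin([merit(s) for s in shifts])]

        parallax = best_shift * rad_per_pixel
        cloud_base_height_m = baseline_m / np.tan(parallax)
        print("estimated cloud base height: %.0f m" % cloud_base_height_m)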

  7. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases; in other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The feature extraction is therefore based on the Automatic Cloud Cover Assessment (ACCA) algorithm, which relies on physical characteristics of clouds to distinguish clouds from other objects, and on the Fmask algorithm (Zhu et al., 2012), which uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Spatial and temporal information are also important for satellite images; consequently, the co-occurrence matrix and the temporal variance with uniformity of the major principal axis are also used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
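
    A hedged sketch of the threshold-free idea: train an SVM on per-pixel features (ACCA/Fmask-style band ratios plus texture and temporal measures) and predict the cloud / non-cloud / other label. The feature files, label encoding and SVM hyperparameters below are assumptions.

        # Sketch: SVM classification of pixels into cloud, non-cloud and other.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        features = np.load("etm_pixel_features.npy")   # (n_pixels, n_features): band ratios, texture, ...
        labels = np.load("pixel_labels.npy")           # 0 = non-cloud, 1 = cloud, 2 = other

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(features[:5000], labels[:5000])        # train on a labeled subset

        predicted = clf.predict(features)              # classify every pixel in the scene
        print("cloud fraction: %.1f%%" % (100 * (predicted == 1).mean()))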

  8. Ground-based microwave radar and optical lidar signatures of volcanic ash plumes: models, observations and retrievals

    NASA Astrophysics Data System (ADS)

    Mereu, Luigi; Marzano, Frank; Mori, Saverio; Montopoli, Mario; Cimini, Domenico; Martucci, Giovanni

    2013-04-01

    The detection and quantitative retrieval of volcanic ash clouds is of significant interest due to its environmental, climatic and socio-economic effects. Real-time monitoring of such phenomena is crucial, also for the initialization of dispersion models. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low level eruptions. Their imagery is available every 15-30 minutes and suffers from a relatively poor spatial resolution. Moreover, the field-of-view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels and their overall utility is reduced at night. Ground-based microwave radars may represent an important tool to detect and, to a certain extent, mitigate the hazard from the ash clouds. Ground-based weather radar systems can provide data for determining the ash volume, total mass and height of eruption clouds. Methodological studies have recently investigated the possibility of using ground-based single-polarization and dual-polarization radar system for the remote sensing of volcanic ash cloud. A microphysical characterization of volcanic ash was carried out in terms of dielectric properties, size distribution and terminal fall speed, assuming spherically-shaped particles. A prototype of volcanic ash radar retrieval (VARR) algorithm for single-polarization systems was proposed and applied to S-band and C-band weather radar data. The sensitivity of the ground-based radar measurements decreases as the ash cloud is farther so that for distances greater than about 50 kilometers fine ash might be not detected anymore by microwave radars. In this respect, radar observations can be complementary to satellite, lidar and aircraft observations. Active remote sensing retrieval from ground, in terms of detection, estimation and sensitivity, of volcanic ash plumes is not only dependent on the sensor specifications, but also on the range and ash cloud distribution. The minimum detectable signal can be increased, for a given system and ash plume scenario, by decreasing the observation range and increasing the operational frequency using a multi-sensor approach, but also exploiting possible polarimetric capabilities. In particular, multi-wavelengths lidars can be complementary systems useful to integrate radar-based ash particle measurement. This work, starting from the results of a previous study and from above mentioned issues, is aimed at quantitatively assessing the optimal choices for microwave and millimeter-wave radar systems with a dual-polarization capability for real-time ash cloud remote sensing to be used in combination with an optical lidar. The physical-electromagnetic model of ash particle distributions is systematically reviewed and extended to include non-spherical particle shapes, vesicular composition, silicate content and orientation phenomena. The radar and lidar scattering and absorption response is simulated and analyzed in terms of self-consistent polarimetric signatures for ash classification purposes and correlation with ash concentration and mean diameter for quantitative retrieval aims. A sensitivity analysis to ash concentration, as a function of sensor specifications, range and ash category, is carried out trying to assess the expected multi-sensor multi-spectral system performances and limitations. 
The multi-sensor multi-wavelength polarimetric model-based approach can be used within a particle classification and estimation scheme, based on the VARR Bayesian metrics. As an application, the ground-based observation of the Eyjafjallajökull volcanic ash plume on 15-16 May 2010, carried out at the Atmospheric Research Station at Mace Head, Carna (Ireland) with MIRA36 35-GHz Ka-Band Doppler cloud radar and CHM15K lidar/ceilometer at 1064-nm wavelength, has been considered. Results are discussed in terms of retrievals and intercomparison with other ground-based and satellite-based sensors.

  9. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  10. Evolving land cover classification algorithms for multispectral and multitemporal imagery

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.; Theiler, James P.; Bloch, Jeffrey J.; Harvey, Neal R.; Perkins, Simon J.; Szymanski, John J.; Young, Aaron C.

    2002-01-01

    The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.

  11. Determine precipitation rates from visible and infrared satellite images of clouds by pattern recognition technique. Progress Report, 1 Jul. 1985 - 31 Mar. 1987 Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Weinman, James A.; Garan, Louis

    1987-01-01

    A more advanced cloud pattern analysis algorithm was subsequently developed to take the shape and brightness of the various clouds into account in a manner that is more consistent with the human analyst's perception of GOES cloud imagery. The results of that classification scheme were compared with precipitation probabilities observed from ships of opportunity off the U.S. east coast to derive empirical regressions between cloud types and precipitation probability. The cloud morphology was then quantitatively and objectively used to map precipitation probabilities during two winter months during which severe cold air outbreaks were observed over the northwest Atlantic. Precipitation probabilities associated with various cloud types are summarized. Maps of precipitation probability derived from the cloud morphology analysis program for the two months were compared with the precipitation probability derived from thirty years of ship observations.

  12. Classification of Small Negative Lightning Reports at the KSC-ER

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Cummins, Kenneth L.; Krider, Philip

    2008-01-01

    The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) operate an extensive suite of lightning sensors because Florida experiences the highest area density of ground strikes in the United States, with area densities approaching 16 fl/sq km/yr when accumulated in 10x10 km (100 sq km) grids. The KSC-ER use data derived from two cloud-to-ground (CG) lightning detection networks, the "Cloud-to-Ground Lightning Surveillance System" (CGLSS) and the U.S. National Lightning Detection Network™ (NLDN), plus a 3-dimensional lightning mapping system, the Lightning Detection and Ranging (LDAR) system, to provide warnings for ground operations and to ensure mission safety during space launches. For operational applications at the KSC-ER it is important to understand the performance of each lightning detection system in considerable detail. In this work we examine a specific subset of the CGLSS stroke reports that have low values of the negative inferred peak current, Ip, i.e. values between 0 and -7 kA, and were thought to produce a new ground contact (NGC). When possible, the NLDN and LDAR systems were used to validate the CGLSS classification and to determine how many of these reported strokes were first strokes, subsequent strokes in a pre-existing channel (PEC), or cloud pulses that the CGLSS misclassified as CG strokes. It is scientifically important to determine the smallest current that can reach the ground either in the form of a first stroke or by way of a subsequent stroke that creates a new ground contact. In Biagi et al. (2007), 52 low-amplitude, negative return strokes (∣Ip∣ ≤ 10 kA) were evaluated in southern Arizona, northern Texas, and southern Oklahoma. The authors found that 50-87% of the small NLDN reports could be classified as CG (either first or subsequent strokes) on the basis of video and waveform recordings. Low-amplitude return strokes are interesting because they are usually difficult to detect, and they are thought to bypass conventional lightning protection that relies on a sufficient attractive radius to prevent "shielding failure" (Golde, 1977). They also have larger location errors compared to the larger current events. In this study, we use the estimated peak current provided by the CGLSS and the results of our classification to determine the minimum Ip for each category of CG stroke and its probability of occurrence. Where possible, these results are compared to the findings in the literature.

  13. The Dust Cloud TGU H1192 (LDN 1525) in Auriga. II

    NASA Astrophysics Data System (ADS)

    Boyle, Richard P.; Janusz, Robert; Straizys, Vytautas; Zdanavicius, Kazimieras; Maskoliunas, Marius; Kazlauskas, Algirdas

    2016-01-01

    The results of a new investigation of interstellar extinction in the direction of the emission nebulae Sh2-231 and Sh2-235 are presented. The investigation is based on CCD photometry and photometric MK classification in seven areas of 12' by 12' size in the Vilnius seven-color photometric system down to V = 19 mag. Additionally, for the same task we applied 519 red clump giants identified in the surrounding 1.5 deg. by 1.5 deg. area using the results of photometry in the 2MASS and WISE surveys. The run of extinction with distance allows us to determine distances to the dust clouds and their extinctions. We compare these new, more detailed results with the preliminary results described in our previous paper (V. Straizys et al. 2010, Baltic Astronomy, 19, 169) and the AAS communication at the AAS Meeting No. 219 (Austin), 349.12. The relation of the TGU H1192 dust cloud with the Auriga OB1 association is discussed.

  14. CIMIDx: Prototype for a Cloud-Based System to Support Intelligent Medical Image Diagnosis With Efficiency

    PubMed Central

    2015-01-01

    Background The Internet has greatly enhanced health care, helping patients stay up-to-date on medical issues and general knowledge. Many cancer patients use the Internet for cancer diagnosis and related information. Recently, cloud computing has emerged as a new way of delivering health services but currently, there is no generic and fully automated cloud-based self-management intervention for breast cancer patients, as practical guidelines are lacking. Objective We investigated the prevalence and predictors of cloud use for medical diagnosis among women with breast cancer to gain insight into meaningful usage parameters to evaluate the use of generic, fully automated cloud-based self-intervention, by assessing how breast cancer survivors use a generic self-management model. This goal was implemented and evaluated with a new prototype called “CIMIDx”, based on representative association rules that support the diagnosis of medical images (mammograms). Methods The proposed Cloud-Based System to Support Intelligent Medical Image Diagnosis (CIMIDx) prototype includes two modules. The first is the design and development of the CIMIDx training and test cloud services. Deployed in the cloud, the prototype can be used for diagnosis and screening mammography by assessing the cancers detected, tumor sizes, histology, and stage of classification accuracy. To analyze the prototype’s classification accuracy, we conducted an experiment with data provided by clients. Second, by monitoring cloud server requests, the CIMIDx usage statistics were recorded for the cloud-based self-intervention groups. We conducted an evaluation of the CIMIDx cloud service usage, in which browsing functionalities were evaluated from the end-user’s perspective. Results We performed several experiments to validate the CIMIDx prototype for breast health issues. The first set of experiments evaluated the diagnostic performance of the CIMIDx framework. We collected medical information from 150 breast cancer survivors from hospitals and health centers. The CIMIDx prototype achieved high sensitivity of up to 99.29%, and accuracy of up to 98%. The second set of experiments evaluated CIMIDx use for breast health issues, using t tests and Pearson chi-square tests to assess differences, and binary logistic regression to estimate the odds ratio (OR) for the predictors’ use of CIMIDx. For the prototype usage statistics for the same 150 breast cancer survivors, we interviewed 114 (76.0%), through self-report questionnaires from CIMIDx blogs. The frequency of log-ins/person ranged from 0 to 30, and total duration/person from 0 to 1500 minutes (25 hours). The 114 participants continued logging in to all phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). The overall performance of the prototype fell in the good category for reported usefulness (P=.77), overall satisfaction (P=.31), ease of navigation (P=.89), and user friendliness (P=.31). Positive evaluations given by 100 participants via a Web-based questionnaire supported our hypothesis. Conclusions The present study shows that women felt favorably about the use of a generic fully automated cloud-based self-management prototype. The study also demonstrated that the CIMIDx prototype resulted in the detection of more cancers in screening and diagnosing patients, with an increased accuracy rate. PMID:25830608

  15. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    [Excerpt fragments only: Figure 4, an architectural diagram of running Blender on Amazon EC2 through Nimbis, and an example of classification of streaming data showing input digit images and all digit prototypes (cluster centers) found, with size proportional to frequency.]

  16. Use of Probability Distribution Functions for Discriminating Between Cloud and Aerosol in Lidar Backscatter Data

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Vaughan, Mark A.; Winker, David M.; Hostetler, Chris A.; Poole, Lamont R.; Hlavka, Dennis; Hart, William; McGill, Matthew

    2004-01-01

    In this paper we describe the algorithm that will be used during the upcoming Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission for discriminating between clouds and aerosols detected in two-wavelength backscatter lidar profiles. We first analyze single-test and multiple-test classification approaches based on one-dimensional and multiple-dimensional probability density functions (PDFs) in the context of a two-class feature identification scheme. From these studies we derive an operational algorithm based on a set of 3-dimensional probability distribution functions characteristic of clouds and aerosols. A dataset acquired by the Cloud Physics Lidar (CPL) is used to test the algorithm. Comparisons are conducted between the CALIPSO algorithm results and the CPL data product. The results obtained show generally good agreement between the two methods. However, of a total of 228,264 layers analyzed, approximately 5.7% are classified as different types by the CALIPSO and CPL algorithms. This disparity is shown to be due largely to the misclassification of clouds as aerosols by the CPL algorithm. The use of 3-dimensional PDFs in the CALIPSO algorithm is found to significantly reduce this type of error. Dust presents a special case. Because the intrinsic scattering properties of dust layers can be very similar to those of clouds, additional algorithm testing was performed using an optically dense layer of Saharan dust measured during the Lidar In-space Technology Experiment (LITE). In general, the method is shown to distinguish reliably between dust layers and clouds. The relatively few erroneous classifications occurred most often in the LITE data, in those regions of the Saharan dust layer where the optical thickness was the highest.
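
    A minimal sketch of the two-class PDF idea described above (not the operational CALIPSO code): the empirical 3-dimensional PDFs are stood in for by Gaussians over three hypothetical layer attributes, and a layer is assigned to whichever class gives it the higher probability density.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Stand-in PDFs over (mean attenuated backscatter, color ratio, mid-layer altitude km).
# The real algorithm uses empirical look-up tables; the numbers here are illustrative.
cloud_pdf = multivariate_normal(mean=[0.02, 1.0, 8.0], cov=np.diag([1e-4, 0.05, 9.0]))
aerosol_pdf = multivariate_normal(mean=[0.002, 0.4, 3.0], cov=np.diag([1e-6, 0.02, 4.0]))

def classify_layer(features):
    """Label one detected layer as 'cloud' or 'aerosol'."""
    return "cloud" if cloud_pdf.pdf(features) > aerosol_pdf.pdf(features) else "aerosol"

print(classify_layer([0.015, 0.9, 9.5]))   # -> cloud
print(classify_layer([0.003, 0.5, 2.5]))   # -> aerosol
```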

  17. Cloud and aerosol occurrences in the UTLS region across Pakistan during summer monsoon seasons using CALIPSO and CloudSat observations

    NASA Astrophysics Data System (ADS)

    Chishtie, Farrukh

    2016-04-01

    As part of NASA's A-Train constellation, CloudSat and CALIPSO provide unprecedented vertical observations of clouds and aerosols. Using observational data from both of these satellites, we conduct a multi-year analysis (2006-2014) of the UTLS (Upper Troposphere and Lower Stratosphere) region. We map out cloud and aerosol occurrences in this region across Pakistan, specifically around the summer monsoon season. Over the past five years, Pakistan has faced tremendous challenges due to massive flooding as well as earlier brief monsoon seasons of low precipitation and short drought periods. This motivates the present study towards understanding the deep convective and related dynamics in this season, which can influence cloud and aerosol transport in the region. Further, while global studies exist, the goal here is a detailed study of clouds, aerosols and their interplay across Pakistan. Due to a dearth of ground observations, vertical profiling satellites are essential, and this study provides a dedicated focus on the UTLS domain, since both the properties and dynamics of clouds and aerosols have to be studied in a wider context to better understand the monsoon season and its onset in this region. With the CALIPSO Vertical Feature Mask (VFM), Total Attenuated Backscatter (TAB) and Depolarization Ratio (DR) products, as well as the combined CloudSat 2B-GEOPROF-LIDAR (Radar-Lidar Cloud Geometrical Profile) and 2B-CLDCLASS-LIDAR (Radar-Lidar Cloud Classification) products, we find thin cirrus clouds in the UTLS region during June-September of 2006-2014. There are marked differences between day and night observations in both satellite retrievals, with more cloud occurrences in the UTLS region found at night. The dedicated CloudSat products 2B-CLDCLASS (cloud classification) and 2C-TAU (cloud optical depth) further confirm the presence of sub-visual and thin cirrus clouds in the UTLS region during the summer monsoon season. CALIPSO observations also show a significant presence of aerosol layers in the troposphere before the onset of precipitation, with layer thicknesses of 1-4 km and increasing thickness over the 2009-2014 period. Implications of these findings are detailed in this presentation.

  18. Automated lidar-derived canopy height estimates for the Upper Mississippi River System

    USGS Publications Warehouse

    Hlavacek, Enrika

    2015-01-01

    Land cover/land use (LCU) classifications serve as important decision support products for researchers and land managers. The LCU classifications produced by the U.S. Geological Survey’s Upper Midwest Environmental Sciences Center (UMESC) include canopy height estimates that are assigned through manual aerial photography interpretation techniques. In an effort to improve upon these techniques, this project investigated the use of high-density lidar data for the Upper Mississippi River System to determine canopy height. An ArcGIS tool was developed to automatically derive height modifier information based on the extent of land cover features for forest classes. The measurement of canopy height included a calculation of the average height from lidar point cloud data as well as the inclusion of a local maximum filter to identify individual tree canopies. Results were compared to original manually interpreted height modifiers and to field survey data from U.S. Forest Service Forest Inventory and Analysis plots. This project demonstrated the effectiveness of utilizing lidar data to more efficiently assign height modifier attributes to LCU classifications produced by the UMESC.
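
    For illustration, the two canopy-height measures mentioned above (an average height plus a local-maximum filter for individual tree tops) might be computed as follows, assuming the lidar returns have already been gridded into a canopy height model raster; the window size and the 2 m vegetation threshold are hypothetical choices:

```python
import numpy as np
from scipy.ndimage import maximum_filter

# chm: canopy height model (first-return height minus ground height), metres per cell.
chm = np.random.default_rng(0).uniform(0.0, 30.0, size=(200, 200))  # stand-in data

mean_canopy_height = chm[chm > 2.0].mean()        # average height over vegetated cells

# A cell is a candidate tree top if it equals the maximum within a 5 x 5 window.
local_max = (chm == maximum_filter(chm, size=5)) & (chm > 2.0)
tree_top_heights = chm[local_max]
print(mean_canopy_height, tree_top_heights.size)
```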

  19. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    Development of laser scanning technologies has promoted tree monitoring studies to a new level, as the laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce an algorithm based on probability matrix computation for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating if it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values calculated from the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud gives the opportunity to classify even very small trees, the accuracy of the results is reduced in low point density areas further away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
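
    A minimal sketch of the probability-matrix step described above (illustrative, not the authors' implementation), assuming `points` is an (N, 3) array of x, y, z coordinates and a hypothetical 0.5 m cell size; trunk detection would then select local maxima of the returned grid and grow the 'tree' class around them:

```python
import numpy as np

def trunk_probability_grid(points, cell=0.5):
    """Grid the XY plane and score each cell by the number of points above it."""
    xy = points[:, :2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)   # cell index per point
    counts = np.zeros(idx.max(axis=0) + 1)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)                # point density per cell
    return counts / counts.max()                                 # normalise to [0, 1]
```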

  20. Droplet Size Distributions as a function of rainy system type and Cloud Condensation Nuclei concentrations

    NASA Astrophysics Data System (ADS)

    Cecchini, Micael A.; Machado, Luiz A. T.; Artaxo, Paulo

    2014-06-01

    This work aims to study typical Droplet Size Distributions (DSDs) for different types of precipitation systems and Cloud Condensation Nuclei concentrations over the Vale do Paraíba region in southeastern Brazil. Numerous instruments were deployed during the CHUVA (Cloud processes of tHe main precipitation systems in Brazil: a contribUtion to cloud resolVing modeling and to the GPM) Project in Vale do Paraíba campaign, from November 22, 2011 through January 10, 2012. Measurements of CCN (Cloud Condensation Nuclei) and total particle concentrations, along with measurements of rain DSDs and standard atmospheric properties, including temperature, pressure and wind intensity and direction, were made specifically for this study. The measured DSDs were parameterized with a gamma function using the moment method. The three gamma parameters were arranged in a three-dimensional space, and subclasses were identified using cluster analysis. Seven DSD categories were chosen to represent the different types of DSDs. The DSD classes were useful in characterizing precipitation events both individually and as a group of systems with similar properties. The rainfall regime classification system was employed to categorize rainy events as local convective rainfall, organized convection rainfall and stratiform rainfall. Furthermore, the frequencies of the seven DSD classes were associated with each type of rainy event. The rainfall categories were also employed to evaluate the impact of the CCN concentration on the DSDs. In the stratiform rain events, the polluted cases had a statistically significant increase in the total rain droplet concentrations (TDCs) compared to cleaner events. An average concentration increase from 668 cm-3 to 2012 cm-3 for CCN at 1% supersaturation was found to be associated with an increase of approximately 87 m-3 in TDC for those events. For the local convection cases, polluted events presented a 10% higher mass weighted mean diameter (Dm) on average. For the organized convection events, no significant results were found.
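
    The moment-method gamma fit mentioned above can be sketched as follows for the DSD form N(D) = N0 * D^mu * exp(-lambda * D); this illustrative version uses the mean and variance of the measured drop diameters, whereas the authors' exact moment combination may differ:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def fit_gamma_dsd(diameters, counts):
    """diameters: bin centres (mm); counts: drop concentration per bin (m^-3)."""
    n_total = counts.sum()
    mean = (diameters * counts).sum() / n_total
    var = ((diameters - mean) ** 2 * counts).sum() / n_total
    lam = mean / var                       # slope parameter (mm^-1)
    mu = mean ** 2 / var - 1.0             # shape parameter (dimensionless)
    n0 = n_total * lam ** (mu + 1.0) / gamma_fn(mu + 1.0)
    return n0, mu, lam
```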

  1. (Un)Natural Disasters: The Electoral Cycle Outweighs the Hydrologic Cycle in Drought Declaration in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2016-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose big challenges to data processing. The challenge is not restricted to optical sensors: infrared sounders and radar images have also increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) trained on problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
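
    One of the speed-up strategies named above, random Fourier features, can be sketched in a few lines; this is the generic Rahimi-Recht construction for approximating an RBF kernel so that a linear (or GP) model scales to millions of samples, not the authors' exact implementation:

```python
import numpy as np

def random_fourier_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Map X (n_samples, n_dims) to an n_features-dimensional randomized feature space."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# A linear classifier/regressor trained on random_fourier_features(X) approximates
# its RBF-kernel counterpart at a fraction of the memory and compute cost.
```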

  2. A satellite rainfall retrieval technique over northern Algeria based on the probability of rainfall intensities classification from MSG-SEVIRI

    NASA Astrophysics Data System (ADS)

    Lazri, Mourad; Ameur, Soltane

    2016-09-01

    In this paper, an algorithm for rainfall estimation from Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI), based on the classification of rainfall intensity probabilities, has been developed. The classification scheme uses various spectral parameters of SEVIRI that provide information about cloud top temperature and optical and microphysical cloud properties. The presented method is developed and trained for the north of Algeria. The calibration of the method is carried out using, as a reference, rain classification fields derived from radar for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. The comparisons between satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well. Indeed, the correlation coefficient reaches a significant level (r = 0.87). The values of POD, POFD and FAR are 80%, 13% and 25%, respectively. Also, for a rainfall estimation of about 614 mm, the RMSD, Bias, MAD and PD indicate 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
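
    The categorical scores quoted above follow from a 2 x 2 contingency table of satellite versus radar rain/no-rain decisions; a minimal sketch using the standard definitions (illustrative, not the authors' validation code):

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    pod = hits / (hits + misses)                                # probability of detection
    far = false_alarms / (hits + false_alarms)                  # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)    # probability of false detection
    return pod, far, pofd
```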

  3. Multi-sensor measurements of mixed-phase clouds above Greenland

    NASA Astrophysics Data System (ADS)

    Stillwell, Robert A.; Shupe, Matthew D.; Thayer, Jeffrey P.; Neely, Ryan R.; Turner, David D.

    2018-04-01

    Liquid-only and mixed-phase clouds in the Arctic strongly affect the regional surface energy and ice mass budgets, yet much remains unknown about the nature of these clouds due to the lack of intensive measurements. Lidar measurements of these clouds are challenged by very large signal dynamic range, which makes even seemingly simple tasks, such as thermodynamic phase classification, difficult. This work focuses on a set of measurements made by the Clouds Aerosol Polarization and Backscatter Lidar at Summit, Greenland and its retrieval algorithms, which use both analog and photon counting as well as orthogonal and non-orthogonal polarization retrievals to extend dynamic range and improve overall measurement quality and quantity. Presented here is an algorithm for cloud parameter retrievals that leverages enhanced dynamic range retrievals to classify mixed-phase clouds. This best guess retrieval is compared to co-located instruments for validation.

  4. Empirical and modeled synoptic cloud climatology of the Arctic Ocean

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Newell, J. P.; Schweiger, A.; Crane, R. G.

    1986-01-01

    A set of cloud cover data was developed for the Arctic during the climatically important spring/early summer transition months. In parallel with the determination of mean monthly cloud conditions, data for different synoptic pressure patterns were also composited as a means of evaluating the role of synoptic variability in Arctic cloud regimes. In order to carry out this analysis, a synoptic classification scheme was developed for the Arctic using an objective typing procedure. A second major objective was to analyze model output of pressure fields and cloud parameters from a control run of the Goddard Institute for Space Studies climate model for the same area and to intercompare the synoptic climatology of the model with that based on the observational data.

  5. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2, with additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds, two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters; 15 homogeneous training areas representing each cover class were chosen, and classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which uses both the optical and ALS data. Elevation layers (intensity, first/last return, etc.) were used at the segmentation stage with appropriate weights, giving more precise and unambiguous segment (object) boundaries. As a result of the classification, five land cover classes (buildings, water, high vegetation, low vegetation, and others) were extracted. Both pixel-based image analysis and OBIA were conducted with a minimum mapping unit of 10 m2. Results were validated against a manual classification at random points (80 per test area); the reference data set was manually interpreted using orthophotomaps and expert knowledge of the test site areas.

  6. Crop classification and mapping based on Sentinel missions data in cloud environment

    NASA Astrophysics Data System (ADS)

    Lavreniuk, M. S.; Kussul, N.; Shelestov, A.; Vasiliev, V.

    2017-12-01

    The availability of high resolution satellite imagery (Sentinel-1/2/3, Landsat) over large territories opens new opportunities in agricultural monitoring. In particular, it becomes feasible to solve crop classification and crop mapping tasks at country and regional scale using time series of heterogeneous satellite imagery. In this case, however, we face the problem of Big Data: dealing with time series of high resolution (10 m) multispectral imagery requires downloading and processing huge volumes of data. The solution is to move the "processing chain" closer to the data itself to drastically shorten data transfer times. A further advantage of this approach is the possibility to parallelize the data processing workflow and efficiently implement machine learning algorithms. This can be done on a cloud platform where the Sentinel imagery is stored. In this study, we investigate the usability and efficiency of two different cloud platforms, Amazon and Google, for crop classification and crop mapping problems. Two pilot areas were investigated - Ukraine and England. Google provides the user-friendly Google Earth Engine environment for Earth observation applications, with many data processing and machine learning tools already deployed, while Amazon offers much more flexibility in implementing one's own workflow. A detailed analysis of the pros and cons will be given in the presentation.

  7. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    The demand for highly precise maps grows with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. Understanding road environments and making decisions for autonomous driving is important, since robust localization is one of the critical challenges for the autonomous driving car. One key data source is a lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a lidar mounted on a vehicle and classify objects on the road for the highly precise map. In particular, we propose combining a feature descriptor with a machine learning classification algorithm. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the output will be utilized to generate a highly precise road map.
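
    A minimal sketch of the described combination, geometric features derived from per-point surface normals feeding a Support Vector Machine; normal estimation is assumed to be done already, and the feature names and SVM parameters are hypothetical:

```python
from sklearn.svm import SVC

# X_train: (N, F) per-point features, e.g. [normal_z, planarity, height_above_ground, intensity]
# y_train: (N,) object labels from a small hand-labelled subset of the point cloud.
def train_point_classifier(X_train, y_train):
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF-kernel Support Vector Machine
    clf.fit(X_train, y_train)
    return clf

# labels = train_point_classifier(X_train, y_train).predict(X_all)
```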

  8. Sources of variation in Landsat autocorrelation

    NASA Technical Reports Server (NTRS)

    Craig, R. G.; Labovitz, M. L.

    1980-01-01

    Analysis of sixty-four scan lines representing diverse conditions across satellites, channels, scanners, locations and cloud cover confirms that Landsat data are autocorrelated and consistently follow an ARIMA(1,0,1) pattern. The AR parameter varies significantly with location and the MA coefficient with cloud cover. Maximum likelihood classification functions are considerably in error unless this autocorrelation is compensated for in sampling.
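
    An ARIMA(1,0,1) fit of the kind described can be reproduced with standard time-series tooling; the sketch below fits the model to a synthetic stand-in scan line generated from known ARMA coefficients, purely for illustration:

```python
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

# Synthetic stand-in for one scan line with AR(1) and MA(1) structure.
scan_line = arma_generate_sample(ar=[1.0, -0.7], ma=[1.0, 0.3], nsample=512)

result = ARIMA(scan_line, order=(1, 0, 1)).fit()
print(result.params)   # constant, AR coefficient, MA coefficient, residual variance
```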

  9. Application of Template Matching for Improving Classification of Urban Railroad Point Clouds

    PubMed Central

    Arastounia, Mostafa; Oude Elberink, Sander

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452

  10. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery. Caribbean Journal of Science. 44(2):175-198.

    Treesearch

    E.H. Helmer; T.A. Kennaway; D.H. Pedreros; M.L. Clark; H. Marcano-Vega; L.L. Tieszen; S.R. Schill; C.M.S. Carrington

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius...

  11. A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing

    PubMed Central

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
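
    Step (ii) above, selecting training pixels from the SWIR-band difference image by a standard-deviation threshold, might look like the following sketch; it uses whole-image statistics rather than the local windows described in the paper, and the thresholds are hypothetical:

```python
import numpy as np

def training_pixels(swir_diff, k=2.0):
    """swir_diff: SWIR difference image (later date minus earlier date)."""
    mean, std = np.nanmean(swir_diff), np.nanstd(swir_diff)
    disturbed = swir_diff > mean + k * std           # SWIR reflectance rises after canopy loss
    stable = np.abs(swir_diff - mean) < 0.5 * std    # near-zero change: undisturbed forest
    return disturbed, stable
```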

  12. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    PubMed

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.

  13. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.

  14. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems involve data-intensive computing, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved 'clock time' speedups in fusing datasets on our own compute nodes and in the public Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed 'near' the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.

  15. Spectral pattern classification in lidar data for rock identification in outcrops.

    PubMed

    Campos Inocencio, Leonardo; Veronez, Mauricio Roberto; Wohnrath Tognoli, Francisco Manoel; de Souza, Marcelo Kehl; da Silva, Reginaldo Macedônio; Gonzaga, Luiz; Blum Silveira, César Leonardo

    2014-01-01

    The present study aimed to develop and implement a method for detection and classification of spectral signatures in point clouds obtained from a terrestrial laser scanner, in order to identify the presence of different rocks in outcrops and to generate a digital outcrop model. To achieve this objective, a software package based on cluster analysis, named K-Clouds, was created. This software was developed through a partnership between UNISINOS and the company V3D. The tool begins with an analysis and interpretation of a histogram of the intensity return values from a point cloud of the outcrop; the user then indicates a number of classes, and the intensity values are processed accordingly. This classified information can then be interpreted by geologists to provide a better understanding and identification of the rocks existing in the outcrop. Beyond the detection of different rocks, this work was able to detect small changes in the physical-chemical characteristics of the rocks, such as those caused by weathering or compositional changes.
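
    A minimal sketch of the underlying idea (not the K-Clouds software itself): cluster the per-point laser-return intensities into a user-chosen number of classes, here with k-means, so that a geologist can relate the resulting classes to lithologies:

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_by_intensity(intensity, n_classes=4, seed=0):
    """intensity: (N,) return intensity per point; returns one class label per point."""
    return KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(
        np.asarray(intensity).reshape(-1, 1))
```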

  16. High-latitude dust clouds LDN 183 and LDN 169: distances and extinctions

    NASA Astrophysics Data System (ADS)

    Straižys, V.; Boyle, R. P.; Zdanavičius, J.; Janusz, R.; Corbally, C. J.; Munari, U.; Andersson, B.-G.; Zdanavičius, K.; Kazlauskas, A.; Maskoliūnas, M.; Černis, K.; Macijauskas, M.

    2018-03-01

    Interstellar extinction is investigated in a 2°× 2° area containing the dust and molecular clouds LDN 183 (MBM 37) and LDN 169, which are located at RA = 15h 54m, Dec = - 3°. The study is based on a photometric classification in spectral and luminosity classes of 782 stars selected from the catalogs of 1299 stars down to V = 20 mag observed in the Vilnius seven-color system. For control, the MK types for the 18 brightest stars with V between 8.5 and 12.8 mag were determined spectroscopically. For 14 stars, located closer than 200 pc, distances were calculated from trigonometric parallaxes taken from the Gaia Data Release 1. For about 70% of the observed stars, two-dimensional spectral types, interstellar extinctions AV, and distances were determined. Using 57 stars closer than 200 pc, we estimate that the front edge of the clouds begins at 105 ± 8 pc. The extinction layer in the vicinities of the clouds can be about 20 pc thick. In the outer parts of the clouds and between the clouds, the extinction is 0.5-2.0 mag. Behind the Serpens/Libra clouds, the extinction range does not increase; this means that the dust layer at 105 pc is a single extinction source. Full Tables 1 and 2 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A9

  17. Negative Aerosol-Cloud re Relationship From Aircraft Observations Over Hebei, China

    NASA Astrophysics Data System (ADS)

    Zhao, Chuanfeng; Qiu, Yanmei; Dong, Xiaobo; Wang, Zhien; Peng, Yiran; Li, Baodong; Wu, Zhihui; Wang, Yang

    2018-01-01

    Using observations from six flights in September 2015 over Hebei, China, this study shows a robust negative aerosol-cloud droplet effective radius (re) relationship for liquid clouds, which differs from previous studies that found a positive aerosol-cloud re relationship over East China using satellite observations. A total of 27 cloud samples was analyzed, classified into clean and polluted conditions using the lower and upper thirds of the aerosol concentration at 200 m below the cloud bases. By normalizing the profiles of cloud droplet re, we found significantly smaller values under polluted than under clean conditions at most heights. Moreover, the averaged profiles of cloud liquid water content (LWC) show larger values under polluted than clean conditions, indicating an even stronger negative aerosol-cloud re relationship if LWC is kept constant. The droplet size distributions further demonstrate that more droplets concentrate within smaller size ranges under polluted conditions. Quantitatively, the aerosol-cloud interaction is found to be around 0.10-0.19 for the study region.

  18. Proceedings of the Cloud Impacts on DoD Operations and Systems 1993 Conference (CIDOS - 93) Held in Fort Belvoir, Virginia on 16-19 November 1993

    DTIC Science & Technology

    1994-07-01

    [Excerpt fragments only: cloud type maps were developed from LWIR imagery (preliminary calibration) and local lapse rates using a supervised multi-spectral classification procedure; cited works include an Atmospherics Conference paper (R. Lee, chairman, 251-260) and Tofsted, D. H., 1993, "Effects of Nonuniform Aerosol Forward Scattering on Imagery"; cloud tests include whether the channel 4 brightness temperature is high relative to the predicted clear-scene temperature and the LWIR channel difference.]

  19. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane was carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to its independence of scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
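
    The raster stage described above, thresholding plus morphological operations on the projected height image, can be sketched roughly as follows; the gradient operator and the curb-step thresholds are hypothetical stand-ins for the authors' exact processing:

```python
from scipy import ndimage

def curb_candidate_mask(height_img, min_step=0.05, max_step=0.25):
    """height_img: 2D raster of heights (m) from projecting the point cloud onto XY."""
    grad = ndimage.morphological_gradient(height_img, size=3)   # local height jump
    mask = (grad > min_step) & (grad < max_step)                # keep curb-sized steps only
    mask = ndimage.binary_opening(mask)                         # remove isolated pixels
    mask = ndimage.binary_closing(mask, iterations=2)           # bridge small gaps along curbs
    return mask
```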

  20. An Objective Classification of Saturn Cloud Features from Cassini ISS Images

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony D.; Barbara, John M.

    2016-01-01

    A k-means clustering algorithm is applied to Cassini Imaging Science Subsystem continuum and methane band images of Saturn's northern hemisphere to objectively classify regional albedo features and aid in their dynamical interpretation. The procedure is based on a technique applied previously to visible-infrared images of Earth. It provides a new perspective on giant planet cloud morphology and its relationship to the dynamics, and a meteorological context for the analysis of other types of simultaneous Saturn observations. The method identifies 6 clusters that exhibit distinct morphology, vertical structure, and preferred latitudes of occurrence. These correspond to areas dominated by deep convective cells; low contrast areas, some including thinner and thicker clouds possibly associated with baroclinic instability; regions with possible isolated thin cirrus clouds; darker areas due to thinner low level clouds or clearer skies due to downwelling, or due to absorbing particles; and fields of relatively shallow cumulus clouds. The spatial associations among these cloud types suggest that dynamically, there are three distinct types of latitude bands on Saturn: deep convectively disturbed latitudes in cyclonic shear regions poleward of the eastward jets; convectively suppressed regions near and surrounding the westward jets; and baroclinically unstable latitudes near eastward jet cores and in the anticyclonic regions equatorward of them. These are roughly analogous to some of the features of Earth's tropics, subtropics, and midlatitudes, respectively. This classification may be more useful for dynamics purposes than the traditional belt-zone partitioning. Temporal variations of feature contrast and cluster occurrence suggest that the upper tropospheric haze in the northern hemisphere may have thickened by 2014. The results suggest that routine use of clustering may be a worthwhile complement to many different types of planetary atmospheric data analysis.
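
    The clustering step can be sketched generically as k-means on per-pixel feature vectors built from co-registered continuum and methane-band brightness maps; this is a generic illustration, not the authors' full pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cloud_features(continuum_img, methane_img, k=6, seed=0):
    """Group map pixels into k cloud regimes from two co-registered brightness maps."""
    X = np.column_stack([continuum_img.ravel(), methane_img.ravel()])
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    return labels.reshape(continuum_img.shape)
```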

  1. Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System

    NASA Technical Reports Server (NTRS)

    Penaloza, Manuel A.; Welch, Ronald M.

    1996-01-01

    Labeling, feature selection, and the choice of classifier are critical elements for classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features into a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected. Then, rules using these selected features are defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best set of features: not only does it produce the highest classification accuracy, but it also has the lowest computation requirements. A reduction of the set of features produced by the divergence method using the fuzzy expert system results in an overall classification accuracy of over 95%. However, this increase in accuracy has a high computation cost.

  2. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve the rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement (GPM) mission) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has a high spatial and temporal resolution, it overestimates or underestimates rainfall due to some limitations. PERSIANN-CCS estimates rainfall based on information extracted from IR channels at three different temperature threshold levels (220, 235, and 253 K). The algorithm relies only on infrared data to estimate rainfall indirectly, which causes it to miss rainfall from warm clouds and to produce false estimates for non-precipitating cold clouds. In this research the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels, has been investigated. By using multiple sensors, precipitation can be estimated from information extracted from multiple channels. Also, instead of using an exponential function for estimating rainfall from cloud top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  3. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

    We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically-homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field classifier (CRF) in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher confidence data term. We demonstrate the improvement provided by our method over two publicly-available large-scale data sets.

  4. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Co-occurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
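
    For reference, the Sum and Difference Histogram approach replaces the full co-occurrence matrix with two one-dimensional histograms of grey-level sums and differences for a given displacement; a minimal sketch computing Unser-style mean and contrast measures (illustrative, not the exact feature set used in the paper):

```python
import numpy as np

def sadh_features(img, dx=1, dy=0, levels=256):
    """Texture mean and contrast from sum/difference histograms at displacement (dx, dy)."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].astype(int)
    b = img[dy:, dx:].astype(int)
    hs = np.bincount((a + b).ravel(), minlength=2 * levels - 1) / a.size
    hd = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1) / a.size
    i_s = np.arange(hs.size)                     # possible sum values 0 .. 2*(levels-1)
    j_d = np.arange(hd.size) - (levels - 1)      # possible difference values
    mean = 0.5 * (i_s * hs).sum()                # texture mean
    contrast = (j_d ** 2 * hd).sum()             # contrast from the difference histogram
    return mean, contrast
```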

  5. Rabi cropped area forecasting of parts of Banaskatha District, Gujarat using MRS RISAT-1 SAR data

    NASA Astrophysics Data System (ADS)

    Parekh, R. A.; Mehta, R. L.; Vyas, A.

    2016-10-01

    Radar sensors can be used for large-scale vegetation mapping and monitoring using backscatter coefficients in different polarisations and wavelength bands. Due to cloud and haze interference, optical images are not always available at all phenological stages important for crop discrimination. Moreover, in cloud-prone areas, an exclusively SAR-based approach would provide an operational solution. This paper presents the results of classifying cropped and non-cropped areas using multi-temporal SAR images. Dual-polarised C-band RISAT MRS (Medium Resolution ScanSAR mode) data were acquired on 9 Dec. 2012, 28 Jan. 2013 and 22 Feb. 2013 at 18 m spatial resolution. Intensity images of the two polarisations (HH, HV) were extracted and converted into backscattering coefficient images. Cross-polarisation ratio (CPR) images and a radar fractional vegetation density index (RFDI) were created from the temporal data and integrated with the multi-temporal images. Signatures of cropped and un-cropped areas were used for maximum likelihood supervised classification. Separability of the cropped and uncropped classes using different polarisation combinations was assessed and a classification accuracy analysis was carried out. An FCC (False Color Composite) prepared using the best three SAR polarisations in the data set was compared with a LISS-III (Linear Imaging Self-Scanning System-III) image. The acreage under rabi crops was estimated. The methodology was developed for the rabi cropped area owing to the availability of SAR data for the rabi season, though the approach is even more relevant for acreage estimation of kharif crops, when frequent cloud cover prevails during the monsoon season and optical sensors fail to deliver good-quality images.
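
    The two indices named above reduce to simple band combinations of the calibrated backscatter (in linear power units); a hedged sketch, noting that the RFDI shown uses the commonly quoted (HH - HV)/(HH + HV) form and the authors' exact definition may differ:

```python
import numpy as np

def cpr(hh, hv):
    """Cross-polarisation ratio: cross-pol (HV) over co-pol (HH) backscatter."""
    return np.asarray(hv) / np.asarray(hh)

def rfdi(hh, hv):
    """Normalised HH/HV difference, as commonly used for vegetation density mapping."""
    hh, hv = np.asarray(hh), np.asarray(hv)
    return (hh - hv) / (hh + hv)
```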

  6. Applications of ISES for meteorology

    NASA Technical Reports Server (NTRS)

    Try, Paul D.

    1990-01-01

    The results are summarized from an initial assessment of the potential real-time meteorological requirements for the data from Eos systems. Eos research scientists associated with facility instruments, investigator instruments, and interdisciplinary groups with data related to meteorological support were contacted, along with those from the normal operational user and technique development groups. Two types of activities indicated the greatest need for real-time Eos data: technology transfer groups (e.g., NOAA's Forecasting System Laboratory and the DOD development laboratories), and field testing groups with airborne operations. A special concern was expressed by several non-U.S. participants who desire a direct downlink to be sure of rapid receipt of the data for their area of interest. Several potential experiments or demonstrations are recommended for ISES which include support for hurricane/typhoon forecasting, space shuttle reentry, severe weather forecasting (using microphysical cloud classification techniques), field testing, and quick reaction of instrumented aircraft to measure such events as polar stratospheric clouds and volcanic eruptions.

  7. The application of satellite data in the determination of ocean temperatures and cloud characteristics and statistics

    NASA Technical Reports Server (NTRS)

    Curran, R. J.; Salomonson, V. V.; Shenk, W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The major shortcoming of the data was the loss of the infrared radiances from the S191 spectrometer. The cloud thermodynamic phase determination procedure was derived and tested with the data collected by the S192 multispectral scanner. Results of the test indicate a large fraction of the data could be classified thermodynamically. An added bonus was the inclusion of snow in the classification approach. The conclusion to be drawn from this portion of the effort is that in most cases considered ice clouds, liquid water droplet clouds, and snow fields can be spectroscopically separated to a high degree of accuracy.

  8. Feasibility study of a zero-gravity (orbital) atmospheric cloud physics experiments laboratory

    NASA Technical Reports Server (NTRS)

    Hollinden, A. B.; Eaton, L. R.

    1972-01-01

    A feasibility and concepts study for a zero-gravity (orbital) atmospheric cloud physics experiment laboratory is discussed. The primary objective was to define a set of cloud physics experiments which will benefit from the near zero-gravity environment of an orbiting spacecraft, identify merits of this environment relative to those of groundbased laboratory facilities, and identify conceptual approaches for the accomplishment of the experiments in an orbiting spacecraft. Solicitation, classification and review of cloud physics experiments for which the advantages of a near zero-gravity environment are evident are described. Identification of experiments for potential early flight opportunities is provided. Several significant accomplishments achieved during the course of this study are presented.

  9. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because the particles are a mixture of different complexes. In this case, subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  10. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept and prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed.
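
    As a rough, self-contained illustration of the map/reduce-on-named-arrays pattern described above (a toy sketch in plain Python with a synthetic shard loader, not the SciReduce implementation), the following shows a map step that bins one year of retrievals onto a lat/lon grid and a reduce step that aggregates the yearly bundles into a climatology:

    # Minimal sketch of the map/reduce-on-named-arrays pattern described above.
    # The shard loader is a synthetic placeholder (e.g. for an OPeNDAP pull).
    import numpy as np
    from multiprocessing import Pool

    LAT_BINS, LON_BINS = 180, 360

    def load_shard(year):
        # Placeholder for pulling one year of Level-2 retrievals.
        rng = np.random.default_rng(year)
        lat = rng.uniform(-90, 90, 10000)
        lon = rng.uniform(-180, 180, 10000)
        wv = rng.gamma(2.0, 10.0, 10000)        # fake water vapor values
        return lat, lon, wv

    def map_year(year):
        """Map step: bin one year of retrievals onto a lat/lon grid."""
        lat, lon, wv = load_shard(year)
        i = np.clip((lat + 90).astype(int), 0, LAT_BINS - 1)
        j = np.clip((lon + 180).astype(int), 0, LON_BINS - 1)
        total = np.zeros((LAT_BINS, LON_BINS))
        count = np.zeros((LAT_BINS, LON_BINS))
        np.add.at(total, (i, j), wv)
        np.add.at(count, (i, j), 1)
        return {"sum": total, "count": count}   # bundle of named arrays

    def reduce_years(bundles):
        """Reduce step: aggregate yearly bundles into a climatological mean."""
        total = sum(b["sum"] for b in bundles)
        count = sum(b["count"] for b in bundles)
        return np.divide(total, count, out=np.full_like(total, np.nan), where=count > 0)

    if __name__ == "__main__":
        with Pool(4) as pool:
            yearly = pool.map(map_year, range(2003, 2013))
        climatology = reduce_years(yearly)
        print(climatology.shape)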

  11. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python on a PostgreSQL database and combines semantic and spatial concepts for basic hybrid queries on different point clouds.

  12. Evaluation of Decision Trees for Cloud Detection from AVHRR Data

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Nemani, Ramakrishna

    2005-01-01

    Automated cloud detection and tracking is an important step in assessing changes in radiation budgets associated with global climate change via remote sensing. Data products based on satellite imagery are available to the scientific community for studying trends in the Earth's atmosphere. The data products include pixel-based cloud masks that assign cloud-cover classifications to pixels. Many cloud-mask algorithms have the form of decision trees. The decision trees employ sequential tests that scientists designed based on empirical astrophysics studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In a previous study we compared automatically learned decision trees to cloud masks included in Advanced Very High Resolution Radiometer (AVHRR) data products from the year 2000. In this paper we report the replication of the study for five years of data, and for a gold standard based on surface observations performed by scientists at weather stations in the British Isles. For our sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks (p < 0.001).
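
    The kind of comparison described above can be illustrated with a small, hypothetical scikit-learn sketch: a decision tree is learned on synthetic per-pixel features and compared against a fixed-threshold rule-based mask. The features, thresholds and labels are stand-ins, not those of the study:

    # Toy sketch: learn a pixel-level cloud mask with a decision tree and compare it
    # to a rule-based mask. Features and labels are synthetic stand-ins for AVHRR
    # channel values and surface-observation "gold standard" labels.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 20000
    # Fake per-pixel features: visible reflectance, near-IR reflectance, two thermal BTs.
    X = rng.normal(size=(n, 4))
    # Fake ground truth: clouds tend to be bright and cold in this synthetic world.
    y = ((X[:, 0] > 0.2) & (X[:, 3] < 0.0)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)

    # A hand-crafted "operational" mask using fixed thresholds, for comparison.
    rule_mask = ((X_te[:, 0] > 0.5) & (X_te[:, 3] < -0.5)).astype(int)

    print("learned tree accuracy:", accuracy_score(y_te, tree.predict(X_te)))
    print("rule-based mask accuracy:", accuracy_score(y_te, rule_mask))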

  13. Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.

    PubMed

    Oh, Sang-Il; Kang, Hang-Bong

    2017-01-22

    To understand driving environments effectively, it is important to achieve accurate detection and classification of the objects sensed by intelligent vehicle systems. Object detection is performed to localize objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing the information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers operating on 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs, which use more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent the data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for the charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate the proposed method on the KITTI benchmark dataset, detecting and classifying three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our method extracted approximately 500 proposals from a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset.

  14. Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems

    PubMed Central

    Oh, Sang-Il; Kang, Hang-Bong

    2017-01-01

    To understand driving environments effectively, it is important to achieve accurate detection and classification of the objects sensed by intelligent vehicle systems. Object detection is performed to localize objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing the information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers operating on 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs, which use more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent the data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for the charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate the proposed method on the KITTI benchmark dataset, detecting and classifying three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our method extracted approximately 500 proposals from a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset. PMID:28117742
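
    The decision-level fusion idea common to the two records above can be sketched schematically as a weighted combination of the per-class probability vectors produced by the two unary classifiers; the weights and class scores below are illustrative and are not the fusion rule reported in the paper:

    # Schematic decision-level fusion: combine per-class probability vectors produced
    # by two independent unary classifiers (e.g. an image CNN and a point-cloud CNN).
    import numpy as np

    CLASSES = ["car", "pedestrian", "cyclist"]

    def fuse(p_image, p_lidar, w_image=0.6, w_lidar=0.4):
        """Weighted sum of class posteriors, renormalized to a proper distribution."""
        fused = w_image * np.asarray(p_image) + w_lidar * np.asarray(p_lidar)
        return fused / fused.sum()

    # Example: the image classifier is confident, the LiDAR classifier less so.
    p_img = [0.75, 0.15, 0.10]
    p_lid = [0.40, 0.35, 0.25]
    p = fuse(p_img, p_lid)
    print(dict(zip(CLASSES, np.round(p, 3))), "->", CLASSES[int(np.argmax(p))])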

  15. A combined spectral and object-based approach to transparent cloud removal in an operational setting for Landsat ETM+

    NASA Astrophysics Data System (ADS)

    Watmough, Gary R.; Atkinson, Peter M.; Hutton, Craig W.

    2011-04-01

    The automated cloud cover assessment (ACCA) algorithm has provided automated estimates of cloud cover for the Landsat ETM+ mission since 2001. However, due to the lack of a band around 1.375 μm, cloud edges and transparent clouds such as cirrus cannot be detected. Use of Landsat ETM+ imagery for terrestrial land analysis is further hampered by the relatively long revisit period due to a nadir-only viewing sensor. In this study, the ACCA threshold parameters were altered to minimise omission errors in the cloud masks. Object-based analysis was used to reduce the commission errors from the extended cloud filters. The method resulted in the removal of optically thin cirrus cloud and cloud edges, which are often missed by other methods in sub-tropical areas. Although not fully automated, the principles of the method developed here provide an opportunity for using otherwise sub-optimal or completely unusable Landsat ETM+ imagery for operational applications. Where specific images are required for particular research goals, the method can be used to remove cloud and transparent cloud, helping to reduce bias in subsequent land cover classifications.

  16. Patient identification using a near-infrared laser scanner

    NASA Astrophysics Data System (ADS)

    Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris

    2017-03-01

    We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
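
    A minimal sketch of the matching step is given below, assuming hypothetical reference data: a nearest-neighbour residual stands in for the full ICP registration, and the (spatial error, tissue-thickness error) pair is used to select the closest reference subject:

    # Sketch of the two-feature matching step: compute a spatial error and a
    # tissue-thickness error against each subject's reference point cloud and pick
    # the reference with the smallest combined feature distance.
    import numpy as np
    from scipy.spatial import cKDTree

    def feature_errors(measured_xyz, measured_thickness, ref_xyz, ref_thickness):
        tree = cKDTree(ref_xyz)
        dist, idx = tree.query(measured_xyz)            # closest reference point per sample
        spatial_err = float(np.mean(dist))              # mean point-to-point distance
        thickness_err = float(np.mean(np.abs(measured_thickness - ref_thickness[idx])))
        return np.array([spatial_err, thickness_err])

    def identify(measured_xyz, measured_thickness, references):
        """references: dict mapping subject id -> (ref_xyz, ref_thickness) arrays."""
        feats = {sid: feature_errors(measured_xyz, measured_thickness, rx, rt)
                 for sid, (rx, rt) in references.items()}
        # Choose the subject whose reference yields the smallest feature-vector norm.
        return min(feats, key=lambda sid: np.linalg.norm(feats[sid])), feats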

  17. Social Sensors (S2ensors): A Kind of Hardware-Software-Integrated Mediators for Social Manufacturing Systems Under Mass Individualization

    NASA Astrophysics Data System (ADS)

    Ding, Kai; Jiang, Ping-Yu

    2017-09-01

    Currently, little work has been devoted to the mediators and tools for multi-role production interactions in the mass individualization environment. This paper proposes a kind of hardware-software-integrated mediator called social sensors (S2ensors) to facilitate production interactions among customers, manufacturers, and other stakeholders in social manufacturing systems (SMS). The concept, classification, operational logic, and formalization of S2ensors are clarified. S2ensors collect objective data from physical sensors and subjective data from sensory input in mobile Apps, merge them into meaningful information for decision-making, and finally feed the decisions back for reaction and execution. Then, an S2ensors-Cloud platform is discussed that integrates different S2ensors to work for SMSs in an autonomous way. A demonstrative case is studied by developing a prototype system, and the results show that S2ensors and the S2ensors-Cloud platform can help multi-role stakeholders interact and collaborate on production tasks. It reveals the mediator-enabled mechanisms and methods for production interactions among stakeholders in SMS.

  18. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    PubMed Central

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time-consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute Cloud (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieved an over 25-fold speedup compared to sequential execution. PMID:26958172

  19. Looking at Earth from Space: Teacher's Guide with Activities for Earth and Space Science

    NASA Technical Reports Server (NTRS)

    Steele, Colleen (Editor); Steele, Colleen; Ryan, William F.

    1995-01-01

    The Maryland Pilot Earth Science and Technology Education Network (MAPS-NET) project was sponsored by the National Aeronautics and Space Administration (NASA) to enrich teacher preparation and classroom learning in the area of Earth system science. This publication includes a teacher's guide that replicates material taught during a graduate-level course of the project and activities developed by the teachers. The publication was developed to provide teachers with a comprehensive approach to using satellite imagery to enhance science education. The teacher's guide is divided into topical chapters and enables teachers to expand their knowledge of the atmosphere, common weather patterns, and remote sensing. Topics include: weather systems and satellite imagery including mid-latitude weather systems; wave motion and the general circulation; cyclonic disturbances and baroclinic instability; clouds; additional common weather patterns; satellite images and the internet; environmental satellites; orbits; and ground station set-up. Activities are listed by suggested grade level and include the following topics: using weather symbols; forecasting the weather; cloud families and identification; classification of cloud types through infrared Automatic Picture Transmission (APT) imagery; comparison of visible and infrared imagery; cold fronts; to ski or not to ski (imagery as a decision making tool), infrared and visible satellite images; thunderstorms; looping satellite images; hurricanes; intertropical convergence zone; and using weather satellite images to enhance a study of the Chesapeake Bay. A list of resources is also included.

  20. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.

  1. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Hua, H.

    2012-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.

  2. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.

  3. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would make it possible to bring this technology, and with it the knowledge it provides, to low-resource settings, offering state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without consuming too many system resources, too much computation time, or too much battery on the end-point device. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare it to other image formats in terms of size, noise and correctness. We also present the cloud configuration used for segmenting the video into frames that can later be used for further analysis.
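
    The server-side segmentation of the uploaded capture into frames can be sketched as follows; the file name is hypothetical and the grayscale conversion is only a placeholder for whatever per-band handling the actual pipeline applies:

    # Sketch of the server-side step that splits an uploaded multispectral capture
    # (stored as an mp4, one spectral sample per frame) into individual frames.
    import cv2
    import numpy as np

    def extract_frames(video_path):
        cap = cv2.VideoCapture(video_path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Convert to grayscale; each frame is treated as one spectral sample.
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()
        return np.stack(frames) if frames else np.empty((0, 0, 0))

    if __name__ == "__main__":
        cube = extract_frames("capture.mp4")   # shape: (n_frames, height, width)
        print(cube.shape, cube.dtype)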

  4. Microphysical Cloud Regimes used as a tool to study Aerosol-Cloud-Precipitation-Radiation interactions

    NASA Astrophysics Data System (ADS)

    Cho, N.; Oreopoulos, L.; Lee, D.

    2017-12-01

    The presentation will examine whether the diagnostic relationships between aerosol and cloud-affected quantities (precipitation, radiation) obtained from sparse temporal resolution measurements from polar orbiting satellites can potentially demonstrate actual aerosol effects on clouds with appropriate analysis. The analysis relies exclusively on Level-3 (gridded) data and comprises systematic cloud classification in terms of "microphysical cloud regimes" (µCRs), aerosol optical depth (AOD) variations relative to a region's local seasonal climatology, and exploitation of the 3-hour difference between Terra (morning) and Aqua (afternoon) overpasses. Specifically, our presentation will assess whether Aerosol-Cloud-Precipitation-Radiation interactions (ACPRI) can be diagnosed by investigating: (a) The variations with AOD of afternoon cloud-affected quantities composited by afternoon or morning µCRs; (b) µCR transition diagrams composited by morning AOD quartiles; (c) whether clouds represented by ensemble cloud effective radius - cloud optical thickness joint histograms look distinct under low and high AOD conditions when preceded or followed by specific µCRs. We will explain how our approach addresses long-standing themes of the ACPRI problem such as the optimal ways to decompose the problem by cloud class, the prevalence and detectability of 1st/2nd aerosol indirect effects and invigoration, and the effectiveness of aerosol changes in inducing cloud modification at different segments of the AOD distribution.

  5. Cloud Classification in Polar and Desert Regions and Smoke Classification from Biomass Burning Using a Hierarchical Neural Network

    NASA Technical Reports Server (NTRS)

    Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald

    1996-01-01

    This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometer (AVHRR) data from three scene types: polar, desert and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present on a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al. 1992, 1993), for standard back propagation (Watters et al. 1993), and for a hierarchical approach (Corwin et al. 1994) for polar data. This research uses a hierarchical neural network with don't care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision network, performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained. The approach of using don't care nodes results from the difficulty in generating extremely complex surfaces in order to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert and smoke data sets.
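
    A minimal sketch of the switch/leaf structure is given below, with scikit-learn MLPs standing in for the paper's networks and an externally supplied class-to-group mapping (e.g. obtained from confusion-matrix clustering); each group is assumed to contain at least two classes:

    # Minimal sketch of a hierarchical classifier: a switching network first picks a
    # group (leaf) and the leaf network then classifies within that group.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_hierarchy(X, y, groups):
        """X, y: numpy arrays; groups: dict mapping class label -> group id.
        Each group is assumed to contain at least two classes."""
        g = np.array([groups[c] for c in y])
        switch = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, g)
        leaves = {}
        for gid in np.unique(g):
            mask = g == gid
            leaves[gid] = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[mask], y[mask])
        return switch, leaves

    def predict_hierarchy(switch, leaves, X):
        gid = switch.predict(X)
        out = np.empty(len(X), dtype=object)
        for g_val in np.unique(gid):
            sel = gid == g_val
            out[sel] = leaves[g_val].predict(X[sel])
        return out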

  6. Classification of clouds sampled at the puy de Dôme (France) based on 10 yr of monitoring of their physicochemical properties

    NASA Astrophysics Data System (ADS)

    Deguillaume, L.; Charbouillot, T.; Joly, M.; Vaïtilingom, M.; Parazols, M.; Marinoni, A.; Amato, P.; Delort, A.-M.; Vinatier, V.; Flossmann, A.; Chaumerliac, N.; Pichon, J. M.; Houdier, S.; Laj, P.; Sellegri, K.; Colomb, A.; Brigante, M.; Mailhot, G.

    2014-02-01

    Long-term monitoring of the chemical composition of clouds (73 cloud events representing 199 individual samples) sampled at the puy de Dôme (pdD) station (France) was performed between 2001 and 2011. Physicochemical parameters, as well as the concentrations of the major organic and inorganic constituents, were measured and analyzed by multicomponent statistical analysis. Along with the corresponding back-trajectory plots, this allowed for distinguishing four different categories of air masses reaching the summit of the pdD: polluted, continental, marine and highly marine. The statistical analysis led to the determination of criteria (concentrations of inorganic compounds, pH) that differentiate each category of air masses. Highly marine clouds exhibited high concentrations of Na+ and Cl-; the marine category presented lower concentrations of ions but a more elevated pH. Finally, the two remaining clusters were classified as "continental" and "polluted"; these clusters had the second-highest and highest levels of NH4+, NO3-, and SO42-, respectively. This unique data set of cloud chemical composition is then discussed as a function of this classification. Total organic carbon (TOC) is significantly higher in polluted air masses than in the other categories, which suggests additional anthropogenic sources. Concentrations of carboxylic acids and carbonyls represent around 10% of the organic matter in all categories of air masses and are studied for their relative importance. Iron concentrations are significantly higher for polluted air masses and iron is mainly present in its oxidation state (+II) in all categories of air masses. Finally, H2O2 concentrations are much more varied in marine and highly marine clouds than in polluted clouds, which are characterized by the lowest average concentration of H2O2. This data set provides concentration ranges of main inorganic and organic compounds for modeling purposes on multiphase cloud chemistry.
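
    As a purely illustrative sketch of this kind of grouping (not the authors' statistical method), standardized ion concentrations and pH can be clustered into air-mass categories, for example with k-means; the sample values below are invented:

    # Illustrative sketch of grouping cloud-water samples into air-mass categories
    # from their inorganic composition. KMeans on standardized values stands in for
    # the multicomponent statistical analysis used in the study.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Columns: Na+, Cl-, NH4+, NO3-, SO4(2-) (arbitrary units) and pH, one row per sample.
    samples = np.array([
        [310.0, 340.0,  20.0,  15.0,  18.0, 6.1],   # highly marine-like
        [ 90.0, 100.0,  35.0,  25.0,  30.0, 6.4],   # marine-like
        [ 20.0,  15.0, 160.0, 140.0, 120.0, 5.2],   # continental-like
        [ 25.0,  20.0, 310.0, 280.0, 260.0, 4.6],   # polluted-like
    ])

    X = StandardScaler().fit_transform(samples)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(labels)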

  7. Application of Mls Data to the Assessment of Safety-Related Features in the Surrounding Area of Automatically Detected Pedestrian Crossings

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Sánchez-Rodríguez, A.; González-deSantos, L. M.

    2018-05-01

    During the last few years, there has been a huge methodological development regarding the automatic processing of 3D point cloud data acquired by both terrestrial and aerial mobile mapping systems, motivated by the improvement of surveying technologies and hardware performance. This paper presents a methodology that, in a first place, extracts geometric and semantic information regarding the road markings within the surveyed area from Mobile Laser Scanning (MLS) data, and then employs it to isolate street areas where pedestrian crossings are found and, therefore, pedestrians are more likely to cross the road. Then, different safety-related features can be extracted in order to offer information about the adequacy of the pedestrian crossing regarding its safety, which can be displayed in a Geographical Information System (GIS) layer. These features are defined in four different processing modules: Accessibility analysis, traffic lights classification, traffic signs classification, and visibility analysis. The validation of the proposed methodology has been carried out in two different cities in the northwest of Spain, obtaining both quantitative and qualitative results for pedestrian crossing classification and for each processing module of the safety assessment on pedestrian crossing environments.

  8. Utilizing Android and the Cloud Computing Environment to Increase Situational Awareness for a Mobile Distributed Response

    DTIC Science & Technology

    2012-03-01

    Maintaining an accurate Common Operational Picture (COP) is a strategic requirement for... By using a common communication technology there is no need to develop a complicated communications plan and generate ad-hoc communications... Keywords: Android Programming, Cloud Computing, Common Operating Picture, Web Programming.

  9. Classifying stages of cirrus life-cycle evolution

    NASA Astrophysics Data System (ADS)

    Urbanek, Benedikt; Groß, Silke; Schäfler, Andreas; Wirth, Martin

    2018-04-01

    Airborne lidar backscatter data is used to determine in- and out-of-cloud regions. Lidar measurements of water vapor together with model temperature fields are used to calculate relative humidity over ice (RHi). Based on temperature and RHi we identify different stages of cirrus evolution: homogeneous and heterogeneous freezing, depositional growth, ice sublimation and sedimentation. We will present our classification scheme and first applications on mid-latitude cirrus clouds.

  10. Cloud Radiative Effect in dependence on Cloud Type

    NASA Astrophysics Data System (ADS)

    Aebi, Christine; Gröbner, Julian; Kämpfer, Niklaus; Vuilleumier, Laurent

    2015-04-01

    Radiative transfer of energy in the atmosphere and the influence of clouds on the radiation budget remain the greatest sources of uncertainty in the simulation of climate change. Small changes in cloudiness and radiation can have large impacts on the Earth's climate. In order to assess the opposing effects of clouds on the radiation budget and the corresponding changes, frequent and more precise radiation and cloud observations are necessary. The role of clouds on the surface radiation budget is studied in order to quantify the longwave, shortwave and the total cloud radiative forcing in dependence on the atmospheric composition and cloud type. The study is performed for three different sites in Switzerland at three different altitude levels: Payerne (490 m asl), Davos (1'560 m asl) and Jungfraujoch (3'580 m asl). On the basis of data of visible all-sky camera systems at the three aforementioned stations in Switzerland, up to six different cloud types are distinguished (Cirrus-Cirrostratus, Cirrocumulus-Altocumulus, Stratus-Altostratus, Cumulus, Stratocumulus and Cumulonimbus-Nimbostratus). These cloud types are classified with a modified algorithm of Heinle et al. (2010). This cloud type classifying algorithm is based on a set of statistical features describing the color (spectral features) and the texture of an image (textural features) (Wacker et al. (2015)). The calculation of the fractional cloud cover information is based on spectral information of the all-sky camera data. The radiation data are taken from measurements with pyranometers and pyrgeometers at the different stations. A climatology of a whole year of the shortwave, longwave and total cloud radiative effect and its sensitivity to integrated water vapor, cloud cover and cloud type will be calculated for the three above-mentioned stations in Switzerland. For the calculation of the shortwave and longwave cloud radiative effect the corresponding cloud-free reference models developed at PMOD/WRC will be used (Wacker et al. (2013)). References: Heinle, A., A. Macke and A. Srivastav (2010) Automatic cloud classification of whole sky images, Atmospheric Measurement Techniques. Wacker, S., J. Gröbner and L. Vuilleumier (2013) A method to calculate cloud-free long-wave irradiance at the surface based on radiative transfer modeling and temperature lapse rate estimates, Theoretical and Applied Climatology. Wacker, S., J. Gröbner, C. Zysset, L. Diener, P. Tzoumanikis, A. Kazantzidis, L. Vuilleumier, R. Stöckli, S. Nyeki, and N. Kämpfer (2015) Cloud observations in Switzerland using hemispherical sky cameras, Journal of Geophysical Research.
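
    The cloud radiative effect itself is conventionally the measured all-sky flux minus a modelled cloud-free reference flux; a minimal sketch under that convention, with made-up values and without the PMOD/WRC clear-sky models, is:

    # The surface cloud radiative effect (CRE) is conventionally the measured all-sky
    # flux minus a modelled cloud-free reference flux. Values below are invented.
    import numpy as np

    def cloud_radiative_effect(lw_allsky, lw_clear, sw_allsky, sw_clear):
        """Return longwave, shortwave and total CRE (W m-2), element-wise."""
        lw_cre = np.asarray(lw_allsky) - np.asarray(lw_clear)
        sw_cre = np.asarray(sw_allsky) - np.asarray(sw_clear)
        return lw_cre, sw_cre, lw_cre + sw_cre

    lw_cre, sw_cre, tot = cloud_radiative_effect(
        lw_allsky=[330.0, 345.0], lw_clear=[290.0, 292.0],   # pyrgeometer vs. model
        sw_allsky=[420.0, 180.0], sw_clear=[600.0, 610.0])   # pyranometer vs. model
    print(lw_cre, sw_cre, tot)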

  11. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved by considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. The results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
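
    The first step can be sketched as finding peaks in the distribution of point heights; the bin width, peak threshold and synthetic two-storey point cloud below are illustrative only:

    # Sketch of the first segmentation step: candidate floor/ceiling levels appear as
    # peaks in the distribution of point heights (Z). Parameters are illustrative.
    import numpy as np
    from scipy.signal import find_peaks

    def floor_levels(z, bin_width=0.05, min_fraction=0.02):
        """Return Z values where large horizontal structures (floors/ceilings) occur."""
        bins = np.arange(z.min(), z.max() + bin_width, bin_width)
        hist, edges = np.histogram(z, bins=bins)
        peaks, _ = find_peaks(hist, height=min_fraction * len(z))
        return 0.5 * (edges[peaks] + edges[peaks + 1])

    # Synthetic two-storey example: dense slabs at z = 0 m and z = 3 m plus scattered points.
    rng = np.random.default_rng(1)
    z = np.concatenate([rng.normal(0.0, 0.01, 5000),
                        rng.normal(3.0, 0.01, 5000),
                        rng.uniform(0.0, 3.0, 2000)])
    print(floor_levels(z))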

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul

    Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate, under a joint framework, the existing solutions for HPC and to provide a thorough analysis of the characteristics of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all the HPC classes. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  13. Classifying Structures in the ISM with Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher; Goodman, A. A.; Williams, J. P.

    2011-01-01

    The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.

  14. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    PubMed

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape feature classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher-level scene understanding challenges for autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.
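
    A generic sketch of the voxel-based neighbourhood idea is shown below: points are grouped into a regular voxel grid and the covariance eigenvalues per voxel give simple linear/planar/scatter descriptors; these are not the paper's exact five feature-vector definitions:

    # Sketch of voxel-based neighbourhoods with PCA shape features. Feature
    # definitions are generic eigenvalue ratios, not the paper's exact variants.
    import numpy as np
    from collections import defaultdict

    def voxel_shape_features(points, voxel_size=0.5):
        """points: (N, 3) array. Returns {voxel index: (linearity, planarity, scattering)}."""
        keys = np.floor(points / voxel_size).astype(int)
        buckets = defaultdict(list)
        for key, p in zip(map(tuple, keys), points):
            buckets[key].append(p)
        features = {}
        for key, pts in buckets.items():
            pts = np.asarray(pts)
            if len(pts) < 3:
                continue                                      # too few points for a covariance estimate
            evals = np.linalg.eigvalsh(np.cov(pts.T))[::-1]   # descending eigenvalues
            l1, l2, l3 = np.maximum(evals, 1e-12)
            features[key] = ((l1 - l2) / l1,   # linearity (tubular)
                             (l2 - l3) / l1,   # planarity
                             l3 / l1)          # scattering
        return features

    rng = np.random.default_rng(0)
    print(len(voxel_shape_features(rng.normal(size=(1000, 3)))))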

  15. Urban forest topographical mapping using UAV LIDAR

    NASA Astrophysics Data System (ADS)

    Putut Ash Shidiq, Iqbal; Wibowo, Adi; Kusratmoko, Eko; Indratmoko, Satria; Ardhianto, Ronni; Prasetyo Nugroho, Budi

    2017-12-01

    Topographical data are highly needed by many parties, such as government institutions, mining companies and agricultural sectors. It is not just about precision; the acquisition time and data processing must also be carefully considered. In relation to forest management, a high-accuracy topographic map is necessary for planning, close monitoring and evaluating forest changes. One solution for mapping topography quickly and precisely is the use of a remote sensing system. In this study, we test high-resolution Light Detection and Ranging (LiDAR) data collected from unmanned aerial vehicles (UAV) to map topography and differentiate vegetation classes based on height in the urban forest area of the University of Indonesia (UI). Semi-automatic and manual classifications were applied to divide the point clouds into two main classes, namely ground and vegetation. There were 15,806,380 points obtained during post-processing, of which 2.39% were detected as ground.

  16. Computation offloading for real-time health-monitoring devices.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid

    2016-08-01

    Among the major challenges in the development of real-time wearable health monitoring systems is optimizing battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.

  17. Imaging Systems for Size Measurements of Debrisat Fragments

    NASA Technical Reports Server (NTRS)

    Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.

    2017-01-01

    The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments whose heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations and equipment costs presented significant challenges to the project, and a decision was made to develop our own size characterization systems. The size characterization systems consist of two automated image systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for object image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed via an alpha-shape algorithm applied to the point clouds. The average cross-sectional area is also computed for 3D objects. Both imaging systems have automated size measurements (image acquisition and image processing) driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, the automated size measurement reduces potential fragment damage/mishandling and improves accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised. For example, an additional view was added to the 2D imaging system to capture the height of the 2D object. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation. The experiences and challenges are also shared.
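
    A simplified sketch of deriving size measures from a fragment point cloud is shown below: a convex hull is built and the extents of its vertices along their principal axes serve as a proxy for the three largest orthogonal dimensions. This is not the project's exact algorithm; in particular, the hull volume stands in here for the alpha-shape volume:

    # Simplified sketch: convex hull of a fragment point cloud, then extents along
    # the hull vertices' principal axes as proxy "orthogonal dimensions".
    import numpy as np
    from scipy.spatial import ConvexHull

    def orthogonal_dimensions(points):
        hull = ConvexHull(points)
        verts = points[hull.vertices] - points[hull.vertices].mean(axis=0)
        # Principal axes of the hull vertices (right singular vectors).
        _, _, axes = np.linalg.svd(verts, full_matrices=False)
        extents = verts @ axes.T
        dims = np.sort(extents.max(axis=0) - extents.min(axis=0))[::-1]
        return dims, hull.volume          # three dimensions (largest first) and hull volume

    rng = np.random.default_rng(2)
    pts = rng.normal(size=(500, 3)) * np.array([10.0, 4.0, 0.5])   # plate-like fragment
    dims, vol = orthogonal_dimensions(pts)
    print(np.round(dims, 2), round(vol, 2))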

  18. A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.

    2014-12-01

    A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance, then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu site. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.

  19. Semantic Labelling of Ultra Dense Mls Point Clouds in Urban Road Corridors Based on Fusing Crf with Shape Priors

    NASA Astrophysics Data System (ADS)

    Yao, W.; Polewski, P.; Krzystek, P.

    2017-09-01

    In this paper, a labelling method for the semantic analysis of ultra-high point density MLS data (up to 4000 points/m²) in urban road corridors is developed, based on combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) for generating the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The foundations of the classification are various geometric features derived by means of covariance matrices and a local accumulation map of spatial coordinates based on local neighbourhoods. Meanwhile, in order to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is applied to first fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure excluded from the pre-labeled nodes. The process can be viewed as an evidence fusion step inferring a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City and are assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle and pole. The competitive classification performance can be explained by diverse factors: e.g. the above-ground height highlights the vertical dimension of houses, trees and even cars, but it is also attributed to the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification methods mainly contributed to the smoothing of the labelling by removing outliers and to the improvement in underrepresented object classes. In addition, the routine operation of context-based classification for such high density MLS data becomes much more efficient, comparable to non-contextual classification schemes.

  20. Video sensor architecture for surveillance applications.

    PubMed

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  1. Video Sensor Architecture for Surveillance Applications

    PubMed Central

    Sánchez, Jordi; Benet, Ginés; Simó, José E.

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723

  2. Land cover mapping of North and Central America—Global Land Cover 2000

    USGS Publications Warehouse

    Latifovic, Rasim; Zhu, Zhi-Liang

    2004-01-01

    The Land Cover Map of North and Central America for the year 2000 (GLC 2000-NCA), prepared by NRCan/CCRS and USGS/EROS Data Centre (EDC) as a regional component of the Global Land Cover 2000 project, is the subject of this paper. A new mapping approach for transforming satellite observations acquired by the SPOT4/VGTETATION (VGT) sensor into land cover information is outlined. The procedure includes: (1) conversion of daily data into 10-day composite; (2) post-seasonal correction and refinement of apparent surface reflectance in 10-day composite images; and (3) extraction of land cover information from the composite images. The pre-processing and mosaicking techniques developed and used in this study proved to be very effective in removing cloud contamination, BRDF effects, and noise in Short Wave Infra-Red (SWIR). The GLC 2000-NCA land cover map is provided as a regional product with 28 land cover classes based on modified Federal Geographic Data Committee/Vegetation Classification Standard (FGDC NVCS) classification system, and as part of a global product with 22 land cover classes based on Land Cover Classification System (LCCS) of the Food and Agriculture Organisation. The map was compared on both areal and per-pixel bases over North and Central America to the International Geosphere–Biosphere Programme (IGBP) global land cover classification, the University of Maryland global land cover classification (UMd) and the Moderate Resolution Imaging Spectroradiometer (MODIS) Global land cover classification produced by Boston University (BU). There was good agreement (79%) on the spatial distribution and areal extent of forest between GLC 2000-NCA and the other maps, however, GLC 2000-NCA provides additional information on the spatial distribution of forest types. The GLC 2000-NCA map was produced at the continental level incorporating specific needs of the region.

  3. Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data

    NASA Astrophysics Data System (ADS)

    Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan

    2014-10-01

    The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in North-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region, which include: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and a Kappa of 0.93, a 6% improvement in comparison with linear-polarization-only classification. However, the time of acquisition is crucial. The larger-biomass crops of canola and soybean were most accurately mapped, whereas the identification of oat and wheat was more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage based on the BBCH scale, and the HV backscatter and entropy.

  4. OMMYDCLD: a New A-train Cloud Product that Co-locates OMI and MODIS Cloud and Radiance Parameters onto the OMI Footprint

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Joiner, Joanna; Vasilkov, Alexander; Veefkind, Pepijn; Platnick, Steven; Wind, Galina

    2014-01-01

    Clouds cover approximately 60% of the earth's surface. When obscuring the satellite's field of view (FOV), clouds complicate the retrieval of ozone, trace gases and aerosols from data collected by earth observing satellites. Cloud properties associated with optical thickness, cloud pressure, water phase, drop size distribution (DSD), cloud fraction, vertical and areal extent can also change significantly over short spatio-temporal scales. The radiative transfer models used to retrieve column estimates of atmospheric constituents typically do not account for all these properties and their variations. The OMI science team is preparing to release a new data product, OMMYDCLD, which combines the cloud information from sensors on board two earth observing satellites in the NASA A-Train: Aura/OMI and Aqua/MODIS. OMMYDCLD co-locates high resolution cloud and radiance information from MODIS onto the much larger OMI pixel and combines it with parameters derived from the two other OMI cloud products: OMCLDRR and OMCLDO2. The product includes histograms for MODIS scientific data sets (SDS) provided at 1 km resolution. The statistics of key data fields - such as effective particle radius, cloud optical thickness and cloud water path - are further separated into liquid and ice categories using the optical and IR phase information. OMMYDCLD offers users of OMI data cloud information that will be useful for carrying out OMI calibration work, multi-year studies of cloud vertical structure and in the identification and classification of multi-layer clouds.

  5. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2014-11-01

    A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
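
    As a minimal illustration of the naive Bayesian idea (not the MERIS/AATSR implementation), the sketch below combines per-feature histogram likelihoods into a cloud probability; the feature name, bin edges and priors are placeholders.

      import numpy as np

      def histogram_pdf(samples, bin_edges):
          """Class-conditional density estimated from training samples (illustrative)."""
          pdf, _ = np.histogram(samples, bins=bin_edges, density=True)
          return bin_edges, pdf

      def cloud_probability(pixel, pdfs_cloudy, pdfs_clear, prior_cloudy=0.5):
          """Naive Bayes: multiply per-feature likelihoods, assuming independence."""
          like_c, like_s = prior_cloudy, 1.0 - prior_cloudy
          for name, value in pixel.items():
              edges, pdf_c = pdfs_cloudy[name]
              _, pdf_s = pdfs_clear[name]
              i = np.clip(np.searchsorted(edges, value) - 1, 0, len(pdf_c) - 1)
              like_c *= pdf_c[i] + 1e-12      # small floor avoids zero likelihoods
              like_s *= pdf_s[i] + 1e-12
          return like_c / (like_c + like_s)

      # Placeholder training data for one hypothetical feature ("btd")
      edges = np.linspace(-5.0, 5.0, 41)
      pdfs_cloudy = {"btd": histogram_pdf(np.random.normal(2.0, 1.0, 5000), edges)}
      pdfs_clear = {"btd": histogram_pdf(np.random.normal(-1.0, 1.0, 5000), edges)}
      print(cloud_probability({"btd": 1.8}, pdfs_cloudy, pdfs_clear))   # close to 1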

  6. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2015-04-01

    A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.

  7. Through thick and thin: quantitative classification of photometric observing conditions on Paranal

    NASA Astrophysics Data System (ADS)

    Kerber, Florian; Querel, Richard R.; Neureiter, Bianca; Hanuschik, Reinhard

    2016-07-01

    A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO's Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.

  8. UBV Photometry of Selected Eclipsing Binaries in the Magellanic Clouds.

    NASA Astrophysics Data System (ADS)

    Davidge, Timothy John

    1987-12-01

    UBV photoelectric observations of five eclipsing binaries in the Magellanic Clouds are presented and discussed in detail. The systems studied are HV1620 and HV1669 in the Small Magellanic Cloud and HV2241, HV2765, and HV5943 in the Large Magellanic Cloud. Classification spectra indicate that the components of these systems are of spectral type late O or early B. The systems are located in moderately crowded areas. Therefore, CCD observations were used to construct models of the star fields around the variables. These were used to correct the photoelectric measurements for contamination. Light curve solutions were found with the Wilson-Devinney program. A two-dimensional search of parameter space involving the mass ratio and the surface potential of the secondary component was employed. This procedure was tested by numerical simulation and was found to predict the light curve elements, including the mass ratios, within their estimated uncertainties. It appears likely that none of the systems are in contact, a surprising result considering the high frequency of early-type contact binaries in the solar neighborhood. The light curve solutions were then used to compute the absolute dimensions of the components. Only one system, HV2241, has a radial velocity curve, allowing its absolute dimensions to be well established. Less accurate absolute dimensions were calculated for the remaining systems using photometric information. The components were then placed on H-R diagrams and compared with theoretical models of stellar evolution. The positions of the components on these diagrams appear to support the existence of convective core overshooting. The evolutionary status of the systems was also discussed. The system with the most accurately determined absolute dimensions, HV2241, appears to have undergone, or is nearing the end of, Case A mass transfer. Two other systems, HV1620 and HV1669, may also be involved in mass transfer. Finally, the use of eclipsing binaries as distance indicators was investigated. The distance modulus of the LMC was computed in two ways. One approach used the absolute dimensions found with the radial velocity data while the other employed the method of photometric parallaxes. The latter technique was also used to calculate the distance modulus of the SMC.

  9. Behavior of predicted convective clouds and precipitation in the high-resolution Unified Model over the Indian summer monsoon region

    NASA Astrophysics Data System (ADS)

    Jayakumar, A.; Sethunadh, Jisesh; Rakhi, R.; Arulalan, T.; Mohandas, Saji; Iyengar, Gopal R.; Rajagopal, E. N.

    2017-05-01

    The National Centre for Medium Range Weather Forecasting high-resolution regional convective-scale Unified Model, with the latest tropical science settings, is used to evaluate the vertical structure of clouds and precipitation over two prominent monsoon regions: the Western Ghats (WG) and the Monsoon Core Zone (MCZ). Model radar reflectivity, generated using the Cloud Feedback Model Intercomparison Project Observation Simulator Package, along with CloudSat profiling radar reflectivity, is sampled for an active synoptic situation based on a new method using Budyko's index of turbulence (BT). Regime classification based on the BT-precipitation relationship is more pronounced during the active monsoon period when the convective-scale model's resolution increases from 4 km to 1.5 km. Model-predicted precipitation and the vertical distribution of hydrometeors are found to be generally in agreement with Global Precipitation Measurement products and BT-based CloudSat observations, respectively. The frequency of occurrence of radar reflectivity from the model implies that low-level clouds below the freezing level are underestimated compared to the observations over both regions. In addition, high-level clouds in the model predictions are much less frequent over the WG than the MCZ.

  10. Detection of long duration cloud contamination in hyper-temporal NDVI imagery

    NASA Astrophysics Data System (ADS)

    Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.

    2012-04-01

    NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long duration cloud cover can significantly degrade its precision in areas where persistent clouds prevail. Quantifying errors related to cloud contamination is therefore essential for accurate land cover mapping and monitoring. This study aims to detect long duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb'03-Dec'09) was used after the necessary pre-processing with quality flags and an upper envelope filter (ASAVOGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. The 97-cluster image was selected as the best classification with the help of divergence statistics. To detect long duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long duration clouds disturb the normal temporal progression of NDVI and cause anomalies. Of the 97 clusters, 32 were found to be cloud contaminated. Cloud contamination was more prominent in areas where high rainfall occurs. This study can help stop error propagation in regional land cover mapping and monitoring caused by long duration cloud contamination.
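
    A toy sketch of how such temporal artifacts can be flagged (not the study's exact procedure): each value in a mean class NDVI profile is compared with a running median of its neighbours, and a large negative departure is treated as likely cloud contamination; the window size and threshold are illustrative.

      import numpy as np

      def flag_cloud_contaminated(profile, window=5, drop_threshold=0.15):
          """Flag 16-day NDVI values that drop well below the local median (illustrative)."""
          profile = np.asarray(profile, dtype=float)
          flags = np.zeros(len(profile), dtype=bool)
          half = window // 2
          for i in range(len(profile)):
              lo, hi = max(0, i - half), min(len(profile), i + half + 1)
              neighbours = np.delete(profile[lo:hi], i - lo)
              if neighbours.size and (np.median(neighbours) - profile[i]) > drop_threshold:
                  flags[i] = True
          return flags

      # Example: a smooth seasonal curve with two artificial cloud-induced dips
      t = np.linspace(0, 2 * np.pi, 23)
      ndvi = 0.5 + 0.3 * np.sin(t)
      ndvi[[5, 14]] -= 0.25
      print(np.where(flag_cloud_contaminated(ndvi))[0])   # -> [ 5 14]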

  11. A Systematic Literature Mapping of Risk Analysis of Big Data in Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Bee Yusof Ali, Hazirah; Marziana Abdullah, Lili; Kartiwi, Mira; Nordin, Azlin; Salleh, Norsaremah; Sham Awang Abu Bakar, Normi

    2018-05-01

    This paper investigates previous literature that focuses on three elements: risk assessment, big data and the cloud. We use a systematic literature mapping method to search journals and proceedings. The systematic literature mapping process is used to obtain a properly screened and focused body of literature. With the help of inclusion and exclusion criteria, the literature search is further narrowed. Classification helps us group the literature into categories. At the end of the mapping, gaps can be seen. The gap is where our focus should be in analysing the risk of big data in a cloud computing environment. Thus, a framework for assessing the risks of security, privacy and trust associated with big data and the cloud computing environment is highly needed.

  12. Automated cloud screening of AVHRR imagery using split-and-merge clustering

    NASA Technical Reports Server (NTRS)

    Gallaudet, Timothy C.; Simpson, James J.

    1991-01-01

    Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.

  13. Altitude determination and descriptive analysis of clouds on ERTS-1 multispectral photography. [Venezuela

    NASA Technical Reports Server (NTRS)

    Albrizzio, C.; Andressen, A.

    1974-01-01

    A simple method to determine the approximate altitude of clouds is described, with the objective of refining their classification using only marginal data from the photographs. Results of the application of this method to photographs of the Goajira Peninsula, Paraguana Peninsula and the Central Coast of Venezuela are presented. Here, the computed altitudes are used to classify clouds and to identify the genus of others without a typical form. The instability of air masses, inferred from the vertical development of clouds, and the wind direction, as well as other local climatic characteristics such as moisture content, loci of condensation and areal extent, are determined using repetitive coverage over the time interval of the photography. Applications to regional and urban planning (including airport location and flight schedules) and natural resources evaluation are suggested.

  14. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
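
    A minimal sketch of the predictor computation described above, assuming a segmented pole candidate is supplied as an (N, 3) array of XYZ points; the eigenvalue-based shape indexes are common measures that only illustrate the idea, not the paper's exact definitions.

      import numpy as np

      def pole_features(points):
          """Illustrative feature vector for one candidate object (placeholder definitions)."""
          points = np.asarray(points, dtype=float)
          cov = np.cov(points.T)                              # 3x3 covariance of XYZ
          eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3
          l1, l2, l3 = eigvals / eigvals.sum()
          z_range = points[:, 2].max() - points[:, 2].min()
          linearity = (l1 - l2) / l1                          # high for elongated, pole-like shapes
          planarity = (l2 - l3) / l1
          sphericity = l3 / l1
          return np.array([l1, l2, l3, z_range, linearity, planarity, sphericity])

      # Such feature vectors can then feed a supervised classifier, e.g.
      # sklearn.svm.SVC or LinearDiscriminantAnalysis, trained on labelled poles.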

  15. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
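
    The colour assignment described above can be sketched as follows; the percentile stretch is an assumption standing in for the paper's own parameter scaling in OPALS.

      import numpy as np

      def scale_to_byte(values, lo_pct=2, hi_pct=98):
          """Linearly stretch an attribute between two percentiles into 0..255 (illustrative scaling)."""
          lo, hi = np.percentile(values, [lo_pct, hi_pct])
          return np.clip((values - lo) / max(hi - lo, 1e-9) * 255, 0, 255).astype(np.uint8)

      def colourize(amplitude, echo_width, height_above_dtm):
          """Amplitude -> Red, echo width -> Green, normalized height -> Blue."""
          red = scale_to_byte(amplitude)
          green = scale_to_byte(echo_width)
          blue = scale_to_byte(height_above_dtm)
          return np.stack([red, green, blue], axis=1)   # per-point RGB triples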

  16. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    PubMed Central

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-01-01

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood. PMID:28294963
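
    A compact sketch of the voxel-based neighbourhood idea: every point in a voxel shares the feature vector computed from that voxel's covariance eigenvalues. The voxel size, minimum point count and exact feature definitions are illustrative rather than the paper's.

      import numpy as np
      from collections import defaultdict

      def voxel_features(points, voxel_size=0.3):
          """Map each occupied voxel to one eigenvalue-based feature vector (illustrative)."""
          points = np.asarray(points, dtype=float)
          keys = np.floor(points / voxel_size).astype(int)
          voxels = defaultdict(list)
          for key, p in zip(map(tuple, keys), points):
              voxels[key].append(p)
          features = {}
          for key, pts in voxels.items():
              pts = np.asarray(pts)
              if len(pts) < 4:                 # too few points for a stable covariance
                  continue
              eig = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
              eig = eig / eig.sum()
              # scatter ~ all eigenvalues similar, tubular ~ one dominant, planar ~ two dominant
              features[key] = np.array([eig[0], eig[1], eig[2],
                                        (eig[0] - eig[1]) / eig[0],
                                        (eig[1] - eig[2]) / eig[0]])
          return features   # dict: voxel index -> feature vector shared by its points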

  17. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    NASA Astrophysics Data System (ADS)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% accuracy 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
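
    To make the training idea concrete, the toy sketch below computes one classic Haar-like response from an integral image and hands such responses to an AdaBoost classifier; the single feature, window size and variable names are illustrative, not the production pipeline.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      def integral_image(img):
          """Cumulative sum over rows and columns (the standard integral image)."""
          return img.cumsum(axis=0).cumsum(axis=1)

      def box_sum(ii, r0, c0, r1, c1):
          """Sum of pixels in rows r0..r1-1, cols c0..c1-1, read from the integral image."""
          total = ii[r1 - 1, c1 - 1]
          if r0 > 0: total -= ii[r0 - 1, c1 - 1]
          if c0 > 0: total -= ii[r1 - 1, c0 - 1]
          if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
          return total

      def haar_two_rect_vertical(ii, r, c, h, w):
          """Left-half minus right-half response, one of the classic Haar-like features."""
          half = w // 2
          return box_sum(ii, r, c, r + h, c + half) - box_sum(ii, r, c + half, r + h, c + w)

      # Hypothetical training loop: `windows` are (n, 24, 24) grayscale patches,
      # `labels` mark vehicle (1) vs background (0).
      # X = np.array([[haar_two_rect_vertical(integral_image(p), 0, 0, 24, 24)] for p in windows])
      # clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)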

  18. 2D Radiative Processes Near Cloud Edges

    NASA Technical Reports Server (NTRS)

    Varnai, T.

    2012-01-01

    Because of the importance and complexity of dynamical, microphysical, and radiative processes taking place near cloud edges, the transition zone between clouds and cloud free air has been the subject of intense research both in the ASR program and in the wider community. One challenge in this research is that the one-dimensional (1D) radiative models widely used in both remote sensing and dynamical simulations become less accurate near cloud edges: The large horizontal gradients in particle concentrations imply that accurate radiative calculations need to consider multi-dimensional radiative interactions among areas that have widely different optical properties. This study examines the way the importance of multidimensional shortwave radiative interactions changes as we approach cloud edges. For this, the study relies on radiative simulations performed for a multiyear dataset of clouds observed over the NSA, SGP, and TWP sites. This dataset is based on Microbase cloud profiles as well as wind measurements and ARM cloud classification products. The study analyzes the way the difference between 1D and 2D simulation results increases near cloud edges. It considers both monochromatic radiances and broadband radiative heating, and it also examines the influence of factors such as cloud type and height, and solar elevation. The results provide insights into the workings of radiative processes and may help better interpret radiance measurements and better estimate the radiative impacts of this critical region.

  19. Classification of Arctic, midlatitude and tropical clouds in the mixed-phase temperature regime

    NASA Astrophysics Data System (ADS)

    Costa, Anja; Meyer, Jessica; Afchine, Armin; Luebke, Anna; Günther, Gebhard; Dorsey, James R.; Gallagher, Martin W.; Ehrlich, Andre; Wendisch, Manfred; Baumgardner, Darrel; Wex, Heike; Krämer, Martina

    2017-10-01

    The degree of glaciation of mixed-phase clouds constitutes one of the largest uncertainties in climate prediction. In order to better understand cloud glaciation, cloud spectrometer observations are presented in this paper, which were made in the mixed-phase temperature regime between 0 and -38 °C (273 to 235 K), where cloud particles can either be frozen or liquid. The extensive data set covers four airborne field campaigns providing a total of 139 000 1 Hz data points (38.6 h within clouds) over Arctic, midlatitude and tropical regions. We develop algorithms, combining the information on number concentration, size and asphericity of the observed cloud particles to classify four cloud types: liquid clouds, clouds in which liquid droplets and ice crystals coexist, fully glaciated clouds after the Wegener-Bergeron-Findeisen process and clouds where secondary ice formation occurred. We quantify the occurrence of these cloud groups depending on the geographical region and temperature and find that liquid clouds dominate our measurements during the Arctic spring, while clouds dominated by the Wegener-Bergeron-Findeisen process are most common in midlatitude spring. The coexistence of liquid water and ice crystals is found over the whole mixed-phase temperature range in tropical convective towers in the dry season. Secondary ice is found at midlatitudes at -5 to -10 °C (268 to 263 K) and at higher altitudes, i.e. lower temperatures in the tropics. The distribution of the cloud types with decreasing temperature is shown to be consistent with the theory of evolution of mixed-phase clouds. With this study, we aim to contribute to a large statistical database on cloud types in the mixed-phase temperature regime.
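
    Purely as an illustration of the kind of decision logic the abstract describes, the sketch below assigns one of the four cloud types from 1 Hz particle quantities; the thresholds are invented placeholders, not the values derived in the paper.

      def classify_cloud_sample(number_conc_per_cm3, mean_diameter_um, aspherical_fraction):
          """Return one of four cloud types from cloud-spectrometer quantities (placeholder thresholds)."""
          if aspherical_fraction < 0.1:
              return "liquid"                       # almost all particles round -> droplets
          if aspherical_fraction > 0.9:
              if number_conc_per_cm3 > 1.0 and mean_diameter_um < 50:
                  return "secondary ice"            # many small ice crystals
              return "fully glaciated (WBF)"        # ice only, droplets consumed
          return "coexistence (liquid + ice)"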

  20. Red and nebulous objects in dark clouds - A survey

    NASA Technical Reports Server (NTRS)

    Cohen, M.

    1980-01-01

    A search on the NGS-PO Sky Survey photographs has revealed 150 interesting nebulous and/or red objects, mostly lying in dark clouds and not previously catalogued. Spectral classifications are presented for 55 objects. These indicate a small number of new members of the class of Herbig-Haro objects, a significant number of new T Tauri stars, and a few emission-line hot stars. It is argued that hot, high-mass stars form preferentially in the dense cores of dark clouds. The possible symbiosis of high and low mass stars is considered. A new morphology class is defined for cometary nebulae, in which a star lies on the periphery of a nebulous ring.

  1. THE EFFECT OF CLOUD FRACTION ON THE RADIATIVE ENERGY BUDGET: The Satellite-Based GEWEX-SRB Data vs. the Ground-Based BSRN Measurements

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Stackhouse, P. W.; Gupta, S. K.; Cox, S. J.; Mikovitz, J. C.; Nasa Gewex Srb

    2011-12-01

    The NASA GEWEX-SRB (Global Energy and Water cycle Experiment - Surface Radiation Budget) project produces and archives shortwave and longwave atmospheric radiation data at the top of the atmosphere (TOA) and the Earth's surface. The archive holds uninterrupted records of shortwave/longwave downward/upward radiative fluxes at 1 degree by 1 degree resolution for the entire globe. The latest version in the archive, Release 3.0, is available as 3-hourly, daily and monthly means, spanning 24.5 years from July 1983 to December 2007. Primary inputs to the models used to produce the data include: shortwave and longwave radiances from International Satellite Cloud Climatology Project (ISCCP) pixel-level (DX) data, cloud and surface properties derived therefrom, temperature and moisture profiles from the GEOS-4 reanalysis product obtained from the NASA Global Modeling and Assimilation Office (GMAO), and column ozone amounts constituted from Total Ozone Mapping Spectrometer (TOMS), TIROS Operational Vertical Sounder (TOVS) archives, and Stratospheric Monitoring-group's Ozone Blended Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. The data in the archive have been validated systematically against ground-based measurements, which include the Baseline Surface Radiation Network (BSRN) data, the World Radiation Data Centre (WRDC) data, and the Global Energy Balance Archive (GEBA) data, and generally good agreement has been achieved. In addition to all-sky radiative fluxes, the output data include clear-sky fluxes, cloud optical depth, cloud fraction and so on. The BSRN archive also includes observations that can be used to derive the cloud fraction, which provides a means for analyzing and explaining the SRB-BSRN flux differences. In this paper, we focus on the effect of cloud fraction on the surface shortwave flux and the level of agreement between the satellite-based SRB data and the ground-based BSRN data. The satellite and BSRN employ different measuring methodologies and thus result in data representing means on dramatically different spatial scales. Therefore, the satellite-based and ground-based measurements are not expected to agree all the time, especially under skies with clouds. The flux comparisons are made under different cloud fractions, and it is found that the SRB-BSRN radiative flux discrepancies can be explained to a certain extent by the SRB-BSRN cloud fraction discrepancies. Apparently, cloud fraction alone cannot completely define the role of clouds in radiation transfer. Further studies need to incorporate the classification of cloud types, altitudes, cloud optical depths and so on.

  2. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.

  3. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    NASA Astrophysics Data System (ADS)

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.

    2017-06-01

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
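
    A much-simplified sketch of the lookup-table step: given a precomputed table of simulated multiband radiances over optical depth and effective diameter, pick the best-matching entry and apply the 30 µm TIC1/TIC2 size threshold. The real retrieval uses optimal estimation rather than this simple nearest-match search, and the table contents here are assumptions.

      import numpy as np

      def lut_retrieve(observed_radiances, lut_radiances, lut_tau, lut_deff):
          """Pick the (optical depth, effective diameter) pair whose simulated
          multiband radiances best match the observation (least squares)."""
          cost = np.sum((lut_radiances - observed_radiances) ** 2, axis=1)
          best = np.argmin(cost)
          tau, deff = lut_tau[best], lut_deff[best]
          cloud_class = "TIC1" if deff <= 30.0 else "TIC2"   # small vs large crystals
          return tau, deff, cloud_class

      # lut_radiances: (n_entries, n_bands) simulated radiances; lut_tau, lut_deff: matching 1-D arrays.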

  4. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    DOE PAGES

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; ...

    2017-06-09

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.

  5. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensor's biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify each tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
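
    The per-tessera geometry step (RANSAC plane extraction followed by a convex hull of the in-plane points) can be sketched as below; the iteration count and distance threshold are illustrative.

      import numpy as np
      from scipy.spatial import ConvexHull

      def ransac_plane(points, n_iter=200, dist_thresh=0.002, seed=0):
          """Return (normal, origin, inlier mask) of the dominant plane (illustrative parameters)."""
          rng = np.random.default_rng(seed)
          best = np.zeros(len(points), dtype=bool)
          normal, origin = np.array([0.0, 0.0, 1.0]), points[0]
          for _ in range(n_iter):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              n = np.cross(p1 - p0, p2 - p0)
              if np.linalg.norm(n) < 1e-12:
                  continue                          # degenerate (collinear) sample
              n = n / np.linalg.norm(n)
              inliers = np.abs((points - p0) @ n) < dist_thresh
              if inliers.sum() > best.sum():
                  best, normal, origin = inliers, n, p0
          return normal, origin, best

      def tessera_outline(points):
          """Project plane inliers into 2D and return their convex hull."""
          points = np.asarray(points, dtype=float)
          normal, origin, inliers = ransac_plane(points)
          u = np.cross(normal, [0.0, 0.0, 1.0])
          if np.linalg.norm(u) < 1e-6:              # plane nearly horizontal: pick another axis
              u = np.cross(normal, [0.0, 1.0, 0.0])
          u = u / np.linalg.norm(u)
          v = np.cross(normal, u)
          planar = (points[inliers] - origin) @ np.stack([u, v], axis=1)
          return ConvexHull(planar)                 # .volume is the 2D area, .vertices the outline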

  6. Polar winter cloud depolarization measurements with the CANDAC Rayleigh-Mie-Raman Lidar

    NASA Astrophysics Data System (ADS)

    McCullough, E. M.; Nott, G. J.; Duck, T. J.; Sica, R. J.; Doyle, J. G.; Pike-thackray, C.; Drummond, J. R.

    2011-12-01

    Clouds introduce a significant positive forcing to the Arctic radiation budget and this is strongest during the polar winter when shortwave radiation is absent (Intrieri et al., 2002). The amount of forcing depends on the occurrence probability and optical depth of the clouds as well as the cloud particle phase (Ebert and Curry 1992). Mixed-phase clouds are particularly complex as they involve interactions between three phases of water (vapour, liquid and ice) coexisting in the same cloud. Although significant progress has been made in characterizing wintertime Arctic clouds (de Boer et al., 2009 and 2011), there is considerable variability in the relative abundance of particles of each phase, in the morphology of solid particles, and in precipitation rates depending on the meteorology at the time. The Canadian Network for the Detection of Atmospheric Change (CANDAC) Rayleigh-Mie-Raman Lidar (CRL) was installed in the Canadian High Arctic at Eureka, Nunavut (80°N, 86°W) in 2008-2009. The remotely-operated system began with measurement capabilities for multi-wavelength aerosol extinction, water vapour mixing ratio, and tropospheric temperature profiles, as well as backscatter cross section coefficient and colour ratio. In 2010, a new depolarization channel was added. The capability to measure the polarization state of the return signal allows the characterization of the cloud in terms of liquid and ice water content, enabling the lidar to probe all three phases of water in these clouds. Lidar depolarization results from 2010 and 2011 winter clouds at Eureka will be presented, with a focus on differences in downwelling radiation between mixed phase clouds and ice clouds. de Boer, G., E.W. Eloranta, and M.D. Shupe (2009), Arctic mixed-phase stratiform cloud properties from multiple years of surface-based measurements at two high-latitude locations, Journal of Atmospheric Sciences, 66 (9), 2874-2887. de Boer, G., H. Morrison, M. D. Shupe, and R. Hildner (2011), Evidence of liquid dependent ice nucleation in high-latitude stratiform clouds from surface remote sensors, Geophysical Research Letters, 38, L01803. Ebert, EE and J.A .Curry (1992), A parameterization of ice cloud optical properties for climate models, Journal of Geophysical Research 97:3831-3836. Intrieri JM, Fairall CW, Shupe MD, Persson POG, Andreas EL, Guest PS, Moritz RE. 2002. An annual cycle of Arctic surface cloud forcing at SHEBA. Journal of Geophysical Research 107 NO. C10, 8039 . Noel, V., H. Chepfer, M. Haeffelin, and Y. Morille (2006), Classification of ice crystal shapes in midlatitude ice clouds from three years of lidar observations over the SIRTA observatory. Journal of the Atmospheric Sciences, 63:2978 - 2991.

  7. Stand-off CWA imaging system: second sight MS

    NASA Astrophysics Data System (ADS)

    Bernascolle, Philippe F.; Elichabe, Audrey; Fervel, Franck; Haumonté, Jean-Baptiste

    2012-06-01

    In recent years, several manufacturers of IR imaging devices have launched commercial models applicable to a wide range of chemical species. These cameras are rugged and sufficiently sensitive to detect low concentrations of toxic and combustible gases. Bertin Technologies, specialized in the design and supply of innovative systems for industry, defense and health, has developed a stand-off gas imaging system using multi-spectral infrared imaging technology. With this system, the gas cloud size, localization and evolution can be displayed in real time. This technology was developed several years ago in partnership with the CEB, a French MoD CBRN organization. The goal was to meet the need for early warning of a chemical threat. Effective night and day at ranges of up to 5 km, this process is able to detect Chemical Warfare Agents (CWA), critical Toxic Industrial Compounds (TIC) and also flammable gases. The system has been adapted to detect industrial spillage, using off-the-shelf uncooled infrared cameras, allowing 24/7 surveillance without costly frequent maintenance. The changes brought to the system are in compliance with Military Specifications (MS) and primarily focus on the signal processing, improving the classification of the detected products, and on the simplification of the Human Machine Interface (HMI). Second Sight MS is the only mass-produced, passive stand-off CWA imaging system with a wide angle (up to 60°) already used by several regular armies around the world. This paper examines the performance of this IR gas imager when exposed to several CWA, TIC and simulant compounds. First, we will describe the Second Sight MS system. The theory of the gas detection, visualization and classification functions has already been described elsewhere, so we will just summarize it here. We will then present the main topic of this paper, namely the results of tests done in the laboratory on live agents and in the open field on simulants. The sensitivity thresholds of the camera, measured in the laboratory on some CWA (G, H agents...) and TIC (ammonia, sulfur dioxide...), will be given. The results of the detection and visualization of a gas cloud at a far distance in open-field testing with some simulants (DMMP, SF6) will also be shown.

  8. A Comprehensive Review on Adaptability of Network Forensics Frameworks for Mobile Cloud Computing

    PubMed Central

    Abdul Wahab, Ainuddin Wahid; Han, Qi; Bin Abdul Rahman, Zulkanain

    2014-01-01

    Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and the cost-effective universal data access through the cloud has made Mobile Cloud Computing (MCC) a congenial target for network attacks. However, carrying out forensics in MCC is constrained by the autonomous cloud hosting companies and their policies restricting access to the digital content in the back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC. PMID:25097880

  9. A comprehensive review on adaptability of network forensics frameworks for mobile cloud computing.

    PubMed

    Khan, Suleman; Shiraz, Muhammad; Wahab, Ainuddin Wahid Abdul; Gani, Abdullah; Han, Qi; Rahman, Zulkanain Bin Abdul

    2014-01-01

    Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and the cost-effective universal data access through the cloud has made Mobile Cloud Computing (MCC) a congenial target for network attacks. However, carrying out forensics in MCC is constrained by the autonomous cloud hosting companies and their policies restricting access to the digital content in the back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.

  10. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected by a Kinect v2, launched in 2011, by means of ORB-SLAM. Compared with lidar, this is cheaper and more convenient, but the point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor - the histogram of distances between two randomly chosen points proposed by Osada - merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (doors, stairways and walls) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to automatically generate the state data of the indoor navigation module. The experimental results demonstrate a high recognition accuracy of the proposed method.
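
    The Osada-style shape descriptor mentioned above (a histogram of distances between randomly sampled point pairs) can be sketched like this; the sample count, bin settings and the commented training loop are illustrative.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def d2_descriptor(points, n_pairs=2000, n_bins=32, max_dist=3.0, seed=0):
          """Histogram of pairwise distances between randomly chosen points (illustrative settings)."""
          rng = np.random.default_rng(seed)
          points = np.asarray(points, dtype=float)
          i = rng.integers(0, len(points), n_pairs)
          j = rng.integers(0, len(points), n_pairs)
          dists = np.linalg.norm(points[i] - points[j], axis=1)
          hist, _ = np.histogram(dists, bins=n_bins, range=(0, max_dist), density=True)
          return hist

      # Hypothetical training loop: `segments` is a list of candidate point clusters
      # (doors, stairways, walls) and `labels` their manual annotations.
      # X = np.array([d2_descriptor(seg) for seg in segments])
      # clf = RandomForestClassifier(n_estimators=200).fit(X, labels)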

  11. Comparison of Cirrus Cloud Models: A Project of the GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems

    NASA Technical Reports Server (NTRS)

    Starr, David O'C.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric

    2000-01-01

    The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction. The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.

  12. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    PubMed

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
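
    As a highly simplified stand-in for the changed-object extraction step (the paper uses graph cuts), the sketch below thresholds the DSM difference and groups changed cells with connected-component labelling; the thresholds and grid units are illustrative.

      import numpy as np
      from scipy import ndimage

      def candidate_changed_objects(dsm_t1, dsm_t2, height_thresh=2.5, min_cells=50):
          """Return a label image of candidate changed building objects (illustrative thresholds)."""
          diff = np.abs(dsm_t2 - dsm_t1)
          changed = diff > height_thresh                 # foreground: changed above ground
          labels, n = ndimage.label(changed)
          # discard tiny regions (noise from matching errors)
          sizes = ndimage.sum(changed, labels, index=np.arange(1, n + 1))
          keep = np.flatnonzero(sizes >= min_cells) + 1
          return np.where(np.isin(labels, keep), labels, 0)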

  13. Automated Classification of Heritage Buildings for As-Built Bim Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Vergauwen, M.; Van Genechten, B.

    2017-08-01

    Semantically rich three-dimensional models such as Building Information Models (BIMs) are increasingly used in digital heritage. They provide the required information to varying stakeholders during the different stages of the historic buildings' life cycle, which is crucial in the conservation process. The creation of as-built BIM models is based on point cloud data. However, manually interpreting this data is labour intensive and often leads to misinterpretations. By automatically classifying the point cloud, the information can be processed more efficiently. A key aspect in this automated scan-to-BIM process is the classification of building objects. In this research we look to automatically recognise elements in existing buildings to create compact semantic information models. Our algorithm efficiently extracts the main structural components such as floors, ceilings, roofs, walls and beams despite the presence of significant clutter and occlusions. More specifically, Support Vector Machines (SVM) are proposed for the classification. The algorithm is evaluated using real data of a variety of existing buildings. The results prove that the used classifier recognizes the objects with both high precision and recall. As a result, entire data sets are reliably labelled at once. The approach enables experts to better document and process heritage assets.

  14. Lidar Cloud Detection with Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Cromwell, E.; Flynn, D.

    2017-12-01

    The vertical distribution of clouds from active remote sensing instrumentation is a widely used data product from global atmospheric measuring sites. The presence of clouds can be expressed as a binary cloud mask and is a primary input for climate modeling efforts and cloud formation studies. Current cloud detection algorithms producing these masks do not accurately identify the cloud boundaries and tend to oversample or over-represent the cloud. This translates as uncertainty for assessing the radiative impact of clouds and tracking changes in cloud climatologies. The Atmospheric Radiation Measurement (ARM) program has over 20 years of micro-pulse lidar (MPL) and High Spectral Resolution Lidar (HSRL) instrument data and companion automated cloud mask product at the mid-latitude Southern Great Plains (SGP) and the polar North Slope of Alaska (NSA) atmospheric observatory. Using this data, we train a fully convolutional network (FCN) with semi-supervised learning to segment lidar imagery into geometric time-height cloud locations for the SGP site and MPL instrument. We then use transfer learning to train a FCN for (1) the MPL instrument at the NSA site and (2) for the HSRL. In our semi-supervised approach, we pre-train the classification layers of the FCN with weakly labeled lidar data. Then, we facilitate end-to-end unsupervised pre-training and transition to fully supervised learning with ground truth labeled data. Our goal is to improve the cloud mask accuracy and precision for the MPL instrument to 95% and 80%, respectively, compared to the current cloud mask algorithms of 89% and 50%. For the transfer learning based FCN for the HSRL instrument, our goal is to achieve a cloud mask accuracy of 90% and a precision of 80%.
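
    A minimal PyTorch sketch of a fully convolutional segmenter of the kind described above: because it contains no dense layers it accepts arbitrary time-height image sizes and yields a per-pixel cloud probability. The layer sizes are illustrative, not the ARM network.

      import torch
      import torch.nn as nn

      class TinyCloudFCN(nn.Module):
          """Illustrative fully convolutional net producing a per-pixel cloud probability map."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, kernel_size=1),     # 1x1 conv -> per-pixel logit
              )

          def forward(self, x):
              return torch.sigmoid(self.net(x))

      # x: batch of normalized backscatter images, shape (N, 1, height_bins, time_bins)
      # mask = TinyCloudFCN()(torch.randn(2, 1, 128, 256))   # -> (2, 1, 128, 256) probabilities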

  15. Comparison of Cirrus Cloud Models: A Project of the GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems

    NASA Technical Reports Server (NTRS)

    Starr, David OC.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric; Khvorostyanov, Vitaly

    2000-01-01

    The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction (Browning et al, 1994). The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.

  16. Large Scale Crop Mapping in Ukraine Using Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Shelestov, A.; Lavreniuk, M. S.; Kussul, N.

    2016-12-01

    There are no globally available high-resolution satellite-derived crop-specific maps at present. Only coarse-resolution imagery (> 250 m spatial resolution) has been utilized to derive global cropland extent. In 2016 we are going to carry out a country-level demonstration of Sentinel-2 use for crop classification in Ukraine within the ESA Sen2-Agri project. However, optical imagery can be contaminated by cloud cover, which makes it difficult to acquire imagery in an optimal time range to discriminate certain crops. Thanks to the Copernicus program, a large amount of high spatial resolution Sentinel-1 SAR data has been freely available for Ukraine since 2015, which allows us to use time series of SAR data for crop classification. Our experiment for one administrative region in 2015 showed much higher crop classification accuracy with SAR data than with optical-only time series [1, 2]. Therefore, in 2016, within the Google Earth Engine Research Award, we use SAR data together with optical data for large-area crop mapping (the entire territory of Ukraine) using the cloud computing capabilities available in Google Earth Engine (GEE). This study compares different classification methods for crop mapping for the whole territory of Ukraine using data and algorithms from GEE. Classification performance is assessed using overall classification accuracy, Kappa coefficients, and user's and producer's accuracies. Crop areas from the derived classification maps are also compared to the official statistics [3]. [1] S. Skakun et al., "Efficiency assessment of multitemporal C-band Radarsat-2 intensity and Landsat-8 surface reflectance satellite imagery for crop classification in Ukraine," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015, DOI: 10.1109/JSTARS.2015.2454297. [2] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "The use of satellite SAR imagery to crop classification in Ukraine within JECAM project," IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 1497-1500, 13-18 July 2014, Quebec City, Canada. [3] F. J. Gallego, N. Kussul, S. Skakun, O. Kravchenko, A. Shelestov, O. Kussul, "Efficiency assessment of using satellite data for crop area estimation in Ukraine," International Journal of Applied Earth Observation and Geoinformation, vol. 29, pp. 22-30, 2014.
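
    As a hedged sketch only (the area of interest, the training table asset, the band choices and the classifier settings are placeholders, not the authors' workflow), a combined Sentinel-1/Sentinel-2 random forest classification in the Earth Engine Python API could be structured roughly as follows.

        import ee
        ee.Initialize()

        region = ee.Geometry.Rectangle([30.0, 49.0, 32.0, 51.0])   # placeholder AOI
        s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
                .filterBounds(region)
                .filterDate('2016-04-01', '2016-10-01')
                .filter(ee.Filter.eq('instrumentMode', 'IW'))
                .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
                .select(['VV', 'VH'])
                .median())
        s2 = (ee.ImageCollection('COPERNICUS/S2')
                .filterBounds(region)
                .filterDate('2016-04-01', '2016-10-01')
                .median()
                .select(['B2', 'B3', 'B4', 'B8']))
        stack = s1.addBands(s2)

        # 'training_points' is a hypothetical FeatureCollection with a 'crop' property.
        training_points = ee.FeatureCollection('users/example/ukraine_training')
        samples = stack.sampleRegions(collection=training_points,
                                      properties=['crop'], scale=10)
        classifier = ee.Classifier.smileRandomForest(100).train(
            features=samples, classProperty='crop',
            inputProperties=stack.bandNames())
        crop_map = stack.classify(classifier)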

  17. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
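
    The paper's multiple-kernel-learning framework is not detailed in this record; a simplified, hedged stand-in is a fixed weighted combination of one RBF kernel over CNN features and another over 3D point cloud features, passed to a precomputed-kernel SVM (the weight, feature arrays and labels below are placeholders).

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(0)
        cnn_train, cnn_test = rng.random((200, 512)), rng.random((50, 512))
        pc_train, pc_test = rng.random((200, 20)), rng.random((50, 20))
        y_train = rng.integers(0, 2, 200)               # damaged / undamaged labels

        w = 0.7                                         # assumed kernel weight
        K_train = w * rbf_kernel(cnn_train) + (1 - w) * rbf_kernel(pc_train)
        K_test = (w * rbf_kernel(cnn_test, cnn_train)
                  + (1 - w) * rbf_kernel(pc_test, pc_train))

        clf = SVC(kernel='precomputed').fit(K_train, y_train)
        pred = clf.predict(K_test)                      # per-patch damage prediction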

  18. Small Infrared Target Detection by Region-Adaptive Clutter Rejection for Sea-Based Infrared Search and Track

    PubMed Central

    Kim, Sungho; Lee, Joohyoung

    2014-01-01

    This paper presents a region-adaptive clutter rejection method for small target detection in sea-based infrared search and track. In the real world, clutter normally generates many false detections that impede the deployment of such detection systems. Incoming targets (missiles, boats, etc.) can be located in the sky, horizon and sea regions, which contain different types of clutter, such as clouds, the horizontal line and sea-glint. The characteristics of regional clutter were analyzed after geometrical analysis-based region segmentation. False detections caused by cloud clutter were removed by spatial attribute-based classification. Those caused by the horizontal line were removed using a heterogeneous background removal filter. False alarms caused by sun-glint, which are the most difficult to handle, were rejected using a temporal consistency filter. The experimental results on various cluttered background sequences show that the proposed region-adaptive clutter rejection method produces fewer false alarms than the mean subtraction filter (MSF), with an acceptable degradation of the detection rate. PMID:25054633

  19. Waves on White: Ice or Clouds?

    NASA Technical Reports Server (NTRS)

    2005-01-01

    As it passed over Antarctica on December 16, 2004, the Multi-angle Imaging SpectroRadiometer (MISR) on NASA's Terra satellite captured this image showing a wavy pattern in a field of white. At most other latitudes, such wavy patterns would likely indicate stratus or stratocumulus clouds. MISR, however, saw something different. By using information from several of its multiple cameras (each of which views the Earth's surface from a different angle), MISR was able to tell that what looked like a wavy cloud pattern was actually a wavy pattern on the ice surface. One of MISR's cloud classification products, the Angular Signature Cloud Mask (ASCM), correctly identified the rippled area as being at the surface.

    In this image pair, the view from MISR's most oblique backward-viewing camera is on the left, and the color-coded image on the right shows the results of the ASCM. The colors represent the level of certainty in the classification. Areas that were classed as cloudy with high confidence are white, and areas where the confidence was lower are yellow; dark blue shows confidently clear areas, while light blue indicates clear with lower confidence. The ASCM works particularly well at detecting clouds over snow and ice, but also works well over ocean and land. The rippled areas on the surface, which could have been mistaken for clouds, are actually sastrugi -- long wavelike ridges of snow formed by the wind and found on the polar plains. Usually sastrugi are only several centimeters high and several meters apart, but large portions of East Antarctica are covered by mega-sastrugi ice fields, with dune-like features as high as four meters separated by two to five kilometers. The mega-sastrugi fields are a result of unusual snow accumulation and redistribution processes influenced by the prevailing winds and climate conditions. MISR imagery indicates that these mega-sastrugi were stationary features between 2002 and 2004.

    Being able to distinguish clouds from snow or ice-covered surfaces is important in order to adequately characterize the radiation balance of the polar regions. However, detecting clouds using spaceborne detectors over snow and ice surfaces is notoriously difficult, because the surface may often be as bright and as cold as the overlying clouds, and because polar atmospheric temperature inversions sometimes mean that clouds are warmer than the underlying snow or ice surface. The Angular Signature Cloud Mask (ASCM) was developed based on the Band-Differenced Angular Signature (BDAS) approach, introduced by Di Girolamo and Davies (1994) and updated for MISR application by Di Girolamo and Wilson (2003). BDAS uses both spectral and angular changes in reflectivity to distinguish clouds from the background, and the ASCM calculates the difference between the 446 and 866 nanometer reflectances at MISR's two most oblique cameras that view forward-scattered light. New land thresholds for the ASCM are planned for delivery later this year.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. This image area covers about 277 kilometers by 421 kilometers in the interior of the East Antarctic ice sheet. These data products were generated from a portion of the imagery acquired during Terra orbit 26584 and utilize data from within blocks 159 to 161 within World Reference System-2 path 63.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  20. Filling of Cloud-Induced Gaps for Land Use and Land Cover Classifications Around Refugee Camps

    NASA Astrophysics Data System (ADS)

    Braun, Andreas; Hagensieker, Ron; Hochschild, Volker

    2016-08-01

    Cloud cover is one of the main constraints in the field of optical remote sensing. Especially the use of multispectral imagery is affected by either fully obscured data or parts of the image which remain unusable. This study compares four algorithms for the filling of cloud-induced gaps in classified land cover products, based on Markov Random Fields (MRF), Random Forest (RF) and Closest Spectral Fit (CSF) operators. They are tested on a classified image of Sentinel-2 where artificial clouds are filled by information derived from a scene of Sentinel-1. The approaches rely on different mathematical principles and therefore produce results varying in both pattern and quality. Overall accuracies for the filled areas range from 57 to 64 %. Best results are achieved by CSF; however, some classes (e.g. sands and grassland) remain critical across all approaches.

  1. In situ measurements of angular-dependent light scattering by aerosols over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Reed Espinosa, W.; Vanderlei Martins, J.; Remer, Lorraine A.; Puthukkudy, Anin; Orozco, Daniel; Dolgos, Gergely

    2018-03-01

    This work provides a synopsis of aerosol phase function (F11) and polarized phase function (F12) measurements made by the Polarized Imaging Nephelometer (PI-Neph) during the Studies of Emissions, Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) and the Deep Convection Clouds and Chemistry (DC3) field campaigns. In order to more easily explore this extensive dataset, an aerosol classification scheme is developed that identifies the different aerosol types measured during the deployments. This scheme makes use of ancillary data that include trace gases, chemical composition, aerodynamic particle size and geographic location, all independent of PI-Neph measurements. The PI-Neph measurements are then grouped according to their ancillary data classifications and the resulting scattering patterns are examined in detail. These results represent the first published airborne measurements of F11 and -F12/F11 for many common aerosol types. We then explore whether PI-Neph light-scattering measurements alone are sufficient to reconstruct the results of this ancillary data classification algorithm. Principal component analysis (PCA) is used to reduce the dimensionality of the multi-angle PI-Neph scattering data and the individual measurements are examined as a function of ancillary data classification. Clear clustering is observed in the PCA score space, corresponding to the ancillary classification results, suggesting that, indeed, a strong link exists between the angular-scattering measurements and the aerosol type or composition. Two techniques are used to quantify the degree of clustering, and it is found that in most cases the results of the ancillary data classification can be predicted from PI-Neph measurements alone with better than 85 % recall. This result both emphasizes the validity of the ancillary data classification and demonstrates the PI-Neph's ability to distinguish common aerosol types without additional information.
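
    As a hedged illustration of the dimensionality-reduction step (the measurement matrix and type labels below are synthetic stand-ins, not PI-Neph data), multi-angle scattering data can be projected onto a few principal components and inspected for clustering by aerosol type.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        # Hypothetical matrix: 500 measurements x 150 scattering angles of -F12/F11.
        scattering = rng.random((500, 150))
        aerosol_type = rng.integers(0, 4, 500)          # labels from ancillary data

        pca = PCA(n_components=3)
        scores = pca.fit_transform(scattering)          # PCA score space
        print(pca.explained_variance_ratio_)

        # Per-type centroids in score space; real clustering would show up as
        # well-separated centroids relative to the within-type spread.
        for t in np.unique(aerosol_type):
            print(t, scores[aerosol_type == t].mean(axis=0))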

  2. Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.

    2010-01-01

    The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km x 24 km at nadir) and at the 5 km x 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approximately 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds. We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
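
    The record describes comparing a thermal-infrared cloud-top pressure with a UV/visible optical centroid pressure; as a hedged sketch only (the threshold value and variable names are assumptions, not the paper's algorithm), such a per-pixel comparison could look like the following.

        import numpy as np

        def flag_multilayer(cloud_top_hpa, optical_centroid_hpa, threshold_hpa=150.0):
            """Flag pixels where the photon-path-based optical centroid pressure lies
            much deeper in the atmosphere than the IR cloud-top pressure, which is
            suggestive of multiple cloud layers. The threshold is a placeholder."""
            return (optical_centroid_hpa - cloud_top_hpa) > threshold_hpa

        ctp = np.array([250.0, 300.0, 800.0])           # MODIS-like cloud-top pressure
        ocp = np.array([600.0, 320.0, 820.0])           # OMI-like optical centroid pressure
        print(flag_multilayer(ctp, ocp))                # [ True False False ]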

  3. Impact of Doctor Car with Mobile Cloud ECG in reducing door-to-balloon time of Japanese ST-elevation myocardial infarction patients.

    PubMed

    Takeuchi, Ichiro; Fujita, Hideo; Yanagisawa, Tomoyoshi; Sato, Nobuhiro; Mizutani, Tomohiro; Hattori, Jun; Asakuma, Sadataka; Yamaya, Tatsuhiro; Inagaki, Taito; Kataoka, Yuichi; Ohe, Kazuhiko; Ako, Junya; Asari, Yasushi

    2015-01-01

    Early reperfusion by percutaneous coronary intervention (PCI) is the current standard therapy for ST-elevation myocardial infarction (STEMI). To achieve better prognoses for these patients, reducing the door-to-balloon time is essential. As we reported previously, the Kitasato University Hospital Doctor Car (DC), an ambulance with a physician on board, is equipped with a novel mobile cloud 12-lead ECG system. Between September 2011 and August 2013, there were 260 emergency dispatches of our Doctor Car, of which 55 were for suspected acute myocardial infarction with chest pain and cold sweat. Among these 55 calls, 32 patients received emergent PCI due to STEMI (DC Group). We compared their data with those of 76 STEMI patients who were transported directly to our hospital by ambulance around the same period (Non-DC Group). There were no differences in patient age, gender, underlying diseases, or Killip classification between the two groups. The door-to-balloon time was 56.1 ± 13.7 minutes in the DC Group and 74.0 ± 14.1 minutes in the Non-DC Group (P < 0.0001). Maximum levels of CPK were 2899 ± 308 and 2876 ± 269 IU/L (P = 0.703), and those of CK-MB were 292 ± 360 and 295 ± 284 ng/mL (P = 0.423), respectively, in the two groups. The Doctor Car system with the Mobile Cloud ECG was useful for reducing the door-to-balloon time.

  4. Applying local binary patterns in image clustering problems

    NASA Astrophysics Data System (ADS)

    Skorokhod, Nikolai N.; Elizarov, Alexey I.

    2017-11-01

    Because cloudiness plays a critical role in the Earth's radiative balance, the study of the distribution of different types of clouds and their movements is relevant. The main sources of such information are artificial satellites that provide data in the form of images. The most commonly used approach to processing and classifying cloud images is based on the description of texture features. The use of a set of local binary patterns is proposed to describe the image texture.
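
    As a hedged example of the texture description mentioned here (the patch, neighbourhood parameters and use of a histogram feature are illustrative choices, not the paper's), a local binary pattern histogram can be computed for a cloud image patch with scikit-image and used as a feature vector for clustering.

        import numpy as np
        from skimage.feature import local_binary_pattern

        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in cloud image patch
        P, R = 8, 1                                                # 8 neighbours, radius-1 circle
        lbp = local_binary_pattern(patch, P, R, method='uniform')

        # Histogram of uniform LBP codes serves as the texture feature vector.
        hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
        print(hist)                                                # feature vector for clustering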

  5. On the validation of cloud parametrization schemes in numerical atmospheric models with satellite data from ISCCP

    NASA Astrophysics Data System (ADS)

    Meinke, I.

    2003-04-01

    A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data from scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). Due to the limited reliability of former validations, there has been a need for developing a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared as deficiencies of the cloud parametrization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account. Therefore, changes in the cloud parametrization scheme based on such validations might not be realistic. The new method estimates the uncertainties of the model and the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified regarding their parametrization. With this classification, the localized model deficiencies are allocated to a certain parametrization scheme. Applying this method to the regional model HRM, the quality of forecasting cloud properties is estimated in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is localized as a model deficiency. This is caused by subscale cloudiness. As the simulation of subscale clouds in the regional model HRM is described by a relative humidity parametrization, these deficiencies are connected with this parametrization.

  6. Marine Stratocumulus Cloud Fields off the Coast of Southern California Observed Using LANDSAT Imagery. Part II: Textural Analysis.

    NASA Astrophysics Data System (ADS)

    Welch, R. M.; Sengupta, S. K.; Kuo, K. S.

    1988-04-01

    Statistical measures of the spatial distributions of gray levels (cloud reflectivities) are determined for LANDSAT Multispectral Scanner digital data. Textural properties for twelve stratocumulus cloud fields, seven cumulus fields, and two cirrus fields are examined using the Spatial Gray Level Co-Occurrence Matrix method. The co-occurrence statistics are computed for pixel separations ranging from 57 m to 29 km and at angles of 0°, 45°, 90° and 135°. Nine different textural measures are used to define the cloud field spatial relationships. However, the measures of contrast and correlation appear to be most useful in distinguishing cloud structure. Cloud field macrotexture describes general cloud field characteristics at distances greater than the size of typical cloud elements. It is determined from the spatial asymptotic values of the texture measures. The slope of the texture curves at small distances provides a measure of the microtexture of individual cloud cells. Cloud fields composed primarily of small cells have very steep slopes and reach their asymptotic values at short distances from the origin. As the cells composing the cloud field grow larger, the slope becomes more gradual and the asymptotic distance increases accordingly. Low asymptotic values of correlation show that stratocumulus cloud fields have no large-scale organized structure. Besides the ability to distinguish cloud field structure, texture appears to be a potentially valuable tool in cloud classification. Stratocumulus clouds are characterized by low values of angular second moment and large values of entropy. Cirrus clouds appear to have extremely low values of contrast, low values of entropy, and very large values of correlation. Finally, we propose that sampled high spatial resolution satellite data be used in conjunction with coarser resolution operational satellite data to detect and identify cloud field structure and directionality and to locate regions of subresolution-scale cloud contamination.
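
    As a hedged sketch of the co-occurrence analysis described above (the input image, pixel separations and gray-level count are illustrative, and graycomatrix/graycoprops assume scikit-image 0.19 or later), texture measures such as contrast and correlation can be computed as follows.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # Stand-in for an 8-bit cloud reflectivity image (gray levels 0-255).
        img = (np.random.rand(128, 128) * 255).astype(np.uint8)

        # Co-occurrence matrices at several pixel separations and four angles.
        distances = [1, 4, 16]
        angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
        glcm = graycomatrix(img, distances, angles, levels=256,
                            symmetric=True, normed=True)

        for measure in ('contrast', 'correlation', 'ASM', 'homogeneity'):
            print(measure, graycoprops(glcm, measure).round(3))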

  7. A Ground-Based Doppler Radar and Micropulse Lidar Forward Simulator for GCM Evaluation of Arctic Mixed-Phase Clouds: Moving Forward Towards an Apples-to-apples Comparison of Hydrometeor Phase

    NASA Astrophysics Data System (ADS)

    Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.

    2017-12-01

    An important aspect of evaluating Arctic cloud representation in a general circulation model (GCM) consists of using observational benchmarks which are as equivalent as possible to model output, in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity and a stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed and effective radius are forward simulated to vertically-resolved profiles of radar reflectivity, Doppler velocity and spectrum width as well as lidar backscatter and depolarization ratio. These forward-simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evaluation of Arctic mixed-phase clouds would also benefit from hydrometeor phase evaluation. While phase retrieval from synergetic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM outputs and observations. The use of a comprehensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground, where low-level supercooled liquid layers are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should assist in choosing between several possible diagnostic ice nucleation schemes for ModelE stratiform clouds.

  8. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
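
    The record does not specify how the classifier is implemented; as a hedged stand-in for "a simple Bayesian classification scheme", the sketch below applies a Gaussian naive Bayes classifier to per-pixel radiance features (band values, labels and class names are invented for illustration).

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(2)
        # Hypothetical training pixels: a few calibrated radiance bands per pixel.
        X_train = rng.random((5000, 6))
        y_train = rng.integers(0, 3, 5000)              # 0=clear ocean, 1=cloud, 2=sun-glint

        nb = GaussianNB().fit(X_train, y_train)

        scene = rng.random((1000, 6))                   # pixels of a new granule
        labels = nb.predict(scene)
        clear_fraction = np.mean(labels == 0)           # e.g. for content-based subsetting
        print(clear_fraction)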

  9. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAV) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of the point clouds by image matching, where the flight control data are used as a reference for searching for the corresponding images, leading to a significant time saving. Besides, a set of ground control points (GCP) obtained from field surveying are used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows a higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSEs of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
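
    As a hedged sketch of the final DEM-generation step, ground-classified points can be interpolated onto a regular grid; scipy's linear griddata interpolation over a Delaunay triangulation is used here as a stand-in for the TIN-plus-rasterization described above, and the coordinates are synthetic.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(3)
        # Hypothetical ground points: easting, northing, elevation (metres).
        xy = rng.random((2000, 2)) * 500.0
        z = 100.0 + 5.0 * np.sin(xy[:, 0] / 50.0) + rng.normal(0, 0.2, 2000)

        # 1 m grid; linear interpolation over the Delaunay triangulation of the points,
        # which is close in spirit to TIN-based rasterization.
        xi = np.arange(0.0, 500.0, 1.0)
        yi = np.arange(0.0, 500.0, 1.0)
        grid_x, grid_y = np.meshgrid(xi, yi)
        dem = griddata(xy, z, (grid_x, grid_y), method='linear')
        print(dem.shape)                                # (500, 500) raster DEM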

  10. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    PubMed Central

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote-monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
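
    The Jeffrey divergence used for redundancy elimination is the symmetrized Kullback-Leibler divergence between frame color histograms; a minimal sketch follows (the histograms are synthetic and the redundancy threshold is an arbitrary placeholder).

        import numpy as np

        def jeffrey_divergence(p, q, eps=1e-12):
            """Symmetrized KL divergence between two normalized histograms."""
            p = p / p.sum() + eps
            q = q / q.sum() + eps
            return float(np.sum((p - q) * np.log(p / q)))

        rng = np.random.default_rng(4)
        hist_a = rng.random(64)                              # color histogram of frame t
        hist_b = np.abs(hist_a + rng.normal(0, 0.01, 64))    # near-duplicate frame at t+1

        d = jeffrey_divergence(hist_a, hist_b)
        print(d, d < 0.05)                                   # small divergence -> redundant frame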

  11. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  12. Small Negative Cloud-to-Ground Lightning Reports at the KSC-ER

    NASA Technical Reports Server (NTRS)

    Wilson, Jennifer G.; Cummins, Kenneth L.; Krider, E. Philip

    2009-01-01

    The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) use data from two cloud-to-ground (CG) lightning detection networks, the CGLSS and the NLDN, and a volumetric lightning mapping array, LDAR, to monitor and characterize lightning that is potentially hazardous to ground or launch operations. Data obtained from these systems during June-August 2006 have been examined to check the classification of small, negative CGLSS reports that have an estimated peak current, |Ip|, less than 7 kA, and to determine the smallest values of Ip that are produced by first strokes, by subsequent strokes that create a new ground contact (NGC), and by subsequent strokes that remain in a pre-existing channel (PEC). The results show that within 20 km of the KSC-ER, 21% of the low-amplitude negative CGLSS reports were produced by first strokes, with a minimum Ip of -2.9 kA; 31% were by NGCs, with a minimum Ip of -2.0 kA; and 14% were by PECs, with a minimum Ip of -2.2 kA. The remaining 34% were produced by cloud pulses or lightning events that we were not able to classify.

  13. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-15

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote-monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.

  14. Cloud and DNI nowcasting with MSG/SEVIRI for the optimized operation of concentrating solar power plants

    NASA Astrophysics Data System (ADS)

    Sirch, Tobias; Bugliaro, Luca; Zinner, Tobias; Möhrlein, Matthias; Vazquez-Navarro, Margarita

    2017-02-01

    A novel approach for the nowcasting of clouds and direct normal irradiance (DNI) based on the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellite is presented for a forecast horizon up to 120 min. The basis of the algorithm is an optical flow method to derive cloud motion vectors for all cloudy pixels. To facilitate forecasts over a relevant time period, a classification of clouds into objects and a weighted triangular interpolation of clear-sky regions are used. Low and high level clouds are forecasted separately because they show different velocities and motion directions. Additionally, a distinction between advective and convective clouds, together with an intensity correction for quickly thinning convective clouds, is integrated. The DNI is calculated from the forecasted optical thickness of the low and high level clouds. In order to quantitatively assess the performance of the algorithm, a forecast validation against MSG/SEVIRI observations is performed for a period of 2 months. Error rates and Hanssen-Kuiper skill scores are derived for the forecasted cloud masks. For a 5 min forecast, in most cloud situations more than 95 % of all pixels are predicted correctly as cloudy or clear. This number decreases to 80-95 % for a forecast of 2 h, depending on cloud type and vertical cloud level. Hanssen-Kuiper skill scores for cloud mask go down to 0.6-0.7 for a 2 h forecast. Compared to persistence, an improvement of the forecast horizon by a factor of 2 is reached for all forecasts up to 2 h. A comparison of forecasted optical thickness distributions and DNI against observations yields correlation coefficients larger than 0.9 for 15 min forecasts and around 0.65 for 2 h forecasts.
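
    The optical flow step is not specified beyond being a motion-vector derivation; as a hedged sketch (Farnebäck dense flow is one common choice, not necessarily the authors', and the input images are synthetic), cloud motion vectors between two consecutive satellite images could be estimated as follows.

        import numpy as np
        import cv2

        rng = np.random.default_rng(5)
        prev_img = (rng.random((256, 256)) * 255).astype(np.uint8)   # cloud field at time t
        next_img = np.roll(prev_img, shift=3, axis=1)                # shifted field at t+dt

        # Dense optical flow; returns per-pixel (dx, dy) motion vectors in pixels.
        flow = cv2.calcOpticalFlowFarneback(prev_img, next_img, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)
        # x-component should roughly recover the imposed 3-pixel shift.
        print(flow.shape, flow[..., 0].mean())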

  15. Neural Network Cloud Classification Research

    DTIC Science & Technology

    1993-03-01

    analysis of the database made this study possible. We would also like to thank Don Chisolm and Rosemary Dyer for their enlightening discussions and ... elements of the model correspond closely to neurophysiological data about the visual cortex. Efficient versions of the BCS and FCS have been

  16. Classification of Arctic, Mid-Latitude and Tropical Clouds in the Mixed-Phase Temperature Regime

    NASA Astrophysics Data System (ADS)

    Costa, Anja; Afchine, Armin; Luebke, Anna; Meyer, Jessica; Dorsey, James R.; Gallagher, Martin W.; Ehrlich, André; Wendisch, Manfred; Krämer, Martina

    2016-04-01

    The degree of glaciation and the sizes and habits of ice particles formed in mixed-phase clouds remain not fully understood. However, these properties define the mixed-phase clouds' radiative impact on the Earth's climate, and thus a correct representation of this cloud type in global climate models is important for reducing the uncertainty of climate predictions. This study focuses on the occurrence and characteristics of two types of clouds in the mixed-phase temperature regime (238-275 K): coexistence clouds (Coex), in which both liquid drops and ice crystals exist, and fully glaciated clouds that develop in the Wegener-Bergeron-Findeisen regime (WBF clouds). We present an extensive dataset obtained by the Cloud and Aerosol Particle Spectrometer NIXE-CAPS, covering Arctic, mid-latitude and tropical regions. In total, we spent 45.2 hours within clouds in the mixed-phase temperature regime during five field campaigns (Arctic: VERDI, 2012 and RACEPAC, 2014 - Northern Canada; mid-latitude: COALESC, 2011 - UK and ML-Cirrus, 2014 - central Europe; tropics: ACRIDICON, 2014 - Brazil). We show that WBF and Coex clouds can be identified via cloud particle size distributions. The classified datasets are used to analyse temperature dependences of both cloud types as well as the range and frequencies of cloud particle concentrations and sizes. One result is that Coex clouds containing supercooled liquid drops are found down to temperatures of -40 °C only in tropical mixed-phase clouds, while in the Arctic and mid-latitudes no liquid drops are observed below about -20 °C. In addition, we show that the cloud particles' aspherical fractions - derived from polarization signatures of particles with diameters between 20 and 50 micrometers - differ significantly between WBF and Coex clouds. In Coex clouds, the aspherical fraction of cloud particles is generally very low, but increases with decreasing temperature. In WBF clouds, where all cloud particles are ice, about 20-40% of the cloud particles are nevertheless classified as spherical for all temperatures, possibly indicating columnar ice crystals (see Järvinen et al., submitted to JAS, 2016).

  17. Fusion of optical and SAR remote sensing images for tropical forests monitoring

    NASA Astrophysics Data System (ADS)

    Wang, C.; Yu, M.; Gao, Q.; Wang, X.

    2016-12-01

    Although tropical deforestation prevails in South America and Southeast Asia, reforestation has appeared in some tropical regions due to economic changes. After the economic shift from agriculture to industry, the tropical island of Puerto Rico has experienced rapid reforestation as well as urban expansion since the late 1940s. Continued urban growth without the guide of sustainable planning might prevent further forest regrowth. Accurate and timely mapping of LULC is of great importance for evaluating the consequences of reforestation and urban expansion on coupled human and natural systems. However, owing to persistent cloud cover in the tropics, it remains a challenge to produce reliable LULC maps at fine spatial resolution. Here, we retrieved cloud-free Landsat surface reflectance composite data by removing clouds and shadows from the USGS Landsat Surface Reflectance (SR) product for each scene using the CFmask and Fmask algorithms in Google Earth Engine. We then produced high-accuracy land cover classification maps using SR optical data for the year 2000 and fused optical and ALOS SAR data for 2010 and 2015, with overall accuracies of 92.0%, 92.5%, and 91.6%, respectively. The classification results indicated that successive forest gains of 6.52% and 1.03% occurred during the first (2000-2010) and second (2010-2015) study periods, respectively. We also conducted a comparative spatial analysis of patterns of deforestation and reforestation based on a series of forest cover zones (50 × 50 pixels, 150 ha). The annual rates of deforestation and reforestation against forest cover presented similar trends during the two periods: decreasing as forest cover increased. However, the annual net forest change rate differed in the zones with forest cover less than 30%, presenting a significant gain (2.2-8.4% yr-1) for the first period and a significant loss (2.3-6.4% yr-1) for the second period. This indicates that both deforestation and reforestation mostly occurred near forest edges and in low-density secondary forests.

  18. Optically Thin Liquid Water Clouds: Their Importance and Our Challenge

    NASA Technical Reports Server (NTRS)

    Turner, D. D.; Vogelmann, A. M.; Austin, R. T.; Barnard, J. C.; Cady-Pereira, K.; Chiu, J. C.; Clough, S. A.; Flynn, C.; Khaiyer, M. M.; Liljegren, J.

    2006-01-01

    Many of the clouds important to the Earth's energy balance, from the tropics to the Arctic, are optically thin and contain liquid water. Longwave and shortwave radiative fluxes are very sensitive to small perturbations of the cloud liquid water path (LWP) when the liquid water path is small (i.e., < g/sq m) and, thus, the radiative properties of these clouds must be well understood to capture them correctly in climate models. We review the importance of these thin clouds to the Earth's energy balance, and explain the difficulties in observing them. In particular, because these clouds are optically thin, potentially mixed-phase, and often (i.e., have large 3-D variability), it is challenging to retrieve their microphysical properties accurately. We describe a retrieval algorithm intercomparison that was conducted to evaluate the issues involved. The intercomparison included eighteen different algorithms to evaluate their retrieved LWP, optical depth, and effective radii. Surprisingly, evaluation of the simplest case, a single-layer overcast cloud, revealed that huge discrepancies exist among the various techniques, even among different algorithms that are in the same general classification. This suggests that, despite considerable advances that have occurred in the field, much more work must be done, and we discuss potential avenues for future work.

  19. The stellar content of 30 Doradus

    NASA Technical Reports Server (NTRS)

    Walborn, N. R.

    1984-01-01

    The components of the supergiant H II region Tarantula are surveyed, noting that 30 Doradus is really only the most active section of the Large Magellanic Cloud. The region contains at least 40 WR stars and numerous non-H II region late spectral type supergiants. Most of the stars are centrally located and presumably feed on the nebulosity. The closeness of the population will require fine spectroscopic scans of all the members to achieve accurate typing. Although the population is mixed, the ionizing radiation emitted by the region is consistent with its classification as part of the H II region. Finally, the brightest objects within Tarantula are suspected of being multiple systems.

  20. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    PubMed

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
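
    As a hedged illustration of the document-oriented (NoSQL) schema idea (the field names, values and connection URI are invented, not taken from the described database), a pulmonary nodule record could be stored and queried with pymongo roughly as follows.

        from pymongo import MongoClient

        client = MongoClient('mongodb://localhost:27017/')   # placeholder URI
        db = client['nodule_db']

        nodule_doc = {
            'exam_id': 'LIDC-0001',                     # hypothetical identifiers
            'nodule_id': 17,
            'subjective_characteristics': {'malignancy': 3, 'spiculation': 2},
            'texture_attributes': [0.42, 1.93, 0.07],   # 3D texture descriptor values
            'segmentation': {'slices': 12, 'voxels': 4380},
        }
        db.nodules.insert_one(nodule_doc)

        # Query: nodules with a malignancy rating of at least 3.
        for doc in db.nodules.find({'subjective_characteristics.malignancy': {'$gte': 3}}):
            print(doc['exam_id'], doc['nodule_id'])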

  1. 3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis

    NASA Astrophysics Data System (ADS)

    Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin

    2018-03-01

    In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro-tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three different experiments with increasing complexity indicate that a single crowdsourcing task can be solved in a very short time of less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity and precision of 3D crowdsourcing are high for most information extraction problems. For our first experiment (binary classification with a single answer) we obtain an accuracy of 91%, a sensitivity of 95% and a precision of 92%. For the more complex tasks of the second experiment (multiple-answer classification), the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment - the determination of the crown base height of individual trees - our study highlights that crowdsourcing can be a tool to obtain values with even higher accuracy in comparison to an automated computer-based approach. Finally, we found that the accuracy of the crowdsourced results for all experiments is hardly influenced by characteristics of the input point cloud data and of the users. Importantly, the results' accuracy can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.

  2. Land cover change of watersheds in Southern Guam from 1973 to 2001.

    PubMed

    Wen, Yuming; Khosrowpanah, Shahram; Heitz, Leroy

    2011-08-01

    Land cover change can be caused by human-induced activities and natural forces. Land cover change at the watershed level has long been a major concern worldwide, since watersheds play an important role in our life and environment. This paper is focused on how to apply a Landsat Multi-Spectral Scanner (MSS) satellite image from 1973 and a Landsat Thematic Mapper (TM) satellite image from 2001 to determine the land cover changes of coastal watersheds from 1973 to 2001. GIS and remote sensing are integrated to derive land cover information from the Landsat satellite images of 1973 and 2001. The land cover classification is based on the supervised classification method in the remote sensing software ERDAS IMAGINE. Historical GIS data are used to replace the areas covered by clouds or shadows in the image of 1973 to improve classification accuracy. Then, the temporal land cover is utilized to determine the land cover change of coastal watersheds in southern Guam. The overall classification accuracies for the Landsat MSS image of 1973 and the Landsat TM image of 2001 are 82.74% and 90.42%, respectively. The overall classification of the Landsat MSS image is particularly satisfactory considering its coarse spatial resolution and relatively poor data quality because of the many clouds and shadows in the image. Watershed land cover change in southern Guam is affected greatly by anthropogenic activities. However, natural forces also affect land cover in space and time. Land cover information and change in watersheds can be applied to watershed management and planning, and to environmental modeling and assessment. Based on spatio-temporal land cover information, the interaction behavior between humans and the environment may be evaluated. The findings in this research will be useful to similar research in other tropical islands.

  3. Signal and image processing algorithm performance in a virtual and elastic computing environment

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, increases and challenges existing computing infrastructures. Purchasing computer power as a commodity using a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.

  4. Cloud Properties under Different Synoptic Circulations: Comparison of Radiosonde and Ground-Based Active Remote Sensing Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jinqiang; Li, Jun; Xia, Xiangao

    In this study, long-term (10 years) radiosonde-based cloud data are compared with the ground-based active remote sensing product under six prevailing large-scale synoptic patterns, i.e., cyclonic center (CC), weak pressure pattern (WP), the southeast bottom of a cyclonic center (CB), cold front (CF), anticyclone edge (AE) and anticyclone center (AC), over the Southern Great Plains (SGP) site. The synoptic patterns are generated by applying the self-organizing map weather classification method to the daily National Centers for Environmental Prediction mean sea level pressure records from the North American Regional Reanalysis. It reveals that the large-scale synoptic circulations can strongly influence the regional cloud formation, and thereby have an impact on the consistency of cloud retrievals from the radiosonde and ground-based cloud product. The total cloud cover at the SGP site is lowest in AC and highest in CF. The minimum and maximum differences between the two cloud methods are 10.3% for CC and 13.3% for WP. Compared to the synoptic patterns characterized by scattered cloudy and clear skies (AE and AC), the agreement of collocated cloud boundaries between the two cloud approaches tends to be better under the synoptic patterns dominated by overcast and cloudy skies (CC, WP and CB). The rainy and windy weather conditions in the CF synoptic pattern influence the consistency of the two cloud retrieval methods, associated with the limited capabilities inherent to the instruments. The cloud thickness distributions from the two cloud datasets compare favorably with each other in all synoptic patterns, with a relative discrepancy of ≤0.3 km.

  5. Cloud Properties under Different Synoptic Circulations: Comparison of Radiosonde and Ground-Based Active Remote Sensing Measurements

    DOE PAGES

    Zhang, Jinqiang; Li, Jun; Xia, Xiangao; ...

    2016-11-28

    In this study, long-term (10 years) radiosonde-based cloud data are compared with the ground-based active remote sensing product under six prevailing large-scale synoptic patterns, i.e., cyclonic center (CC), weak pressure pattern (WP), the southeast bottom of a cyclonic center (CB), cold front (CF), anticyclone edge (AE) and anticyclone center (AC), over the Southern Great Plains (SGP) site. The synoptic patterns are generated by applying the self-organizing map weather classification method to the daily National Centers for Environmental Prediction mean sea level pressure records from the North American Regional Reanalysis. It reveals that the large-scale synoptic circulations can strongly influence the regional cloud formation, and thereby have an impact on the consistency of cloud retrievals from the radiosonde and ground-based cloud product. The total cloud cover at the SGP site is lowest in AC and highest in CF. The minimum and maximum differences between the two cloud methods are 10.3% for CC and 13.3% for WP. Compared to the synoptic patterns characterized by scattered cloudy and clear skies (AE and AC), the agreement of collocated cloud boundaries between the two cloud approaches tends to be better under the synoptic patterns dominated by overcast and cloudy skies (CC, WP and CB). The rainy and windy weather conditions in the CF synoptic pattern influence the consistency of the two cloud retrieval methods, associated with the limited capabilities inherent to the instruments. The cloud thickness distributions from the two cloud datasets compare favorably with each other in all synoptic patterns, with a relative discrepancy of ≤0.3 km.

  6. To Which Extent can Aerosols Affect Alpine Mixed-Phase Clouds?

    NASA Astrophysics Data System (ADS)

    Henneberg, O.; Lohmann, U.

    2017-12-01

    Aerosol-cloud interactions constitute a large uncertainty in regional climate and changing weather patterns. Such uncertainties are due to the multiple processes that can be triggered by aerosols, especially in mixed-phase clouds. Mixed-phase clouds most likely result in precipitation due to the formation of ice crystals, which can grow to precipitation size. Ice nucleating particles (INPs) determine how fast these clouds glaciate and form precipitation. The potential for INPs to transform supercooled liquid clouds into precipitating clouds depends on the available humidity and supercooled liquid. Those conditions are determined by dynamics. Moderately high updraft velocities result in persistent mixed-phase clouds in the Swiss Alps [1], which provide an ideal testbed to investigate the effect of aerosols on precipitation in mixed-phase clouds. To address the effect of aerosols in orographic winter clouds under different dynamic conditions, we run a number of real-case ensembles with the regional climate model COSMO at a horizontal resolution of 1.1 km. Simulations with different INP concentrations within the range observed at the GAW research station Jungfraujoch in the Swiss Alps are conducted and repeated within the ensemble. Microphysical processes are described with a two-moment scheme. Enhanced INP concentrations enhance the precipitation rate of a single precipitation event by up to 20%. Other precipitation events of similar strength are less affected by the INP concentration. The effect of CCN is negligible for precipitation from orographic winter clouds in our case study. There is evidence that INPs change the precipitation rate and location more effectively in stronger dynamic regimes due to the enhanced potential to transfer supercooled liquid to ice. The classification of the ensemble members according to their dynamics will quantify the interaction of aerosol effects and dynamics. [1] Lohmann et al., 2016: Persistence of orographic mixed-phase clouds, GRL.

  7. Continental land cover classification using meteorological satellite data

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Townshend, J. R. G.; Goff, T. E.

    1983-01-01

    The use of the National Oceanic and Atmospheric Administration's advanced very high resolution radiometer satellite data for classifying land cover and monitoring of vegetation dynamics over an extremely large area is demonstrated for the continent of Africa. Data from 17 imaging periods of 21 consecutive days each were composited by a technique sensitive to the in situ green-leaf biomass to provide cloud-free imagery for the whole continent. Virtually cloud-free images were obtainable even for equatorial areas. Seasonal variation in the density and extent of green leaf vegetation corresponded to the patterns of rainfall associated with the inter-tropical convergence zone. Regional variations, such as the 1982 drought in east Africa, were also observed. Integration of the weekly satellite data with respect to time produced a remotely sensed assessment of biological activity based upon density and duration of green-leaf biomass. Two of the 21-day composited data sets were used to produce a general land cover classification. The resultant land cover distributions correspond well to those of existing maps.
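
    The compositing idea (keeping, for each pixel, the greenest observation in a 21-day window so that cloud-contaminated views are rejected) can be sketched in a few lines of Python; the array shapes and the NaN handling are assumptions for illustration, not the original processing chain.

        import numpy as np

        def max_value_composite(ndvi_stack):
            """ndvi_stack: (n_days, ny, nx) daily NDVI, NaN where no valid observation."""
            return np.nanmax(ndvi_stack, axis=0)   # clouds depress NDVI, so the maximum is likely clear

        rng = np.random.default_rng(0)
        daily = rng.uniform(0.0, 0.8, size=(21, 4, 4))     # one 21-day window of NDVI
        daily[rng.random(daily.shape) < 0.3] = np.nan      # simulate cloudy/missing days
        print(max_value_composite(daily).round(2))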

  8. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from the terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification method. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how different scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on the classification accuracy and efficiency, we extracted the single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted the classification on leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10 % for both trees. The average speed-up ratio of single scale classifiers over multi-scale classifier for each tree is higher than 30.
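
    A hedged sketch of the feature-extraction step is given below: eigenvalue-based linearity, planarity and sphericity of each point's neighbourhood, computed at one or several neighbourhood sizes k and stacked for the multi-scale case. The feature set and the k values are common geometric conventions assumed for illustration, not necessarily those used in the study.

        import numpy as np
        from scipy.spatial import cKDTree

        def geometric_features(points, k):
            """Per-point eigenvalue features from the k nearest neighbours."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            feats = np.empty((len(points), 3))
            for i, nbrs in enumerate(idx):
                w = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1]
                w = np.maximum(w, 1e-12)
                feats[i] = [(w[0] - w[1]) / w[0],   # linearity (branch/stem-like)
                            (w[1] - w[2]) / w[0],   # planarity
                            w[2] / w[0]]            # sphericity (leaf-like)
            return feats

        def multiscale_features(points, scales=(10, 30, 60)):
            return np.hstack([geometric_features(points, k) for k in scales])

        pts = np.random.default_rng(0).normal(size=(500, 3))   # stand-in for a TLS point cloud
        print(multiscale_features(pts).shape)                  # (500, 9): 3 features x 3 scales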

  9. Improving Scene Classifications with Combined Active/Passive Measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Rodier, S.; Vaughan, M.; McGill, M.

    The uncertainties in cloud and aerosol physical properties derived from passive instruments such as MODIS are not insignificant, and the uncertainty increases when the optical depths decrease. Lidar observations do much better for thin clouds and aerosols. Unfortunately, space-based lidar measurements, such as the one onboard the CALIPSO satellite, are limited to a nadir view only and thus have limited spatial coverage. To produce climatologically meaningful thin cloud and aerosol data products, it is necessary to combine the spatial coverage of MODIS with the highly sensitive CALIPSO lidar measurements. Can we improve the quality of cloud and aerosol remote sensing data products by extending the knowledge about thin clouds and aerosols learned from CALIPSO-type lidar measurements to a larger portion of the off-nadir MODIS-like multi-spectral pixels? To answer this question, we studied collocated Cloud Physics Lidar (CPL) and MODIS Airborne Simulator (MAS) observations and established an effective data fusion technique that will be applied in the combined CALIPSO-MODIS cloud/aerosol product algorithms. This technique performs k-means and Kohonen self-organizing map cluster analyses on the entire swath of MAS data as well as on the combined CPL-MAS data at the nadir track. Interestingly, the clusters generated from the two approaches are almost identical. This indicates that the MAS multi-spectral data may have already captured most of the cloud and aerosol scene types, such as cloud ice/water phase, multi-layer information, and aerosols.
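
    The clustering comparison can be illustrated with a small sketch: k-means is run once on the multi-spectral pixels of a whole swath and once on the nadir pixels with lidar-derived quantities appended, and the two labelings are compared along the nadir track. Channel counts, cluster numbers and the synthetic data are assumptions, not the MAS/CPL configuration.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        swath = rng.normal(size=(5000, 8))     # pixels x multi-spectral channels
        nadir = swath[:500]                    # pretend the first 500 pixels lie on the nadir track
        lidar = rng.normal(size=(500, 2))      # e.g. layer top height, integrated backscatter

        labels_swath = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(swath)
        labels_fused = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
            np.hstack([nadir, lidar]))

        # If the passive channels already capture most scene types, the two labelings
        # should agree (up to a permutation of cluster labels) on the nadir track.
        print(labels_swath[:10], labels_fused[:10])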

  10. Segmentation and classification of road markings using MLS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2017-01-01

    Traffic signs are one of the most important safety elements in a road network. Particularly, road markings provide information about the limits and direction of each road lane, or warn the drivers about potential danger. The optimal condition of road markings contributes to a better road safety. Mobile Laser Scanning technology can be used for infrastructure inspection and specifically for traffic sign detection and inventory. This paper presents a methodology for the detection and semantic characterization of the most common road markings, namely pedestrian crossings and arrows. The 3D point cloud data acquired by a LYNX Mobile Mapper system is filtered in order to isolate reflective points in the road, and each single element is hierarchically classified using Neural Networks. State of the art results are obtained for the extraction and classification of the markings, with F-scores of 94% and 96% respectively. Finally, data from classified markings are exported to a GIS layer and maintenance criteria based on the aforementioned data are proposed.
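
    A minimal sketch of the two-step idea follows: reflective road points are isolated with an intensity threshold and candidate marking segments are then classified with a small neural network. The per-segment descriptors, the threshold and the synthetic training data are hypothetical placeholders, not the paper's actual features or classifier architecture.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def reflective_points(points, intensity, threshold=0.8):
            """points: (n, 3) XYZ; intensity: (n,) normalised reflectance in [0, 1]."""
            return points[intensity > threshold]

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 4))     # hypothetical descriptors: length, width, area, density
        y_train = rng.integers(0, 2, size=200)  # 0 = arrow, 1 = pedestrian crossing (toy labels)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        print(clf.predict(rng.normal(size=(5, 4))))   # classify five new candidate segments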

  11. A Framework for Real-Time Collection, Analysis, and Classification of Ubiquitous Infrasound Data

    NASA Astrophysics Data System (ADS)

    Christe, A.; Garces, M. A.; Magana-Zook, S. A.; Schnurr, J. M.

    2015-12-01

    Traditional infrasound arrays are generally expensive to install and maintain. There are ~10^3 infrasound channels on Earth today. The amount of data currently provided by legacy architectures can be processed on a modest server. However, the growing availability of low-cost, ubiquitous, and dense infrasonic sensor networks presents a substantial increase in the volume, velocity, and variety of data flow. Initial data from a prototype ubiquitous global infrasound network is already pushing the boundaries of traditional research server and communication systems, in particular when serving data products over heterogeneous, international network topologies. We present a scalable, cloud-based approach for capturing and analyzing large amounts of dense infrasonic data (>10^6 channels). We utilize Akka actors with WebSockets to maintain data connections with infrasound sensors. Apache Spark provides streaming, batch, machine learning, and graph processing libraries which will permit signature classification, cross-correlation, and other analytics in near real time. This new framework and approach provide significant advantages in scalability and cost.

  12. Midlatitude cirrus classification at Rome Tor Vergata through a multichannel Raman-Mie-Rayleigh lidar

    NASA Astrophysics Data System (ADS)

    Dionisi, D.; Keckhut, P.; Liberti, G. L.; Cardillo, F.; Congeduti, F.

    2013-12-01

    A methodology to identify and characterize cirrus clouds has been developed and applied to the multichannel-multiwavelength Rayleigh-Mie-Raman (RMR) lidar in Rome Tor Vergata (RTV). A set of 167 cirrus cases, defined on the basis of quasi-stationary temporal period conditions, has been selected in a data set consisting of about 500 h of nighttime lidar sessions acquired between February 2007 and April 2010. The derived lidar parameters (effective height, geometrical and optical thickness and mean back-scattering ratio) and the cirrus mid-height temperature (estimated from the radiosonde data of Pratica di Mare, WMO (World Meteorological Organization) site no. 16245) of this sample have been analyzed by means of a multivariate clustering analysis. This approach identified four cirrus classes above the RTV site: two thin cirrus clusters in the mid- and upper troposphere and two thick cirrus clusters in the mid-upper troposphere. These results, which are very similar to those derived through the same approach at the lidar site of the Observatoire de Haute-Provence (OHP), allow characterization of cirrus clouds over the RTV site and attest to the robustness of such a classification. To acquire some indications about the cirrus generation mechanisms for the different classes, analyses of the extinction-to-backscatter ratio (lidar ratio, LReff), in terms of frequency distribution functions and dependencies on the mid-height cirrus temperature, have been performed. A preliminary study relating some meteorological parameters (e.g., relative humidity, wind components) to the cirrus clusters has also been conducted. The RTV cirrus results, recomputed through the cirrus classification by Sassen and Cho (1992), show good agreement with other midlatitude lidar cirrus observations for the relative occurrence of subvisible (SVC), thin and opaque cirrus classes (10%, 49% and 41%, respectively). The overall mean value of cirrus optical depth is 0.37 ± 0.18, while most retrieved LReff values range between 10 and 60 sr, and the estimated mean value is 31 ± 15 sr, similar to LR values of lower-latitude cirrus measurements. The obtained results are consistent with previous studies conducted with different systems and confirm that cirrus classification based on a statistical approach seems to be a good tool both to validate the height-resolved cirrus fields calculated by models and to investigate the key processes governing cirrus formation and evolution. However, the lidar ratio and optical depth analyses are affected by some uncertainties (e.g., lidar error noise, multiple scattering effects, supercooled water clouds) that reduce the confidence of the results. Future studies are needed to improve the characterization of the cirrus optical properties and, thus, the determination of their radiative impact.

  13. GEWEX Cloud Systems Study (GCSS)

    NASA Technical Reports Server (NTRS)

    Moncrieff, Mitch

    1993-01-01

    The Global Energy and Water Cycle Experiment (GEWEX) Cloud Systems Study (GCSS) program seeks to improve the physical understanding of sub-grid scale cloud processes and their representation in parameterization schemes. By improving the description and understanding of key cloud system processes, GCSS aims to develop the necessary parameterizations in climate and numerical weather prediction (NWP) models. GCSS will address these issues mainly through the development and use of cloud-resolving or cumulus ensemble models to generate realizations of a set of archetypal cloud systems. The focus of GCSS is on mesoscale cloud systems, including precipitating convectively-driven cloud systems like MCS's and boundary layer clouds, rather than individual clouds, and on their large-scale effects. Some of the key scientific issues confronting GCSS that particularly relate to research activities in the central U.S. are presented.

  14. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
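
    One of the listed operations, nearest-neighbour-based outlier detection, is easy to sketch: objects whose mean distance to their k nearest neighbours in feature space is largest are flagged as candidates for follow-up. The feature columns below are synthetic placeholders rather than 2MASS photometry.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        features = rng.normal(size=(10000, 5))     # e.g. magnitudes and colours per object
        features[:10] += 8.0                       # inject ten artificially extreme objects

        nn = NearestNeighbors(n_neighbors=11).fit(features)
        dist, _ = nn.kneighbors(features)
        score = dist[:, 1:].mean(axis=1)           # column 0 is the zero self-distance
        print(sorted(np.argsort(score)[-10:]))     # should recover indices 0..9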

  15. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Astronomy Data Centre, Canadian

    2014-01-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors, and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.

  16. Best practices for implementing, testing and using a cloud-based communication system in a disaster situation.

    PubMed

    Makowski, Dale

    2016-01-01

    This paper sets out the basics for approaching the selection and implementation of a cloud-based communication system to support a business continuity programme, including: • consideration for how a cloud-based communication system can enhance a business continuity programme; • descriptions of some of the more popular features of a cloud-based communication system; • options to evaluate when selecting a cloud-based communication system; • considerations for how to design a system to be most effective for an organisation; • best practices for how to conduct the initial load of data to a cloud-based communication system; • best practices for how to conduct an initial validation of the data loaded to a cloud-based communication system; • considerations for how to keep contact information in the cloud-based communication system current and accurate; • best practices for conducting ongoing system testing; • considerations for how to conduct user training; • review of other potential uses of a cloud-based communication system; and • review of other tools and features many cloud-based communication systems may offer.

  17. Satellite Data Analysis of Impact of Anthropogenic Air Pollution on Ice Clouds

    NASA Astrophysics Data System (ADS)

    Gu, Y.; Liou, K. N.; Zhao, B.; Jiang, J. H.; Su, H.

    2017-12-01

    Despite numerous studies about the impact of aerosols on ice clouds, the role of anthropogenic aerosols in ice processes, especially over pollution regions, remains unclear and controversial, and has not been considered in a regional model. The objective of this study is to improve our understanding of the ice process associated with anthropogenic aerosols, and provide a comprehensive assessment of the contribution of anthropogenic aerosols to ice nucleation, ice cloud properties, and the consequent regional radiative forcing. As the first attempt, we evaluate the effects of different aerosol types (mineral dust, air pollution, polluted dust, and smoke) on ice cloud micro- and macro-physical properties using satellite data. We identify cases with collocated CloudSat, CALIPSO, and Aqua observations of vertically resolved aerosol and cloud properties, and process these observations into the same spatial resolution. The CALIPSO's aerosol classification algorithm determines aerosol layers as one of six defined aerosol types by taking into account the lidar depolarization ratio, integrated attenuated backscattering, surface type, and layer elevation. We categorize the cases identified above according to aerosol types, collect relevant aerosol and ice cloud variables, and determine the correlation between column/layer AOD and ice cloud properties for each aerosol type. Specifically, we investigate the correlation between aerosol loading (indicated by the column AOD and layer AOD) and ice cloud microphysical properties (ice water content, ice crystal number concentration, and ice crystal effective radius) and macro-physical properties (ice water path, ice cloud fraction, cloud top temperature, and cloud thickness). By comparing the responses of ice cloud properties to aerosol loadings for different aerosol types, we infer the role of different aerosol types in ice nucleation and the evolution of ice clouds. Our preliminary study shows that changes in the ice crystal effective radius with respect to AOD over Eastern Asia for the aerosol types of polluted continental and mineral dust look similar, implying that both air pollution and mineral dust could affect the microphysical properties of ice clouds.
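
    The core of the analysis (correlating aerosol loading with collocated ice-cloud properties, separately for each CALIPSO aerosol type) can be sketched as a grouped correlation. The column names and the synthetic relationship below are assumptions for illustration only.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "aerosol_type": rng.choice(["dust", "polluted_continental", "smoke"], n),
            "aod": rng.gamma(2.0, 0.1, n),         # stand-in column aerosol optical depth
        })
        # toy response: effective radius shrinks slightly with AOD for the polluted types
        df["ice_reff_um"] = 30 - 5 * df["aod"] * (df["aerosol_type"] != "dust") + rng.normal(0, 2, n)

        for atype, grp in df.groupby("aerosol_type"):
            r = grp["aod"].corr(grp["ice_reff_um"])
            print(f"{atype:22s} AOD vs ice r_eff: r = {r:+.2f} (n = {len(grp)})")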

  18. A method for quantifying cloud immersion in a tropical mountain forest using time-lapse photography

    USGS Publications Warehouse

    Bassiouni, Maoya; Scholl, Martha A.; Torres-Sanchez, Angel J.; Murphy, Sheila F.

    2017-01-01

    Quantifying the frequency, duration, and elevation range of fog or cloud immersion is essential to estimate cloud water deposition in water budgets and to understand the ecohydrology of cloud forests. The goal of this study was to develop a low-cost and high spatial-coverage method to detect occurrence of cloud immersion within a mountain cloud forest by using time-lapse photography. Trail cameras and temperature/relative humidity sensors were deployed at five sites covering the elevation range from the assumed lifting condensation level to the mountain peaks in the Luquillo Mountains of Puerto Rico. Cloud-sensitive image characteristics (contrast, the coefficient of variation and the entropy of pixel luminance, and image colorfulness) were used with a k-means clustering approach to accurately detect cloud-immersed conditions in a time series of images from March 2014 to May 2016. Images provided hydrologically meaningful cloud-immersion information while temperature-relative humidity data were used to refine the image analysis using dew point information and provided temperature gradients along the elevation transect. Validation of the image processing method with human-judgment based classification generally indicated greater than 90% accuracy. Cloud-immersion frequency averaged 80% at sites above 900 m during nighttime hours and 49% during daytime hours, and was consistent with diurnal patterns of cloud immersion measured in a previous study. Results for the 617 m site demonstrated that cloud immersion in the Luquillo Mountains rarely occurs at the previously-reported cloud base elevation of about 600 m (11% during nighttime hours and 5% during daytime hours). The framework presented in this paper will be used to monitor at a low cost and high spatial resolution the long-term variability of cloud-immersion patterns in the Luquillo Mountains, and can be applied to ecohydrology research at other cloud-forest sites or in coastal ecosystems with advective sea fog.
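
    The image-feature step lends itself to a short sketch: the four cloud-sensitive statistics named above are computed per photograph and the series is split into two k-means clusters (immersed versus not immersed). The concrete feature formulas below are common conventions assumed for illustration and may differ in detail from the paper's definitions.

        import numpy as np
        from sklearn.cluster import KMeans

        def image_features(rgb):
            """rgb: (H, W, 3) array with values in [0, 1]."""
            lum = rgb.mean(axis=2)
            contrast = lum.max() - lum.min()
            cv = lum.std() / (lum.mean() + 1e-9)                          # coefficient of variation
            p, _ = np.histogram(lum, bins=64, range=(0, 1), density=False)
            p = p / max(p.sum(), 1)
            entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
            rg = rgb[..., 0] - rgb[..., 1]
            yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
            colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
            return [contrast, cv, entropy, colorfulness]

        rng = np.random.default_rng(0)
        clear = rng.uniform(0, 1, size=(20, 32, 32, 3))                               # colourful, high-contrast frames
        foggy = np.clip(0.7 + 0.05 * rng.standard_normal((20, 32, 32, 3)), 0, 1)      # grey, low-contrast frames
        feats = np.array([image_features(im) for im in np.concatenate([clear, foggy])])
        print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats))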

  19. Recommended Values of Meteorological Factors to Be Considered in the Design of Aircraft Ice-Prevention Equipment

    NASA Technical Reports Server (NTRS)

    Jones, Alun R; Lewis, William

    1949-01-01

    Meteorological conditions conducive to aircraft icing are arranged in four classifications: three are associated with cloud structure and the fourth with freezing rain. The range of possible meteorological factors for each classification is discussed and specific values recommended for consideration in the design of ice-prevention equipment for aircraft are selected and tabulated. The values selected are based upon a study of the available observational data and theoretical considerations where observations are lacking. Recommendations for future research in the field are presented.

  20. Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results

    NASA Astrophysics Data System (ADS)

    Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc

    2013-12-01

    Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour and Scene Classification maps; and Quality indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.

  1. A Survey on Personal Data Cloud

    PubMed Central

    Wang, Jiaqiu; Wang, Zhongjie

    2014-01-01

    Personal data represent the e-history of a person and are of great significance to the person, but they are essentially produced and governed by various distributed services, and a global, centralized view of them is lacking. In recent years, researchers have paid attention to the Personal Data Cloud (PDC), which aggregates the heterogeneous personal data scattered across different clouds into one cloud, so that a person can effectively store, acquire, and share their data. This paper makes a short survey of PDC research by summarizing related papers published in recent years. The concept, classification, and significance of personal data are elaborately introduced, and then the semantic correlation and semantic representation of personal data are discussed. A multilayer reference architecture of PDC, including its core components and a real-world operational scenario showing how the reference architecture works, is introduced in detail. Existing commercial PDC products/prototypes are listed and compared from several perspectives. Five open issues to improve the shortcomings of current PDC research are put forward. PMID:25165753

  2. Synthetic Aperture Radar (SAR)-based paddy rice monitoring system: Development and application in key rice producing areas in Tropical Asia

    NASA Astrophysics Data System (ADS)

    Setiyono, T. D.; Holecz, F.; Khan, N. I.; Barbieri, M.; Quicho, E.; Collivignarelli, F.; Maunahan, A.; Gatti, L.; Romuga, G. C.

    2017-01-01

    Reliable and regular rice information is an essential part of many countries' national accounting process, but existing systems may not be sufficient to meet the information demand in the context of food security and policy. Synthetic Aperture Radar (SAR) imagery is highly suitable for detecting lowland paddy rice, especially in tropical regions where pervasive cloud cover in the rainy seasons limits the use of optical imagery. This study uses multi-temporal X-band and C-band SAR imagery, automated image processing, rule-based classification and field observations to classify rice in multiple locations across Tropical Asia and assimilate the information into the ORYZA crop growth simulation model (CGSM) to generate high-resolution yield maps. The resulting cultivated rice area maps had classification accuracies above 85%, and yield estimates were within 81-93% agreement with district-level reported yields. The study sites capture much of the diversity in water management, crop establishment and rice maturity durations, and the study demonstrates the feasibility of rice detection, yield monitoring, and damage assessment in the case of climate disasters at national and supra-national scales using multi-temporal SAR imagery combined with a CGSM and automated methods.

  3. A Climatology of Polar Stratospheric Cloud Types by MIPAS-Envisat

    NASA Astrophysics Data System (ADS)

    Spang, Reinhold; Hoffmann, Lars; Griessbach, Sabine; Orr, Andrew; Höpfner, Michael; Müller, Rolf

    2015-04-01

    For Chemistry Climate Models (CCMs) it is still a challenging task to properly represent the evolution of the polar vortices over the entire winter season. The models usually do not include comprehensive microphysical modules to evolve the formation of different types of polar stratospheric clouds (PSCs) over the winter. Consequently, predictions of the development and recovery of the future ozone hole have relatively large uncertainties. A climatological record of hemispheric measurements of PSC types could help to better validate and improve the PSC schemes in CCMs. The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) instrument onboard the ESA Envisat satellite operated from July 2002 to April 2012. The infrared limb emission measurements compile a unique dataset of day and night measurements of polar stratospheric clouds up to the poles. From the spectral measurements in the 4.15-14.6 micron range it is possible to select a number of atmospheric window regions and spectral signatures to classify PSC types like nitric acid hydrates, sulfuric ternary solution droplets, and ice particles. The cloud detection sensitivity is similar to space-borne lidars, but MIPAS adds complementary information due to its different measurement technique (limb instead of nadir) and wavelength region. Here we describe a new classification method for PSCs based on the combination of multiple brightness temperature differences (BTDs) and colour ratios. Probability density functions (PDFs) of the MIPAS measurements, in conjunction with a database of radiative transfer model calculations of realistic PSC particle size distributions, enable the definition of regions attributed to specific or mixed cloud types. Applying a naive Bayes classifier for independent criteria to all defined classes in four 2D PDF distributions, it is possible to assign the most likely PSC type to any measured cloud spectrum. Statistical Monte Carlo tests have been applied to quantify uncertainties and the sensitivity of the approach to a priori information. The processing of the complete MIPAS data set of almost 10 years of PSC observations with a first version of the new classification approach is completed. Results for various northern and southern hemisphere winters will be presented. The temporal evolution of the PSC types with respect to the temporal development of the meteorological conditions of the polar vortex, as well as comparisons with space- and ground-based lidar measurements, will be investigated.
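
    A hedged sketch of the classification step: if the brightness-temperature differences (BTDs) and colour ratios are treated as features with class-conditional distributions learned from labelled spectra, a (Gaussian) naive Bayes classifier returns the most probable PSC type together with class probabilities. The class names follow the abstract; the two-feature synthetic data are stand-ins, not MIPAS radiances or the authors' PDFs.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        centers = {"NAT": [2.0, 0.5],    # nitric acid hydrate
                   "STS": [0.5, 1.5],    # sulfuric ternary solution droplets
                   "ICE": [-1.0, -0.5]}  # ice particles
        X = np.vstack([rng.normal(c, 0.7, size=(200, 2)) for c in centers.values()])
        y = np.repeat(list(centers.keys()), 200)

        clf = GaussianNB().fit(X, y)
        probs = clf.predict_proba([[1.8, 0.4]])[0]   # one measured BTD / colour-ratio pair
        for cls, p in zip(clf.classes_, probs):
            print(f"P({cls}) = {p:.2f}")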

  4. Spectral Lidar Analysis and Terrain Classification in a Semi-Urban Environment

    DTIC Science & Technology

    2017-03-01


  5. A Method to Estimate Sunshine Duration Using Cloud Classification Data from a Geostationary Meteorological Satellite (FY-2D) over the Heihe River Basin.

    PubMed

    Wu, Bingfang; Liu, Shufu; Zhu, Weiwei; Yu, Mingzhao; Yan, Nana; Xing, Qiang

    2016-11-04

    Sunshine duration is an important variable that is widely used in atmospheric energy balance studies, analysis of the thermal loadings on buildings, climate research, and the evaluation of agricultural resources. In most cases, it is calculated using an interpolation method based on regional-scale meteorological data from field stations. Accurate values in the field are difficult to obtain without ground measurements. In this paper, a satellite-based method to estimate sunshine duration is introduced and applied over the Heihe River Basin. This method is based on hourly cloud classification product data from the FY-2D geostationary meteorological satellite (FY-2D). A new index, the FY-2D cloud type sunshine factor, is proposed, and the Shuffled Complex Evolution Algorithm (SCE-UA) was used to calibrate sunshine factors from different coverage types based on ground measurement data from the Heihe River Basin in 2007. The estimated sunshine duration from the proposed new algorithm was validated with ground observation data for 12 months in 2008, and the spatial distribution was compared with the results of an interpolation method over the Heihe River Basin. The study demonstrates that geostationary satellite data can be used to successfully estimate sunshine duration. Potential applications include climate research, energy balance studies, and global estimations of evapotranspiration.
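
    The estimation idea reduces to a simple accumulation: each hourly FY-2D cloud-type observation contributes a calibrated "sunshine factor" (the fraction of that hour with bright sunshine), and the daily sunshine duration is their sum over daylight hours. The factor values in this sketch are made-up placeholders; in the paper they are calibrated per cloud type with the SCE-UA algorithm against station data.

        # hypothetical sunshine factors per cloud classification (fraction of the hour)
        sunshine_factor = {
            "clear": 0.95,
            "thin_cirrus": 0.70,
            "thick_cirrus": 0.40,
            "stratocumulus": 0.20,
            "cumulonimbus": 0.05,
        }

        def daily_sunshine_hours(hourly_cloud_types):
            """Sum calibrated factors over the daylight hours of one day."""
            return sum(sunshine_factor.get(ct, 0.0) for ct in hourly_cloud_types)

        day = ["clear"] * 4 + ["thin_cirrus"] * 3 + ["stratocumulus"] * 3 + ["clear"] * 2
        print(f"estimated sunshine duration: {daily_sunshine_hours(day):.1f} h")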

  6. A Method to Estimate Sunshine Duration Using Cloud Classification Data from a Geostationary Meteorological Satellite (FY-2D) over the Heihe River Basin

    PubMed Central

    Wu, Bingfang; Liu, Shufu; Zhu, Weiwei; Yu, Mingzhao; Yan, Nana; Xing, Qiang

    2016-01-01

    Sunshine duration is an important variable that is widely used in atmospheric energy balance studies, analysis of the thermal loadings on buildings, climate research, and the evaluation of agricultural resources. In most cases, it is calculated using an interpolation method based on regional-scale meteorological data from field stations. Accurate values in the field are difficult to obtain without ground measurements. In this paper, a satellite-based method to estimate sunshine duration is introduced and applied over the Heihe River Basin. This method is based on hourly cloud classification product data from the FY-2D geostationary meteorological satellite (FY-2D). A new index—FY-2D cloud type sunshine factor—is proposed, and the Shuffled Complex Evolution Algorithm (SCE-UA) was used to calibrate sunshine factors from different coverage types based on ground measurement data from the Heihe River Basin in 2007. The estimated sunshine duration from the proposed new algorithm was validated with ground observation data for 12 months in 2008, and the spatial distribution was compared with the results of an interpolation method over the Heihe River Basin. The study demonstrates that geostationary satellite data can be used to successfully estimate sunshine duration. Potential applications include climate research, energy balance studies, and global estimations of evapotranspiration. PMID:27827935

  7. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
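
    A proof-of-concept sketch of the probabilistic phase classification follows: a Gaussian naive Bayes classifier trained on labelled observations returns a probability for every phase class, which is the kind of uncertainty information the abstract emphasises (the study's actual Bayesian formulation and radar/lidar features are richer). The feature set and synthetic training data are illustrative assumptions.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        # assumed features: [reflectivity (dBZ), Doppler spectrum skewness, log lidar backscatter]
        centers = {"liquid": [-25.0, 0.0, 2.0],
                   "ice":    [  0.0, 0.5, 0.5],
                   "mixed":  [-10.0, 1.0, 1.5]}
        X = np.vstack([rng.normal(c, [3.0, 0.3, 0.4], size=(300, 3)) for c in centers.values()])
        y = np.repeat(list(centers.keys()), 300)

        model = GaussianNB().fit(X, y)
        obs = [[-12.0, 0.8, 1.4]]                         # one new profile bin
        for phase, p in zip(model.classes_, model.predict_proba(obs)[0]):
            print(f"P({phase} | obs) = {p:.2f}")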

  8. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community but also by the original authors themselves.

  9. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  10. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature.

    PubMed

    Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-05-04

    Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present the Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy.

  11. Surface spectral emissivity derived from MODIS data

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Sun-Mack, Sunny; Minnis, Patrick; Smith, William L.; Young, David F.

    2003-04-01

    Surface emissivity is essential for many remote sensing applications including the retrieval of the surface skin temperature from satellite-based infrared measurements, determining thresholds for cloud detection and for estimating the emission of longwave radiation from the surface, an important component of the energy budget of the surface-atmosphere interface. In this paper, data from the Terra MODIS (MODerate-resolution Imaging Spectroradiometer) taken at 3.7, 8.5, 10.8, 12.0 micron are used to simultaneously derive the skin temperature and the surface emissivities at the same wavelengths. The methodology uses separate measurements of the clear-sky temperatures that are determined by the CERES (Clouds and Earth's Radiant Energy System) scene classification in each channel during the daytime and at night. The relationships between the various channels at night are used during the day when solar reflectance affects the 3.7 micron data. A set of simultaneous equations is then solved to derive the emissivities. Global results are derived from MODIS. Numerical weather analyses are used to provide soundings for correcting the observed radiances for atmospheric absorption. These results are verified and will be available for remote sensing applications.

  12. An Online 3D Database System for Endangered Architectural and Archaeological Heritage in the South-Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Abate, D.; Avgousti, A.; Faka, M.; Hermon, S.; Bakirtzis, N.; Christofi, P.

    2017-10-01

    This study compares the performance of aerial image based point clouds (IPCs) and light detection and ranging (LiDAR) based point clouds in the detection of thinnings and clear cuts in forests. IPCs are an appealing method for updating forest resource data because of their accuracy in forest height estimation and the cost-efficiency of aerial image acquisition. We predicted forest changes over a period of three years by creating difference layers that displayed the difference in height or volume between the initial and subsequent time points. Both IPCs and LiDAR data were used in this process. The IPCs were constructed with the Semi-Global Matching (SGM) algorithm. Difference layers were constructed by calculating differences in fitted height or volume models or in canopy height models (CHMs) from both time points. The LiDAR-derived digital terrain model (DTM) was used to scale heights to above ground level. The study area was classified in logistic regression into the categories ClearCut, Thinning or NoChange with the values from the difference layers. We compared the predicted changes with the true changes verified in the field and obtained at best a classification accuracy of 93.1 % for clear cuts with IPCs and 91.7 % with LiDAR data. However, the classification accuracy for thinnings was only 8.0 % with IPCs. With LiDAR data, 41.4 % of thinnings were detected. In conclusion, the LiDAR data proved to be a more accurate method for predicting minor changes in forests than IPCs, but both methods are useful in the detection of major changes.

  13. Full-polarization radar remote sensing and data mining for tropical crops mapping: a successful SVM-based classification model

    NASA Astrophysics Data System (ADS)

    Denize, J.; Corgne, S.; Todoroff, P.; LE Mezo, L.

    2015-12-01

    In Reunion, a tropical island of 2,512 km² located 700 km east of Madagascar in the Indian Ocean and constrained by a rugged relief, agricultural sectors are competing for highly fragmented agricultural land made up of heterogeneous farming systems ranging from corporate to small-scale farming. Policymakers, planners and institutions are in dire need of reliable and updated land use references. Conventional land use mapping methods are inefficient in the tropics, with frequent cloud cover and loosely synchronous vegetative cycles of the crops due to a constant temperature. This study aims to provide an appropriate method for the identification and mapping of tropical crops by remote sensing. For this purpose, we assess the potential of polarimetric SAR imagery associated with machine learning algorithms. The method has been developed and tested on a study area of 25*25 km using six full-polarization RADARSAT-2 images acquired in 2014. A set of radar indicators (backscatter coefficient, band ratios, indices, polarimetric decompositions (Freeman-Durden, van Zyl, Yamaguchi, Cloude and Pottier, Krogager), texture, etc.) was calculated from the coherency matrix. A random forest procedure allowed the selection of the most important variables in each image to reduce the dimension of the dataset and the processing time. Support Vector Machines (SVM) then allowed the classification of these indicators based on a learning database created from field observations in 2013. The method shows an overall accuracy of 88% with a Kappa index of 0.82 for the identification of four major crops.
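
    The two-stage pipeline (random forest for variable selection, SVM for classification) can be sketched with scikit-learn; the number of indicators, the number of retained variables and the toy labels below are illustrative assumptions, not the study's dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 40))                             # 40 polarimetric indicators
        y = (X[:, 3] + 0.8 * X[:, 17] - X[:, 25] > 0).astype(int)  # toy crop signal
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        top = np.argsort(rf.feature_importances_)[::-1][:10]       # keep the 10 best indicators

        svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr[:, top], y_tr)
        pred = svm.predict(X_te[:, top])
        print("overall accuracy:", round(accuracy_score(y_te, pred), 2),
              "kappa:", round(cohen_kappa_score(y_te, pred), 2))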

  14. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
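
    The contrast between the two decision rules can be made concrete with a toy example: a mean-square-distance (MSD) rule applied to normalised signals versus a Gaussian maximum-likelihood (MLH) rule that needs a covariance inverse and determinant per class. The spectral values are synthetic; only the relative computational structure of the two rules is being illustrated.

        import numpy as np

        def msd_classify(x, class_means):
            """Assign x to the class mean with the smallest mean-square distance."""
            return int(np.argmin([((x - m) ** 2).sum() for m in class_means]))

        def mlh_classify(x, class_means, class_covs):
            """Gaussian maximum likelihood: one inverse and determinant per class."""
            scores = []
            for m, c in zip(class_means, class_covs):
                d = x - m
                scores.append(-0.5 * (np.log(np.linalg.det(c)) + d @ np.linalg.inv(c) @ d))
            return int(np.argmax(scores))

        def normalise(x):
            return x / x.sum()        # simple signal normalisation (relative band shape)

        means = [np.array([0.2, 0.4, 0.6, 0.3]), np.array([0.5, 0.5, 0.2, 0.4])]
        covs = [0.01 * np.eye(4), 0.02 * np.eye(4)]
        sample = np.random.default_rng(0).multivariate_normal(means[1], covs[1])
        print("MSD class:", msd_classify(normalise(sample), [normalise(m) for m in means]))
        print("MLH class:", mlh_classify(sample, means, covs))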

  15. Reflections on current and future applications of multiangle imaging to aerosol and cloud remote sensing

    NASA Astrophysics Data System (ADS)

    Diner, David

    2010-05-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global Earth data from NASA's Terra satellite since February 2000. With its 9 along-track view angles, 4 spectral bands, intrinsic spatial resolution of 275 m, and stable radiometric and geometric calibration, no instrument that combines MISR's attributes has previously flown in space, nor is there a similar capability currently available on any other satellite platform. Multiangle imaging offers several tools for remote sensing of aerosol and cloud properties, including bidirectional reflectance and scattering measurements, stereoscopic pattern matching, time lapse sequencing, and potentially, optical tomography. Current data products from MISR employ several of these techniques. Observations of the intensity of scattered light as a function of view angle and wavelength provide accurate measures of aerosol optical depths (AOD) over land, including bright desert and urban source regions. Partitioning of AOD according to retrieved particle classification and incorporation of height information improves the relationship between AOD and surface PM2.5 (fine particulate matter, a regulated air pollutant), constituting an important step toward a satellite-based particulate pollution monitoring system. Stereoscopic cloud-top heights provide a unique metric for detecting interannual variability of clouds and exceptionally high quality and sensitivity for detection and height retrieval for low-level clouds. Using the several-minute time interval between camera views, MISR has enabled a pole-to-pole, height-resolved atmospheric wind measurement system. Stereo imagery also makes possible global measurement of the injection heights and advection speeds of smoke plumes, volcanic plumes, and dust clouds, for which a large database is now available. To build upon what has been learned during the first decade of MISR observations, we are evaluating algorithm updates that not only refine retrieval accuracies but also include enhancements (e.g., finer spatial resolution) that would have been computationally prohibitive just ten years ago. In addition, we are developing technological building blocks for future sensors that enable broader spectral coverage, wider swath, and incorporation of high-accuracy polarimetric imaging. Prototype cameras incorporating photoelastic modulators have been constructed. To fully capitalize on the rich information content of the current and next-generation of multiangle imagers, several algorithmic paradigms currently employed need to be re-examined, e.g., the use of aerosol look-up tables, neglect of 3-D effects, and binary partitioning of the atmosphere into "cloudy" or "clear" designations. Examples of progress in algorithm and technology developments geared toward advanced application of multiangle imaging to remote sensing of aerosols and clouds will be presented.

  16. Life Cycle of Midlatitude Deep Convective Systems in a Lagrangian Framework

    NASA Technical Reports Server (NTRS)

    Feng, Zhe; Dong, Xiquan; Xie, Baike; McFarlane, Sally A.; Kennedy, Aaron; Lin, Bing; Minnis, Patrick

    2012-01-01

    Deep Convective Systems (DCSs) consist of intense convective cores (CC), large stratiform rain (SR) regions, and extensive non-precipitating anvil clouds (AC). This study focuses on the evolution of these three components and the factors that affect convective AC production. An automated satellite tracking method is used in conjunction with a recently developed multi-sensor hybrid classification to analyze the evolution of DCS structure in a Lagrangian framework over the central United States. Composite analysis from 4221 tracked DCSs during two warm seasons (May-August, 2010-2011) shows that maximum system size correlates with lifetime, and longer-lived DCSs have more extensive SR and AC. Maximum SR and AC area lag behind peak convective intensity and the lag increases linearly from approximately 1-hour for short-lived systems to more than 3-hours for long-lived ones. The increased lag, which depends on the convective environment, suggests that changes in the overall diabatic heating structure associated with the transition from CC to SR and AC could prolong the system lifetime by sustaining stratiform cloud development. Longer-lasting systems are associated with up to 60% higher mid-tropospheric relative humidity and up to 40% stronger middle to upper tropospheric wind shear. Regression analysis shows that the areal coverage of thick AC is strongly correlated with the size of CC, updraft strength, and SR area. Ambient upper tropospheric wind speed and wind shear also play an important role for convective AC production where for systems with large AC (radius greater than 120-km) they are 24% and 20% higher, respectively, than those with small AC (radius=20 km).

  17. Cloud Computing: A model Construct of Real-Time Monitoring for Big Dataset Analytics Using Apache Spark

    NASA Astrophysics Data System (ADS)

    Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer

    2018-01-01

    The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity on the cloud, and large-scale data processing, analysis, and storage platforms such as cloud computing are increasingly widely used. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the procedure of fine-grained monitoring and generates the highest efficacy and cost savings in fault repair through three control steps: (I) data collection; (II) analysis engine and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.

  18. Cloud Liquid Water Path Comparisons from Passive Microwave and Solar Reflectance Satellite Measurements: Assessment of Sub-Field-of-View Cloud Effects in Microwave Retrievals

    NASA Technical Reports Server (NTRS)

    Greenwald, Thomas J.; Christopher, Sundar A.; Chou, Joyce

    1997-01-01

    Satellite observations of the cloud liquid water path (LWP) are compared from Special Sensor Microwave/Imager (SSM/I) measurements and GOES 8 imager solar reflectance (SR) measurements to ascertain the impact of sub-field-of-view (FOV) cloud effects on SSM/I 37 GHz retrievals. The SR retrievals also incorporate estimates of the cloud droplet effective radius derived from the GOES 8 3.9-micron channel. The comparisons consist of simultaneous collocated and full-resolution measurements and are limited to nonprecipitating marine stratocumulus in the eastern Pacific for two days in October 1995. The retrievals from these independent methods are consistent for overcast SSM/I FOVs, with RMS differences as low as 0.030 kg/sq m, although biases exist for clouds with more open spatial structure, where the RMS differences increase to 0.039 kg/sq m. For broken cloudiness within the SSM/I FOV the average beam-filling error (BFE) in the microwave retrievals is found to be about 22% (average cloud amount of 73%). This systematic error is comparable with the average random errors in the microwave retrievals. However, even larger BFEs can be expected for individual FOVs and for regions with less cloudiness. By scaling the microwave retrievals by the cloud amount within the FOV, the systematic BFE can be significantly reduced, but with increased RMS differences of 0.046-0.058 kg/sq m when compared to the SR retrievals. The beam-filling effects reported here are significant and are expected to impact directly upon studies that use instantaneous SSM/I measurements of cloud LWP, such as cloud classification studies and validation studies involving surface-based or in situ data.
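
    The scaling discussed above can be shown with a tiny worked example. It is deliberately linear (the true microwave beam-filling error also has a non-linear component), and the numbers are illustrative, not taken from the paper: when only part of the field of view is cloudy, the FOV-average LWP underestimates the in-cloud LWP, and dividing by an independently estimated cloud fraction removes the bulk of that bias.

        in_cloud_lwp = 0.120      # kg/m^2, liquid water path inside the cloudy part of the FOV
        cloud_fraction = 0.73     # fraction of the SSM/I FOV covered by cloud

        fov_average_lwp = cloud_fraction * in_cloud_lwp        # roughly what the retrieval sees
        scaled_lwp = fov_average_lwp / cloud_fraction          # beam-filling "correction"

        bias = (fov_average_lwp - in_cloud_lwp) / in_cloud_lwp
        print(f"FOV-average LWP: {fov_average_lwp:.3f} kg/m^2 ({bias:+.0%} vs in-cloud value)")
        print(f"scaled LWP:      {scaled_lwp:.3f} kg/m^2")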

  19. The Weather Forecast Using Data Mining Research Based on Cloud Computing.

    NASA Astrophysics Data System (ADS)

    Wang, ZhanJie; Mazharul Mujib, A. B. M.

    2017-10-01

    Weather forecasting has been an important application in meteorology and one of the most scientifically and technologically challenging problems around the world. In this study, we analyze the use of data mining techniques in forecasting weather. This paper proposes a modern method to develop a service-oriented architecture for weather information systems which forecast weather using these data mining techniques. This can be carried out using Artificial Neural Network and Decision Tree algorithms and meteorological data collected over a specific period of time. The algorithms presented the best results in generating classification rules for the mean weather variables. The results showed that these data mining techniques can be sufficient for weather forecasting.

  20. A Framework and Improvements of the Korea Cloud Services Certification System.

    PubMed

    Jeon, Hangoo; Seo, Kwang-Kyu

    2015-01-01

    Cloud computing service is an evolving paradigm that affects a large part of the ICT industry and provides new opportunities for ICT service providers, such as the deployment of new business models and the realization of economies of scale through more efficient resource utilization. However, despite the benefits of cloud services, there are obstacles to their adoption, such as the lack of means for assessing and comparing the service quality of cloud services with regard to availability, security, and reliability. In order to adopt cloud services successfully and promote their use, it is necessary to establish a cloud service certification system that ensures the service quality and performance of cloud services. This paper proposes a framework and improvements for the Korea certification system for cloud services. To develop it, the critical issues related to service quality, performance, and certification of cloud services are identified, and a systematic framework for the certification system covering cloud services and service provider domains is developed. Improvements to the developed Korea certification system for cloud services are also proposed.

  1. A Framework and Improvements of the Korea Cloud Services Certification System

    PubMed Central

    Jeon, Hangoo

    2015-01-01

    Cloud computing service is an evolving paradigm that affects a large part of the ICT industry and provides new opportunities for ICT service providers, such as the deployment of new business models and the realization of economies of scale through more efficient resource utilization. However, despite the benefits of cloud services, there are obstacles to their adoption, such as the lack of means for assessing and comparing the service quality of cloud services with regard to availability, security, and reliability. In order to adopt cloud services successfully and promote their use, it is necessary to establish a cloud service certification system that ensures the service quality and performance of cloud services. This paper proposes a framework and improvements for the Korea certification system for cloud services. To develop it, the critical issues related to service quality, performance, and certification of cloud services are identified, and a systematic framework for the certification system covering cloud services and service provider domains is developed. Improvements to the developed Korea certification system for cloud services are also proposed. PMID:26125049

  2. Cloud cover classification through simultaneous ground-based measurements of solar and infrared radiation

    NASA Astrophysics Data System (ADS)

    Orsini, Antonio; Tomasi, Claudio; Calzolari, Francescopiero; Nardino, Marianna; Cacciari, Alessandra; Georgiadis, Teodoro

    2002-04-01

    Simultaneous measurements of downwelling short-wave solar irradiance and incoming total radiation flux were performed at the Reeves Nevè glacier station (1200 m MSL) in Antarctica on 41 days from late November 1994 to early January 1995, employing the upward sensors of an albedometer and a pyrradiometer. The downwelling short-wave radiation measurements were analysed following the Duchon and O'Malley [J. Appl. Meteorol. 38 (1999) 132] procedure for classifying clouds, using the 50-min running mean values of standard deviation and the ratio of scaled observed to scaled clear-sky irradiance. Comparing these measurements with the Duchon and O'Malley rectangular boundaries and the local human observations of clouds collected on 17 days of the campaign, we found that the Duchon and O'Malley classification method obtained a success rate of 93% for cirrus and only 25% for cumulus. New decision criteria were established for some polar cloud classes providing success rates of 94% for cirrus, 67% for cirrostratus and altostratus, and 33% for cumulus and altocumulus. The ratios of the downwelling short-wave irradiance measured for cloudy-sky conditions to that calculated for clear-sky conditions were analysed in terms of the Kasten and Czeplak [Sol. Energy 24 (1980) 177] formula together with simultaneous human observations of cloudiness, to determine the empirical relationship curves providing reliable estimates of cloudiness for each of the three above-mentioned cloud classes. Using these cloudiness estimates, the downwelling long-wave radiation measurements (obtained as differences between the downward fluxes of total and short-wave radiation) were examined to evaluate the downwelling long-wave radiation flux normalised to totally overcast sky conditions. Calculations of the long-wave radiation flux were performed with the MODTRAN 3.7 code [Kneizys, F.X., Abreu, L.W., Anderson, G.P., Chetwynd, J.H., Shettle, E.P., Berk, A., Bernstein, L.S., Robertson, D.C., Acharya, P., Rothman, L.S., Selby, J.E.A., Gallery, W.O., Clough, S.A., 1996. In: Abreu, L.W., Anderson, G.P. (Eds.), The MODTRAN 2/3 Report and LOWTRAN 7 MODEL. Contract F19628-91-C.0132, Phillips Laboratory, Geophysics Directorate, PL/GPOS, Hanscom AFB, MA, 261 pp.] for both clear-sky and cloudy-sky conditions, considering various cloud types characterised by different cloud base altitudes and vertical thicknesses. From these evaluations, best-fit curves of the downwelling long-wave radiation flux were defined as a function of the cloud base height for the three polar cloud classes. Using these relationship curves, average estimates of the cloud base height were obtained from the three corresponding sub-sets of long-wave radiation measurements. The relative frequency histograms of the cloud base height defined by examining these three sub-sets were found to present median values of 4.7, 1.7 and 3.6 km for cirrus, cirrostratus/altostratus and cumulus/altocumulus, respectively, while median values of 6.5, 1.8 and 2.9 km were correspondingly determined by analysing only the measurements taken together with simultaneous cloud observations.

  3. Research on cloud-based remote measurement and analysis system

    NASA Astrophysics Data System (ADS)

    Gao, Zhiqiang; He, Lingsong; Su, Wei; Wang, Can; Zhang, Changfan

    2015-02-01

    The promising potential of cloud computing and its convergence with technologies such as cloud storage, cloud push, and mobile computing allows for the creation and delivery of new types of cloud service. Building on the idea of cloud computing, this paper presents a cloud-based remote measurement and analysis system. The system mainly consists of three parts: a signal acquisition client, a web server deployed on the cloud service, and a remote client. The system is a dedicated website developed using ASP.NET and Flex RIA technology, which resolves the trade-off between the two monitoring modes, B/S (browser/server) and C/S (client/server). The platform, deployed on the cloud server, supplies condition monitoring and data analysis services to customers over the Internet. The signal acquisition device is responsible for collecting data (sensor data, audio, video, etc.) and regularly pushes the monitoring data to the cloud storage database. Data acquisition equipment in this system only needs data collection and networking capabilities, as provided, for example, by a smartphone or a smart sensor. The system's scale can adjust dynamically according to the number of applications and users, so resources are not wasted. As a representative case study, we developed a prototype system based on the Ali cloud service using a rotor test rig as the research object. Experimental results demonstrate that the proposed system architecture is feasible.
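
    A minimal sketch of the acquisition-side push described above is given below, assuming a hypothetical HTTP ingestion endpoint on the cloud server; the URL, payload fields, and push interval are placeholders and not details from the paper.

    ```python
    # Minimal acquisition-client sketch: sample a sensor and push readings to a
    # hypothetical cloud ingestion endpoint at a fixed interval. The URL and the
    # payload schema are placeholders, not from the paper.
    import time
    import json
    import random
    import urllib.request

    INGEST_URL = "https://example-cloud-server/api/measurements"  # hypothetical

    def read_sensor():
        # Stand-in for real signal acquisition (vibration, audio, video metadata, ...)
        return {"rig_id": "rotor-01", "vibration_rms": random.random(), "ts": time.time()}

    for _ in range(3):  # a real client would loop indefinitely as a daemon
        payload = json.dumps(read_sensor()).encode("utf-8")
        req = urllib.request.Request(INGEST_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass  # in a real client, buffer locally and retry later
        time.sleep(10)  # push period in seconds
    ```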

  4. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun

    2018-02-01

    Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
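
    The classification step (a random forest over radiometric and geometric point features) can be sketched as below; the feature names, labels, and data are synthetic placeholders, and the adaptive-radius neighbourhood feature computation itself is assumed to have been done beforehand.

    ```python
    # Sketch of foliar/woody point classification with a random forest, assuming
    # per-point features have already been computed from the LiDAR neighbourhoods.
    # Feature names, labels and data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score, accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 1000
    features = np.column_stack([
        rng.uniform(0, 1, n),      # radiometric: normalised return intensity
        rng.uniform(0, 1, n),      # geometric: local planarity
        rng.uniform(0, 1, n),      # geometric: local linearity
    ])
    # Toy labelling rule for the synthetic data: 1 = wood, 0 = foliage
    labels = ((features[:, 2] > 0.6) & (features[:, 0] > 0.4)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("overall accuracy:", accuracy_score(y_te, pred))
    print("kappa:", cohen_kappa_score(y_te, pred))
    ```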

  5. Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J

    2011-01-01

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.

  6. The Metadata Cloud: The Last Piece of a Distributed Data System Model

    NASA Astrophysics Data System (ADS)

    King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.

    2012-12-01

    Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud based systems. Initially metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data have multiplied, data volumes have increased, and services have specialized to improve efficiency, a cloud system model has emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data" where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.

  7. Analysis of 2015 Winter In-Flight Icing Case Studies with Ground-Based Remote Sensing Systems Compared to In-Situ SLW Sondes

    NASA Technical Reports Server (NTRS)

    Serke, David J.; King, Michael Christopher; Hansen, Reid; Reehorst, Andrew L.

    2016-01-01

    National Aeronautics and Space Administration (NASA) and the National Center for Atmospheric Research (NCAR) have developed an icing remote sensing technology that has demonstrated skill at detecting and classifying icing hazards in a vertical column above an instrumented ground station. This technology has recently been extended to provide volumetric coverage surrounding an airport. Building on the existing vertical pointing system, the new method for providing volumetric coverage utilizes a vertical pointing cloud radar, a multi-frequency microwave radiometer with azimuth and elevation pointing, and a NEXRAD radar. The new terminal area icing remote sensing system processes the data streams from these instruments to derive temperature, liquid water content, and cloud droplet size for each examined point in space. These data are then combined to ultimately provide icing hazard classification along defined approach paths into an airport. To date, statistical comparisons of the vertical profiling technology have been made to Pilot Reports and Icing Forecast Products. With the extension into relatively large area coverage and the output of microphysical properties in addition to icing severity, the use of these comparators is not appropriate and a more rigorous assessment is required. NASA conducted a field campaign during the early months of 2015 to develop a database to enable the assessment of the new terminal area icing remote sensing system and further refinement of terminal area icing weather information technologies in general. In addition to the ground-based remote sensors listed earlier, in-situ icing environment measurements by weather balloons were performed to produce a comprehensive comparison database. Balloon data gathered consisted of temperature, humidity, pressure, super-cooled liquid water content, and 3-D position with time. Comparison data plots of weather balloon and remote measurements, weather balloon flight paths, bulk comparisons of integrated liquid water content and icing cloud extent agreement, and terminal-area hazard displays are presented. Discussions of agreement quality and paths for future development are also included.

  8. First X-ray Statistical Tests for Clumpy-Torus Models: Constraints from RXTEmonitoring of Seyfert AGN

    NASA Astrophysics Data System (ADS)

    Markowitz, Alex; Krumpe, Mirko; Nikutta, R.

    2016-06-01

    In two papers (Markowitz, Krumpe, & Nikutta 2014, and Nikutta et al., in prep.), we derive the first X-ray statistical constraints for clumpy-torus models in Seyfert AGN by quantifying multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We systematically search for discrete absorption events in the vast archive of RXTE monitoring of 55 nearby type Is and Compton-thin type IIs. We are sensitive to discrete absorption events due to clouds of full-covering, neutral/mildly ionized gas transiting the line of sight. Our results apply to both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for eclipses observed with XMM-Newton, Suzaku, and Chandra. We detect twelve eclipse events in eight Seyferts, roughly tripling the number previously published from this archive. Event durations span hours to years. Most of our detected clouds are Compton-thin, and most clouds' distances from the black hole are inferred to be commensurate with the outer portions of the BLR or the inner regions of infrared-emitting dusty tori. We present the density profiles of the highest-quality eclipse events; the column density profile for an eclipsing cloud in NGC 3783 is doubly spiked, possibly indicating a cloud that is being tidally sheared. We discuss implications for cloud distributions in the context of clumpy-torus models. We calculate eclipse probabilities for orientation-dependent Type I/II unification schemes. We present constraints on cloud sizes, stability, and radial distribution. We infer that clouds' small angular sizes as seen from the SMBH imply ~10^7 clouds required across the BLR + torus. Cloud size is roughly proportional to distance from the black hole, hinting at the formation processes (e.g., disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces, such as magnetic fields or ambient pressure, are needed to contain them; otherwise, clouds must be short-lived.

  9. T-Check in System-of-Systems Technologies: Cloud Computing

    DTIC Science & Technology

    2010-09-01

    T-Check in System-of-Systems Technologies: Cloud Computing. Harrison D. Strowd and Grace A. Lewis, September 2010, Technical Note CMU/SEI-2010... [The record text consists of table-of-contents fragments covering types of cloud computing, drivers and barriers to cloud computing adoption, the T-Check method, deployment views of solutions for testing hypotheses, selecting cloud computing providers, and implementing the T-Check.]

  10. A clinical decision-making mechanism for context-aware and patient-specific remote monitoring systems using the correlations of multiple vital signs.

    PubMed

    Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim

    2017-02-01

    In home-based context-aware monitoring, patients' real-time data on multiple vital signs (e.g. heart rate, blood pressure) are continuously generated by wearable sensors. The changes in such vital parameters are highly correlated. They are also patient-specific and can either recur or fluctuate. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital sign values and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a new approach to patient-specific vital sign prediction that exploits these correlations. The developed technique can guide healthcare professionals to make accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms along with three fundamental machine learning algorithms are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures such as Hamming score, F1-micro average, and accuracy are used for interpreting the prediction performance of patient-specific situation classifications. We achieved 90-95% Hamming score values across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with single-label classifiers and with classifiers that do not consider the correlations among the vitals. The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are effective ways for early estimation of future values of those vitals. In context-aware remote monitoring, this process can greatly help doctors make quick diagnostic decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
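
    A compact sketch of the multi-label formulation follows, assuming per-window statistical features of several vital signs as inputs and a binary abnormality flag per vital sign as outputs; the data, feature layout, and label definitions are invented for illustration and are not from the study.

    ```python
    # Sketch of multi-label abnormality prediction from vital-sign features using
    # a per-label wrapper; data and label definitions are synthetic placeholders.
    import numpy as np
    from sklearn.multioutput import MultiOutputClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import hamming_loss

    rng = np.random.default_rng(1)
    n = 500
    # Hypothetical features: mean/std of heart rate, systolic BP, SpO2 per window
    X = rng.normal(size=(n, 6))
    # Hypothetical labels: abnormality flags for three vitals (multi-label targets)
    Y = (X[:, [0, 2, 4]] + 0.3 * rng.normal(size=(n, 3)) > 0.5).astype(int)

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_tr, Y_tr)
    Y_hat = clf.predict(X_te)

    # A simple Hamming-style score: 1 - Hamming loss (one common convention)
    print("Hamming score:", 1.0 - hamming_loss(Y_te, Y_hat))
    ```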

  11. Forest/non-forest stratification in Georgia with Landsat Thematic Mapper data

    Treesearch

    William H. Cooke

    2000-01-01

    Geographically accurate Forest Inventory and Analysis (FIA) data may be useful for training, classification, and accuracy assessment of Landsat Thematic Mapper (TM) data. Minimum expectation for maps derived from Landsat data is accurate discrimination of several land cover classes. Landsat TM costs have decreased dramatically, but acquiring cloud-free scenes at...

  12. Assessment of 3D cloud radiative transfer effects applied to collocated A-Train data

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Suzuki, K.; Toshiro, I.; Nakajima, T. Y.; Okamoto, H.

    2017-12-01

    This study investigates broadband radiative fluxes in 3D cloud-laden atmospheres using a 3D radiative transfer (RT) model, MCstar, and collocated A-Train cloud data. The 3D extinction coefficients are constructed by a newly devised Minimum cloud Information Deviation Profiling Method (MIDPM) that extrapolates CPR radar profiles at nadir into off-nadir regions within the MODIS swath based on collocated information from MODIS-derived cloud properties and radar reflectivity profiles. The method is applied to low-level maritime water clouds, for which the 3D-RT simulations are performed. The radiative fluxes thus simulated are compared to those obtained from CERES as a way to validate the MIDPM-constructed clouds and our 3D-RT simulations. The results show that the simulated SW flux agrees with CERES values within 8-50 Wm-2. One of the large biases arises from the cyclic boundary condition required for our computational domain, which is limited to 20 km by 20 km at 1 km resolution. Another source of bias arises from the 1D assumption in the cloud property retrievals, particularly for thin clouds, which tend to be affected by spatial heterogeneity and thus have their cloud optical thickness overestimated. These 3D-RT simulations also serve to address another objective of this study, i.e. to characterize the "observed" 3D-RT effects by cloud morphology. We extend the computational domain to 100 km by 100 km for this purpose. The 3D-RT effects are characterized by the errors of existing 1D approximations to the 3D radiation field. The errors are investigated in terms of their dependence on solar zenith angle (SZA) for the satellite-constructed real cloud cases, and we define two indices from the error tendencies. According to the indices, the 3D-RT effects are classified into three types, which correspond to three simple morphology types, i.e. isolated cloud type, upper cloud-roughened type and lower cloud-roughened type. These 3D-RT effects linked to cloud morphologies are also visualized in the form of RGB composite maps constructed from three MODIS/Aqua channels, which show cloud optical thickness and cloud height information. Such a classification offers novel insight into 3D-RT effects in a manner that directly relates to cloud morphology.

  13. Petri net modeling of encrypted information flow in federated cloud

    NASA Astrophysics Data System (ADS)

    Khushk, Abdul Rauf; Li, Xiaozhong

    2017-08-01

    Solutions proposed and developed for cost-effective cloud systems typically combine secure private clouds with less secure public clouds. The need to locate applications within different clouds poses a security risk to the information flow of the entire system. This study addresses this by assigning security levels from a given lattice to the entities of a federated cloud system. A dynamic, flow-sensitive security model featuring Bell-LaPadula procedures is explored that tracks and authenticates secure information flow in federated clouds. Additionally, a Petri net model is considered as a case study to represent the proposed system and to further validate its performance.
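
    To make the lattice-based flow control concrete, here is a minimal sketch of the classical Bell-LaPadula read/write checks over a totally ordered set of levels; the level names and entities are illustrative examples, not taken from the paper.

    ```python
    # Minimal Bell-LaPadula sketch: "no read up, no write down" over a simple
    # ordered lattice of security levels. Level names and entities are examples.
    LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

    def can_read(subject_level: str, object_level: str) -> bool:
        # Simple security property: a subject may read objects at or below its level.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def can_write(subject_level: str, object_level: str) -> bool:
        # *-property: a subject may write only to objects at or above its level,
        # so information never flows down the lattice.
        return LEVELS[subject_level] <= LEVELS[object_level]

    # Example: an application hosted in a less trusted public cloud
    app_level = "internal"
    print(can_read(app_level, "confidential"))   # False: no read up
    print(can_write(app_level, "public"))        # False: no write down
    print(can_write(app_level, "secret"))        # True: writing up is allowed
    ```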

  14. The Oort cloud

    NASA Technical Reports Server (NTRS)

    Marochnik, Leonid S.; Mukhin, Lev M.; Sagdeev, Roald Z.

    1991-01-01

    Views of the large-scale structure of the solar system, consisting of the Sun, the nine planets and their satellites, changed when Oort demonstrated that a gigantic cloud of comets (the Oort cloud) is located on the periphery of the solar system. The following subject areas are covered: (1) the Oort cloud's mass; (2) Hill's cloud mass; (3) angular momentum distribution in the solar system; and (4) the cometary cloud around other stars.

  15. Military clouds: utilization of cloud computing systems at the battlefield

    NASA Astrophysics Data System (ADS)

    Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai

    2012-05-01

    Cloud computing is known as a novel information technology (IT) concept, which involves facilitated and rapid access to networks, servers, data storage media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies at the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, which is known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future extensive use of military clouds at the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It was concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness at the battlefield and facilitating the establishment of information superiority.

  16. Annual Forest Monitoring as part of Indonesia's National Carbon Accounting System

    NASA Astrophysics Data System (ADS)

    Kustiyo, K.; Roswintiarti, O.; Tjahjaningsih, A.; Dewanti, R.; Furby, S.; Wallace, J.

    2015-04-01

    Land use and forest change, in particular deforestation, have contributed the largest proportion of Indonesia's estimated greenhouse gas emissions. Indonesia's remaining forests store globally significant carbon stocks, as well as biodiversity values. In 2010, the Government of Indonesia entered into a REDD+ partnership. A spatially detailed monitoring and reporting system for forest change which is national and operating in Indonesia is required for participation in such programs, as well as for national policy reasons including Monitoring, Reporting, and Verification (MRV), carbon accounting, and land-use and policy information. Indonesia's National Carbon Accounting System (INCAS) has been designed to meet national and international policy requirements. The INCAS remote sensing program is producing spatially-detailed annual wall-to-wall monitoring of forest cover changes from time-series Landsat imagery for the whole of Indonesia from 2000 to the present day. Work on the program commenced in 2009, under the Indonesia-Australia Forest Carbon Partnership. A principal objective was to build an operational system in Indonesia through transfer of knowledge and experience, from Australia's National Carbon Accounting System, and adaptation of this experience to Indonesia's requirements and conditions. A semi-automated system of image pre-processing (ortho-rectification, calibration, cloud masking and mosaicing) and forest extent and change mapping (supervised classification of a 'base' year, semi-automated single-year classifications and classification within a multi-temporal probabilistic framework) was developed for Landsat 5 TM and Landsat 7 ETM+. Particular attention is paid to the accuracy of each step in the processing. With the advent of Landsat 8 data and parallel development of processing capability, capacity and international collaborations within the LAPAN Data Centre this processing is being increasingly automated. Research is continuing into improved processing methodology and integration of information from other data sources. This paper presents technical elements of the INCAS remote sensing program and some results of the 2000 - 2012 mapping.

  17. Cloud GIS Based Watershed Management

    NASA Astrophysics Data System (ADS)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we generated a Cloud GIS based watershed management system using a Cloud Computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We applied GIS analysis on the cloud to test SaaS and deployed GIS datasets on the cloud to test DaaS. We used a hybrid cloud computing model, making use of ready-made web-based mapping services hosted on the cloud (world topology, satellite imagery). We uploaded datasets to the system after creating geodatabases including hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology and land use. The watershed of the study area was determined on the cloud using the ready-hosted topology maps. After uploading all the datasets to the system, we applied various GIS analyses and queries. Results show that Cloud GIS technology brings speed and efficiency to watershed management studies. Moreover, the system can easily be implemented for similar land analysis and management studies.

  18. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
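
    The map-reduce coaddition idea can be sketched without Hadoop itself: a map step assigns resampled pixel values to output-grid cells and a reduce step averages the contributions per cell. The grid, keys, and data below are toy placeholders, not the SDSS/Hadoop pipeline described in the paper.

    ```python
    # Toy map-reduce sketch of image coaddition: map each input pixel to an output
    # grid cell, then reduce by averaging all contributions per cell. The images,
    # grid and keys are synthetic stand-ins for the Hadoop/SDSS pipeline.
    from collections import defaultdict
    import numpy as np

    rng = np.random.default_rng(0)
    # Three partially overlapping 1D "images": (start_index_on_sky, pixel_values)
    images = [(0, rng.normal(1.0, 0.1, 8)),
              (4, rng.normal(1.0, 0.1, 8)),
              (8, rng.normal(1.0, 0.1, 8))]

    def map_phase(start, pixels):
        # Emit (output_cell, value) pairs; registration reduces here to an offset.
        for i, v in enumerate(pixels):
            yield start + i, float(v)

    def reduce_phase(pairs):
        # Group by output cell and average the overlapping contributions.
        sums, counts = defaultdict(float), defaultdict(int)
        for cell, value in pairs:
            sums[cell] += value
            counts[cell] += 1
        return {cell: sums[cell] / counts[cell] for cell in sums}

    emitted = (pair for start, px in images for pair in map_phase(start, px))
    coadd = reduce_phase(emitted)
    print(sorted(coadd.items()))
    ```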

  19. Exploring the diversity of Jupiter-class planets

    PubMed Central

    Fletcher, Leigh N.; Irwin, Patrick G. J.; Barstow, Joanna K.; de Kok, Remco J.; Lee, Jae-Min; Aigrain, Suzanne

    2014-01-01

    Of the 900+ confirmed exoplanets discovered since 1995 for which we have constraints on their mass (i.e. not including Kepler candidates), 75% have masses larger than Saturn (0.3 MJ), 53% are more massive than Jupiter and 67% are within 1 AU of their host stars. When Kepler candidates are included, Neptune-sized giant planets could form the majority of the planetary population. And yet the term ‘hot Jupiter’ fails to account for the incredible diversity of this class of astrophysical object, which exists on a continuum of giant planets from the cool jovians of our own Solar System to the highly irradiated, tidally locked hot roasters. We review theoretical expectations for the temperatures, molecular composition and cloud properties of hydrogen-dominated Jupiter-class objects under a variety of different conditions. We discuss the classification schemes for these Jupiter-class planets proposed to date, including the implications for our own Solar System giant planets and the pitfalls associated with compositional classification at this early stage of exoplanetary spectroscopy. We discuss the range of planetary types described by previous authors, accounting for (i) thermochemical equilibrium expectations for cloud condensation and favoured chemical stability fields; (ii) the metallicity and formation mechanism for these giant planets; (iii) the importance of optical absorbers for energy partitioning and the generation of a temperature inversion; (iv) the favoured photochemical pathways and expectations for minor species (e.g. saturated hydrocarbons and nitriles); (v) the unexpected presence of molecules owing to vertical mixing of species above their quench levels; and (vi) methods for energy and material redistribution throughout the atmosphere (e.g. away from the highly irradiated daysides of close-in giants). Finally, we discuss the benefits and potential flaws of retrieval techniques for establishing a family of atmospheric solutions that reproduce the available data, and the requirements for future spectroscopic characterization of a set of Jupiter-class objects to test our physical and chemical understanding of these planets. PMID:24664910

  20. Exploring the diversity of Jupiter-class planets.

    PubMed

    Fletcher, Leigh N; Irwin, Patrick G J; Barstow, Joanna K; de Kok, Remco J; Lee, Jae-Min; Aigrain, Suzanne

    2014-04-28

    Of the 900+ confirmed exoplanets discovered since 1995 for which we have constraints on their mass (i.e. not including Kepler candidates), 75% have masses larger than Saturn (0.3 MJ), 53% are more massive than Jupiter and 67% are within 1 AU of their host stars. When Kepler candidates are included, Neptune-sized giant planets could form the majority of the planetary population. And yet the term 'hot Jupiter' fails to account for the incredible diversity of this class of astrophysical object, which exists on a continuum of giant planets from the cool jovians of our own Solar System to the highly irradiated, tidally locked hot roasters. We review theoretical expectations for the temperatures, molecular composition and cloud properties of hydrogen-dominated Jupiter-class objects under a variety of different conditions. We discuss the classification schemes for these Jupiter-class planets proposed to date, including the implications for our own Solar System giant planets and the pitfalls associated with compositional classification at this early stage of exoplanetary spectroscopy. We discuss the range of planetary types described by previous authors, accounting for (i) thermochemical equilibrium expectations for cloud condensation and favoured chemical stability fields; (ii) the metallicity and formation mechanism for these giant planets; (iii) the importance of optical absorbers for energy partitioning and the generation of a temperature inversion; (iv) the favoured photochemical pathways and expectations for minor species (e.g. saturated hydrocarbons and nitriles); (v) the unexpected presence of molecules owing to vertical mixing of species above their quench levels; and (vi) methods for energy and material redistribution throughout the atmosphere (e.g. away from the highly irradiated daysides of close-in giants). Finally, we discuss the benefits and potential flaws of retrieval techniques for establishing a family of atmospheric solutions that reproduce the available data, and the requirements for future spectroscopic characterization of a set of Jupiter-class objects to test our physical and chemical understanding of these planets.

  1. Exploring point-cloud features from partial body views for gender classification

    NASA Astrophysics Data System (ADS)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further investigation of more complex partial body viewing problems and new methods for estimating the two position coordinates for the axis location and the unknown body orientation angle.
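
    The concentric-cylinder histogram idea can be sketched directly: given a point cloud and an assumed vertical axis position, count points falling into each cylindrical shell. The shell radii and the synthetic cloud below are illustrative assumptions, not the CAESAR processing chain.

    ```python
    # Sketch of concentric-cylinder histogram features from a 3D point cloud,
    # assuming the vertical axis position is known. Radii and data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    points = rng.normal(scale=[0.25, 0.25, 0.5], size=(5000, 3))  # toy "body" cloud

    def cylinder_histogram(pts, axis_xy=(0.0, 0.0), radii=(0.1, 0.2, 0.3, 0.4)):
        # Radial distance of each point from the (assumed) vertical axis.
        d = np.hypot(pts[:, 0] - axis_xy[0], pts[:, 1] - axis_xy[1])
        # Count points per concentric shell; the outermost bin catches the rest.
        edges = np.concatenate([[0.0], radii, [np.inf]])
        counts, _ = np.histogram(d, bins=edges)
        return counts / len(pts)  # normalised histogram feature vector

    print(cylinder_histogram(points))
    ```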

  2. Automatic Cloud Detection from Multi-Temporal Satellite Images: Towards the Use of PLÉIADES Time Series

    NASA Astrophysics Data System (ADS)

    Champion, N.

    2012-08-01

    Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French Mapping Agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images, acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
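
    The seed-extraction step (flagging pixels whose reflectance rises sharply between two acquisitions) can be written in a few lines; the threshold value and the synthetic images below are placeholders, and the subsequent region-growing refinement is omitted.

    ```python
    # Sketch of cloud-seed extraction from a pair of co-registered images:
    # pixels much brighter in the new image than in the reference are flagged.
    # The threshold and the synthetic reflectance images are placeholders.
    import numpy as np

    rng = np.random.default_rng(7)
    reference = rng.uniform(0.05, 0.25, size=(100, 100))   # earlier, mostly cloud-free
    current = reference.copy()
    current[30:50, 40:70] += 0.5                            # synthetic bright cloud

    def cloud_seeds(ref, cur, threshold=0.3):
        # A high positive reflectance change between acquisitions suggests a cloud.
        return (cur - ref) > threshold

    seeds = cloud_seeds(reference, current)
    print("seed pixels:", int(seeds.sum()))
    # A region-growing step would then expand these seeds to the full cloud extent.
    ```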

  3. Evaluating the spatio-temporal performance of sky imager based solar irradiance analysis and forecasts

    NASA Astrophysics Data System (ADS)

    Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.

    2015-10-01

    Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. However, the increasing share of solar energy in the world-wide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky imager based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Convective-type clouds in particular lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if the latter is used as representative for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions than for overcast or clear sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
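
    The comparison against persistence can be summarized with a standard skill score; a small sketch under assumed RMSE definitions follows, using synthetic GHI data, with the variability-threshold idea only indicated in a comment.

    ```python
    # Sketch of forecast skill relative to persistence: skill = 1 - RMSE_f / RMSE_p.
    # GHI series and forecasts below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(11)
    ghi_obs = 600 + 150 * rng.standard_normal(500)          # observed GHI (W/m^2)
    ghi_fcst = ghi_obs + 40 * rng.standard_normal(500)      # sky-imager forecast
    ghi_pers = np.roll(ghi_obs, 1)                          # persistence: last value

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    skill = 1.0 - rmse(ghi_fcst[1:], ghi_obs[1:]) / rmse(ghi_pers[1:], ghi_obs[1:])
    print("forecast skill vs persistence:", round(skill, 3))
    # A variability threshold could then flag periods where positive skill is expected.
    ```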

  4. Generalized interpretation scheme for arbitrary HR InSAR image pairs

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten

    2013-10-01

    Land cover classification of remote sensing imagery is an important topic of research. For example, different applications require precise and fast information about the land cover of the imaged scenery (e.g., disaster management and change detection). Focusing on high resolution (HR) spaceborne remote sensing imagery, the user has the choice between passive and active sensor systems. Passive systems, such as multispectral sensors, have the disadvantage of being dependent on weather influences (fog, dust, clouds, etc.) and time of day, since they work in the visible part of the electromagnetic spectrum. Here, active systems like Synthetic Aperture Radar (SAR) provide improved capabilities. As an interactive method for analyzing HR InSAR image pairs, the CovAmCoh™ method was introduced in former studies. CovAmCoh represents the joint analysis of locality (coefficient of variation - Cov), backscatter (amplitude - Am) and temporal stability (coherence - Coh). It delivers information on the physical backscatter characteristics of imaged scene objects or structures and provides the opportunity to detect different classes of land cover (e.g., urban, rural, infrastructure and activity areas). As an example, railway tracks are easily distinguishable from other infrastructure due to their characteristic bluish coloring caused by the gravel between the sleepers. In consequence, imaged objects or structures have a characteristic appearance in CovAmCoh images which allows the development of classification rules. In this paper, a generalized interpretation scheme for arbitrary InSAR image pairs using the CovAmCoh method is proposed. This scheme is based on analyzing the information content of typical CovAmCoh imagery using semi-supervised k-means clustering. It is shown that eight classes model the main local information content of CovAmCoh images sufficiently and can be used as the basis for a classification scheme.
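
    A rough sketch of composing a CovAmCoh-style composite from an interferometric pair is given below, assuming amplitude images and a coherence map are already available; the window size, normalisation and channel assignment are illustrative guesses rather than the published recipe.

    ```python
    # Sketch of a CovAmCoh-style composite: local coefficient of variation (Cov),
    # mean amplitude (Am) and interferometric coherence (Coh) stacked as channels.
    # Window size, normalisation and channel order are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(5)
    amp1 = rng.gamma(shape=2.0, scale=1.0, size=(256, 256))   # synthetic SAR amplitudes
    amp2 = rng.gamma(shape=2.0, scale=1.0, size=(256, 256))
    coherence = rng.uniform(0.0, 1.0, size=(256, 256))        # synthetic coherence map

    def local_cov(img, size=7):
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
        return std / (mean + 1e-9)   # coefficient of variation per window

    def stretch(x):
        lo, hi = np.percentile(x, (2, 98))
        return np.clip((x - lo) / (hi - lo + 1e-9), 0, 1)

    amplitude = 0.5 * (amp1 + amp2)
    composite = np.dstack([stretch(local_cov(amplitude)),   # R: locality
                           stretch(amplitude),              # G: backscatter
                           stretch(coherence)])             # B: temporal stability
    print(composite.shape)  # (256, 256, 3) RGB-like array for visual interpretation
    ```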

  5. Contribution of National near Real Time MODIS Forest Maximum Percentage NDVI Change Products to the U.S. ForWarn System

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.

    2012-01-01

    This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detection and tracking of regionally evident forest change, which includes the U.S. Forest Change Assessment Viewer (FCAV) (a publically available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change). NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products are computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed using three different historical baselines: 1) previous year; 2) previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum value NDVI that are composited across a 24 day interval and refreshed every 8 days so that resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands at 231.66 meters, which equates to approx 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24 day composites, and then custom MATLAB scripts are used to temporally process the eMODIS NDVIs so that they are in synch with the historical NDVI products. Prior to posting, an in-house snow mask classification product is computed for the current compositing period and integrated into the change images to account for snow related NDVI drops. The supplemental snow classification product was needed because other available QA cloud/snow mask typically underestimates snow cover. MODIS true and false color composites were also computed from eMODIS reflectance data and the true color RGBs are also posted on ForWarn's FCAV; this data is used for assessing apparent occasional quality issues on the change products due to residual unmasked cloud cover. New forest change products are posted with typical latencies of 1-2 days after the last input eMODIS data collection date for a given 24 day compositing period.
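
    The percent-change products can be illustrated with a short sketch comparing a current maximum-NDVI composite to a historical baseline maximum; array shapes and values are synthetic and the masking steps are only indicated by a comment.

    ```python
    # Sketch of a forest % maximum NDVI change product: compare the current 24-day
    # maximum-NDVI composite to the maximum of a historical baseline. Data are
    # synthetic; cloud/snow masking is only indicated by a comment.
    import numpy as np

    rng = np.random.default_rng(8)
    years, h, w = 12, 50, 50
    historical = rng.uniform(0.4, 0.9, size=(years, h, w))   # past max-NDVI composites
    current = rng.uniform(0.3, 0.9, size=(h, w))             # current composite

    baseline = historical.max(axis=0)            # "all prior years" baseline variant
    percent_change = 100.0 * (current - baseline) / baseline
    # Negative values indicate apparent declines in canopy greenness (potential
    # disturbance); in ForWarn-style processing, snow and cloud pixels would be
    # masked out before interpretation.
    print(float(percent_change.min()), float(percent_change.max()))
    ```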

  6. Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Woodcock, C. E.

    2012-12-01

    A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected and at the same time provide land cover maps for any given time. To better identify land cover change, a two step cloud, cloud shadow, and snow masking algorithm is used for eliminating "noisy" observations. Next, a time series model that has components of seasonality, trend, and break estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm for one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of Eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to change once (91% of total changed pixels) and 60,199 pixels were detected to change twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low density residential which occupies more than 8% of total land cover change pixels.
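
    A stripped-down sketch of the core CCDC idea for one pixel and one band follows: fit a harmonic-plus-trend model, then flag change when residuals exceed a threshold on three consecutive observations. The model form, threshold, and synthetic series are simplifications of the published algorithm, which uses all seven bands and data-driven thresholds.

    ```python
    # Simplified one-pixel, one-band sketch of the CCDC idea: harmonic + trend fit,
    # change flagged after three consecutive large residuals. Data and threshold
    # are synthetic simplifications of the full multi-band algorithm.
    import numpy as np

    rng = np.random.default_rng(13)
    t = np.linspace(0, 6, 200)                       # observation times in years
    signal = 0.3 + 0.1 * np.sin(2 * np.pi * t) + 0.01 * t
    obs = signal + 0.02 * rng.standard_normal(t.size)
    obs[150:] -= 0.15                                 # simulated abrupt land-cover change

    # Fit seasonality + trend on an initial stable period (least-squares design matrix)
    fit_idx = slice(0, 120)
    A = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(A[fit_idx], obs[fit_idx], rcond=None)
    pred = A @ coef
    rmse = np.sqrt(np.mean((obs[fit_idx] - pred[fit_idx]) ** 2))

    # Flag change when |residual| > 3*RMSE on three consecutive observations
    exceed = np.abs(obs - pred) > 3 * rmse
    change_idx = next((i for i in range(len(t) - 2) if exceed[i:i + 3].all()), None)
    print("change detected at t =", None if change_idx is None else round(t[change_idx], 2))
    ```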

  7. Determination of Classification Accuracy for Land Use/cover Types Using Landsat-Tm Spot-Mss and Multipolarized and Multi-Channel Synthetic Aperture Radar

    NASA Astrophysics Data System (ADS)

    Dondurur, Mehmet

    The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired of areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
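
    The two accuracy indices used in the study, percent correctly classified (PCC) and Cohen's Kappa (CK), can be computed from a confusion matrix as in the short sketch below; the example matrix is invented.

    ```python
    # Accuracy indices from a confusion matrix: percent correctly classified (PCC)
    # and Cohen's Kappa (CK). The 3-class confusion matrix below is invented.
    import numpy as np

    # rows = reference classes, columns = mapped classes
    cm = np.array([[50,  5,  2],
                   [ 4, 40,  6],
                   [ 1,  3, 39]], dtype=float)

    n = cm.sum()
    pcc = np.trace(cm) / n                                       # observed agreement
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    kappa = (pcc - expected) / (1 - expected)

    print(f"PCC = {100 * pcc:.1f}%, Cohen's Kappa = {kappa:.3f}")
    ```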

  8. Enhancement and evaluation of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.

    2017-12-01

    We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.
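
    For readers unfamiliar with the optimal estimation framework the abstract refers to, a standard (Rodgers-style) linearized retrieval update is sketched below; this is the textbook form, not necessarily the exact implementation used by the authors.

    ```latex
    % Standard linearized optimal-estimation update (textbook form): x_a is the
    % background (a priori) state, S_a its error covariance, y the measured
    % radiances, F the forward model, K its Jacobian and S_e the radiance error
    % covariance. Pre-classification amounts to choosing x_a and S_a per scene class.
    \[
      \hat{x} = x_a + \left(K^{T} S_e^{-1} K + S_a^{-1}\right)^{-1}
                K^{T} S_e^{-1} \left[\, y - F(x_a) \,\right]
    \]
    ```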

  9. In-season wheat sown area mapping for Afghanistan using high resolution optical and RADAR images in cloud platform

    NASA Astrophysics Data System (ADS)

    Matin, M. A.; Tiwari, V. K.; Qamer, F. M.; Yadav, N. K.; Ellenburg, W. L.; Bajracharya, B.; Vadrevu, K.; Rushi, B. R.; Stanikzai, N.; Yusafi, W.; Rahmani, H.

    2017-12-01

    Afghanistan has only 11% arable land, while wheat is the major crop, accounting for 80% of the total cereal planted area. Wheat production is therefore highly critical to the food security of the country, whose population of 35 million is 30% food insecure. The lack of timely data on crop sown area and production hinders decisions on regular grain import policies as well as long-term planning for self-sustainability. The objective of this study is to develop an operational in-season wheat area mapping system to support the Ministry of Agriculture, Irrigation and Livestock (MAIL) in annual food security planning. In this study, we used 10 m resolution Sentinel-2 optical images in combination with Sentinel-1 SAR data to classify wheat area. The available provincial crop calendar and field data collected by MAIL were used for classification and validation. Since the internet and computing infrastructure in Afghanistan is very limited, the cloud computing platform of Google Earth Engine (GEE) is used to accomplish this work. During the assessment it was observed that the small size of wheat plots and the mixing of wheat with other crops make it difficult to achieve the expected accuracy of wheat area, particularly in rain-fed areas. Cloud cover during the wheat growing season limits the availability of valid optical satellite data. In the first phase of assessment, important learning points were captured. In an extremely challenging security situation, field data collection requires the use of innovative approaches for stratification of sampling sites as well as a robust mobile app with adequate training of field staff. Currently, GEE assets only contain the Sentinel-2 Level 1C product, which limits the classification accuracy. In representative areas where the Level 2A product was developed and applied, a significant improvement in accuracy is observed. Development of a high-resolution agro-climatic zones map will enable extrapolating crop growth calendars, collected from representative areas, across the entire study area. While the present study shows great potential for operational wheat area monitoring, a systematic approach to sample data collection and a better understanding of the cropping calendar will improve the results significantly.

  10. In-season wheat sown area mapping for Afghanistan using high resolution optical and RADAR images in cloud platform

    NASA Astrophysics Data System (ADS)

    Matin, M. A.; Tiwari, V. K.; Qamer, F. M.; Yadav, N. K.; Ellenburg, W. L.; Bajracharya, B.; Vadrevu, K.; Rushi, B. R.; Stanikzai, N.; Yusafi, W.; Rahmani, H.

    2016-12-01

    Only 11% of Afghanistan's land is arable, while wheat is the major crop, accounting for 80% of the total cereal planted area. Wheat production is therefore highly critical to the food security of a country of 35 million people, of whom 30% are food insecure. The lack of timely data on crop sown area and production hinders decisions on regular grain import policies as well as long-term planning for self-sustainability. The objective of this study is to develop an operational in-season wheat area mapping system to support the Ministry of Agriculture, Irrigation and Livestock (MAIL) in annual food security planning. In this study, we used 10 m resolution Sentinel-2 optical images in combination with Sentinel-1 SAR data to classify wheat area. The available provincial crop calendar and field data collected by MAIL were used for classification and validation. Because the internet and computing infrastructure in Afghanistan is very limited, the cloud computing platform Google Earth Engine (GEE) was used to accomplish this work. During the assessment it was observed that the small size of wheat plots and the mixing of wheat with other crops make it difficult to achieve the expected accuracy for wheat area, particularly in rain-fed areas. Cloud cover during the wheat growing season limits the availability of valid optical satellite data. Important lessons were captured in the first phase of the assessment. In an extremely challenging security situation, field data collection requires innovative approaches to the stratification of sampling sites as well as a robust mobile app and adequate training of field staff. Currently, GEE assets contain only the Sentinel-2 Level-1C product, which limits the classification accuracy; in representative areas where the Level-2A product was developed and applied, a significant improvement in accuracy was observed. Development of a high resolution agro-climatic zones map will enable extrapolation of crop growth calendars, collected from representative areas, across the entire study area. While the present study shows great potential for operational wheat area monitoring, a systematic approach to sample data collection and a better understanding of the cropping calendar will improve the results significantly.

  11. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and with the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804

  12. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology.

    PubMed

    Zao, John K; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and with the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system.

  13. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. 
In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
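
    The "subdivide and layer-wrap" strategy described above can be illustrated with a deliberately simplified sketch: a raster is split into strips, each strip is transformed independently (as separate cloud nodes would do), and the results are reassembled. The one-pixel edge-wrapping correction from the dissertation is intentionally omitted, and the comparison against the global transform only shows where such a correction would be needed; function names and tile layout are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def naive_tiled_edt(raster, n_splits=4):
    """Split a raster into strips and transform each independently (map step).

    Only the 'subdivide' half of the strategy is shown; the one-pixel
    edge-wrapping layer, which corrects pixels whose nearest source lies in a
    neighbouring tile, is omitted.  In a cloud deployment each strip would be
    handled by a separate node and the results reassembled (reduce step).
    """
    strips = np.array_split(raster, n_splits, axis=0)
    parts = [distance_transform_edt(s) for s in strips]   # one "node" per strip
    return np.vstack(parts)

# Toy raster: 0 marks source cells, 1 marks cells whose distance we want
raster = np.ones((400, 400))
raster[50, 60] = 0
raster[300, 310] = 0

approx = naive_tiled_edt(raster)
exact = distance_transform_edt(raster)
# The two differ only near strip boundaries, which is exactly what the
# edge-wrapping layer in the dissertation is designed to fix.
boundary_error = np.abs(approx - exact).max()
```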

  14. Conifer health classification for Colorado, 2008

    USGS Publications Warehouse

    Cole, Christopher J.; Noble, Suzanne M.; Blauer, Steven L.; Friesen, Beverly A.; Curry, Stacy E.; Bauer, Mark A.

    2010-01-01

    Colorado has undergone substantial changes in its forests due to urbanization, wildfires, insect-caused tree mortality, and other human and environmental factors. The U.S. Geological Survey Rocky Mountain Geographic Science Center evaluated and developed a methodology for applying remotely sensed imagery to assess conifer health in Colorado. Two classes were identified for the purposes of this study: healthy and unhealthy (for example, an area the size of a 30- x 30-m pixel with 20 percent or greater visibly dead trees was defined as "unhealthy"). Medium-resolution Landsat 5 Thematic Mapper imagery was collected. The normalized, reflectance-converted, cloud-filled Landsat scenes were merged to form a statewide image mosaic, and a Normalized Difference Vegetation Index (NDVI) and Renormalized Difference Infrared Index (RDII) were derived. A supervised maximum likelihood classification was done using the Landsat multispectral bands, the NDVI, the RDII, and the 30-m U.S. Geological Survey National Elevation Dataset (NED). The classification was constrained to pixels identified in the updated landcover dataset as coniferous or mixed coniferous/deciduous vegetation. The statewide results were merged with a separate health assessment of Grand County, Colo., produced in late 2008. Sampling and validation were done by collecting field data and high-resolution imagery. The 86 percent overall classification accuracy attained in this study suggests that the data and methods used successfully characterized conifer conditions within Colorado. Although forest conditions for Lodgepole Pine (Pinus contorta) are easily characterized, classification uncertainty exists between healthy/unhealthy Ponderosa Pine (Pinus ponderosa), Piñon (Pinus edulis), and Juniper (Juniperus sp.) vegetation. Some underestimation of conifer mortality in Summit County is likely, where recent (2008) cloud-free imagery was unavailable. These classification uncertainties are primarily due to the spatial and temporal resolution of Landsat, and of the NLCD derived from this sensor. It is believed that high- to moderate-resolution multispectral imagery, coupled with field data, could significantly reduce the uncertainty rates. The USGS produced a four-county follow-up conifer health assessment using high-resolution RapidEye remotely sensed imagery and field data collected in 2009.
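
    For illustration, a supervised maximum likelihood classification of the kind described above reduces to fitting one multivariate Gaussian per class and assigning each pixel to the class with the highest likelihood. The feature columns and training values below are synthetic placeholders, not the USGS training data or production workflow.

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_ml_classifier(X, y):
    """Fit one multivariate Gaussian per class (supervised maximum likelihood)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = multivariate_normal(mean=Xc.mean(axis=0), cov=np.cov(Xc, rowvar=False))
    return models

def classify_ml(models, X):
    """Assign each pixel the class with the highest Gaussian log-likelihood."""
    classes = list(models)
    ll = np.column_stack([models[c].logpdf(X) for c in classes])
    return np.array(classes)[np.argmax(ll, axis=1)]

# Hypothetical training pixels: columns are [red, NIR, NDVI, elevation]
rng = np.random.default_rng(0)
healthy = rng.normal([0.05, 0.35, 0.72, 2600], [0.01, 0.05, 0.05, 300], (200, 4))
unhealthy = rng.normal([0.09, 0.22, 0.40, 2600], [0.02, 0.05, 0.08, 300], (200, 4))
X = np.vstack([healthy, unhealthy])
y = np.array(['healthy'] * 200 + ['unhealthy'] * 200)

models = train_ml_classifier(X, y)
labels = classify_ml(models, X)
```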

  15. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
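
    The training-data selection rule described above (proportional allocation with a per-class floor and cap) is straightforward to express directly. The sketch below is an illustrative reading of that rule using synthetic labels; it is not the LCMAP or CCDC production code, and the exact allocation rounding is an assumption.

```python
import numpy as np

def select_training_pixels(labels, total=20000, min_per_class=600, max_per_class=8000, seed=0):
    """Draw training pixels proportionally to class occurrence, with per-class caps.

    Proportional allocation, at least 600 and at most 8000 pixels per class,
    roughly 20,000 pixels in total.  `labels` is a 1-D array of class labels
    for all candidate pixels.
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    proportional = counts / counts.sum() * total
    allocation = np.clip(proportional, min_per_class, max_per_class).astype(int)

    selected = []
    for cls, n in zip(classes, allocation):
        idx = np.flatnonzero(labels == cls)
        n = min(n, idx.size)                     # cannot draw more pixels than exist
        selected.append(rng.choice(idx, size=n, replace=False))
    return np.concatenate(selected)

# Hypothetical label map flattened to 1-D (8 very unbalanced classes)
labels = np.random.default_rng(1).choice(
    8, size=1_000_000, p=[.4, .25, .15, .1, .05, .03, .015, .005])
train_idx = select_training_pixels(labels)
```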

  16. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are identified and analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  17. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements, such as small, irregular buildings with heterogeneous roof material and a large presence of clutter, challenge state-of-the-art algorithms. In particular, dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  18. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth of users, services, education contents and resources, E-Learning systems face challenges in optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.

  19. National electronic medical records integration on cloud computing system.

    PubMed

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is an emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  20. Orbiting Carbon Observatory-2 (OCO-2) Cloud Screening; Validation Against Collocated MODIS and Initial Comparison to CALIOP Data

    NASA Technical Reports Server (NTRS)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip; Cronk, Heather W.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert; Crisp, David; hide

    2015-01-01

    The retrieval of the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared can be biased due to contamination by clouds and aerosols within the instrument's field of view (FOV). Therefore, accurate aerosol and cloud screening of soundings is required prior to their use in the computationally expensive XCO2 retrieval algorithm. Robust cloud screening methods have been an important focus of the retrieval algorithm team for the National Aeronautics and Space Administration (NASA) Orbiting Carbon Observatory-2 (OCO-2), which was successfully launched into orbit on July 2, 2014. Two distinct spectrally-based algorithms have been developed for the purpose of cloud clearing OCO-2 soundings. The A-Band Preprocessor (ABP) performs a retrieval of surface pressure using measurements in the 0.76 micron O2 A-band to distinguish changes in the expected photon path length. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) (IDP) algorithm is a non-scattering routine that operates on the O2 A-band as well as two CO2 absorption bands at 1.6 micron (weak CO2 band) and 2.0 micron (strong CO2 band) to provide band-dependent estimates of CO2 and H2O. Spectral ratios of retrieved CO2 and H2O identify measurements contaminated with cloud and scattering aerosols. Information from the two preprocessors is fed into a sounding selection tool to strategically down-select from the order of one million daily soundings collected by OCO-2 to a manageable number (of order 10 to 20%) to be processed by the OCO-2 L2 XCO2 retrieval algorithm. Regional biases or errors in the selection of clear-sky soundings will introduce errors in the final retrieved XCO2 values, ultimately yielding errors in the flux inversion models used to determine global sources and sinks of CO2. In this work collocated measurements from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the Aqua platform, and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), aboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite, are used as a reference to assess the accuracy, strengths, and weaknesses of the OCO-2 screening algorithms. The combination of the ABP and IDP algorithms is shown to provide very robust and complementary cloud filtering as compared to the results from MODIS and CALIOP. With idealized algorithm tuning to allow throughputs of 20-25%, correct classification of scenes, i.e., accuracies, are found to be approximately 80-90% over several orbit repeat cycles in both winter and spring for the three main viewing configurations of OCO-2: nadir-land, glint-land and glint-water. Investigation unveiled no major spatial or temporal dependencies, although slight differences in the seasonal data sets do exist and classification tends to be more problematic with increasing solar zenith angle and when surfaces are covered in snow and ice. An in-depth analysis of both a simulated data set and real OCO-2 measurements against CALIOP highlights the strength of the ABP in identifying high, thin clouds, while it often misses clouds near the surface even when the optical thickness is greater than 1. Fortunately, by combining the ABP with the IDP, the number of thick low clouds passing the preprocessors is partially mitigated.
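
    The way the two preprocessor tests are combined can be illustrated with a simple joint flag, as in the hedged sketch below; the variable names and thresholds are invented for illustration and are not the operational ABP/IDP values.

```python
import numpy as np

def clear_sky_flag(dp_surface_hpa, co2_ratio, h2o_ratio,
                   dp_max=25.0, ratio_tol=0.04):
    """Combine two preprocessor-style tests into a single clear-sky flag.

    dp_surface_hpa : |retrieved - prior| surface pressure from an A-band test;
                     large differences indicate photon path lengthening by
                     cloud or aerosol.
    co2_ratio, h2o_ratio : band ratios of non-scattering CO2/H2O retrievals;
                     values far from 1 indicate scattering contamination.
    The thresholds are hypothetical placeholders, not operational values.
    """
    abp_pass = np.abs(dp_surface_hpa) < dp_max
    idp_pass = (np.abs(co2_ratio - 1.0) < ratio_tol) & (np.abs(h2o_ratio - 1.0) < ratio_tol)
    return abp_pass & idp_pass   # sounding passed on to the full XCO2 retrieval

# Example: screen a batch of hypothetical soundings
dp = np.array([3.0, 40.0, 10.0])
rco2 = np.array([1.01, 0.90, 1.00])
rh2o = np.array([0.99, 1.20, 1.05])
print(clear_sky_flag(dp, rco2, rh2o))   # -> [ True False False]
```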

  1. A Terminal Area Icing Remote Sensing System

    NASA Technical Reports Server (NTRS)

    Reehorst, Andrew L.; Serke, David J.

    2014-01-01

    NASA and the National Center for Atmospheric Research (NCAR) have developed an icing remote sensing technology that has demonstrated skill at detecting and classifying icing hazards in a vertical column above an instrumented ground station. This technology is now being extended to provide volumetric coverage surrounding an airport. With volumetric airport terminal area coverage, the resulting icing hazard information will be usable by aircrews, traffic control, and airline dispatch to make strategic and tactical decisions regarding routing when conditions are conducive to airframe icing. Building on the existing vertical pointing system, the new method for providing volumetric coverage will utilize cloud radar, microwave radiometry, and NEXRAD radar. This terminal area icing remote sensing system will use the data streams from these instruments to provide icing hazard classification along the defined approach paths into an airport. Strategies for comparison to in-situ instruments on aircraft and weather balloons for a planned NASA field test are discussed, as are possible future applications into the NextGen airspace system.

  2. 16-Lead ECG Changes with Coronary Angioplasty - Location of ST-T Changes with Balloon Occlusion of Five Arterial Perfusion Beds

    DTIC Science & Technology

    1991-08-01

    Southeastern Center for Electrical Engineering Education (SCEEE), 11th and Massachusetts Avenues, St Cloud, FL 34769.

  3. Determination of ice water path in ice-over-water cloud systems using combined MODIS and AMSR-E measurements

    NASA Astrophysics Data System (ADS)

    Huang, Jianping; Minnis, Patrick; Lin, Bing; Yi, Yuhong; Fan, T.-F.; Sun-Mack, Sunny; Ayers, J. K.

    2006-11-01

    To provide more accurate ice cloud microphysical properties, the multi-layered cloud retrieval system (MCRS) is used to retrieve ice water path (IWP) in ice-over-water cloud systems globally over oceans using combined instrument data from Aqua. The liquid water path (LWP) of lower-layer water clouds is estimated from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) measurements. The properties of the upper-level ice clouds are then derived from Moderate Resolution Imaging Spectroradiometer (MODIS) measurements by matching simulated radiances from a two-cloud-layer radiative transfer model. The results show that the MCRS can significantly improve the accuracy and reduce the over-estimation of optical depth and IWP retrievals for ice-over-water cloud systems. The mean daytime ice cloud optical depth and IWP for overlapped ice-over-water clouds over oceans from Aqua are 7.6 and 146.4 g m-2, respectively, down from the initial single-layer retrievals of 17.3 and 322.3 g m-2. The mean IWP for actual single-layer clouds is 128.2 g m-2.

  4. Synergistic use of MODIS cloud products and AIRS radiance measurements for retrieval of cloud parameters

    NASA Astrophysics Data System (ADS)

    Li, J.; Menzel, W.; Sun, F.; Schmit, T.

    2003-12-01

    The Moderate-Resolution Imaging Spectroradiometer (MODIS) and Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS) Aqua satellite will enable global monitoring of the distribution of clouds. MODIS is able to provide at high spatial resolution (1 ~ 5km) the cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), and cloud water path (CWP). AIRS is able to provide CTP, ECA, CPS, and CWP within the AIRS footprint with much better accuracy using its greatly enhanced hyperspectral remote sensing capability. The combined MODIS / AIRS system offers the opportunity for cloud products improved over those possible from either system alone. The algorithm developed was applied to process the AIRS longwave cloudy radiance measurements; results are compared with MODIS cloud products, as well as with the Geostationary Operational Environmental Satellite (GOES) sounder cloud products, to demonstrate the advantage of synergistic use of high spatial resolution MODIS cloud products and high spectral resolution AIRS sounder radiance measurements for optimal cloud retrieval. Data from ground-based instrumentation at the Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Test Bed (CART) in Oklahoma were used for the validation; results show that AIRS improves the MODIS cloud products in certain cases such as low-level clouds.

  5. Impact of large-scale dynamics on the microphysical properties of midlatitude cirrus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlbauer, Andreas; Ackerman, Thomas P.; Comstock, Jennifer M.

    2014-04-16

    In situ microphysical observations of mid-latitude cirrus collected during the Department of Energy Small Particles in Cirrus (SPARTICUS) field campaign are combined with an atmospheric state classification for the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site to understand statistical relationships between cirrus microphysics and the large-scale meteorology. The atmospheric state classification is informed about the large-scale meteorology and state of cloudiness at the ARM SGP site by combining ECMWF ERA-Interim reanalysis data with 14 years of continuous observations from the millimeter-wavelength cloud radar. Almost half of the cirrus cloud occurrences in the vicinity of the ARM SGP site during SPARTICUS can be explained by three distinct synoptic conditions, namely upper-level ridges, mid-latitude cyclones with frontal systems and subtropical flows. Probability density functions (PDFs) of cirrus microphysical properties such as particle size distributions (PSDs), ice number concentrations and ice water content (IWC) are examined and exhibit striking differences among the different synoptic regimes. Generally, narrower PSDs with lower IWC but higher ice number concentrations are found in cirrus sampled in upper-level ridges whereas cirrus sampled in subtropical flows, fronts and aged anvils show broader PSDs with considerably lower ice number concentrations but higher IWC. Despite striking contrasts in the cirrus microphysics for different large-scale environments, the PDFs of vertical velocity are not different, suggesting that vertical velocity PDFs are a poor predictor for explaining the microphysical variability in cirrus. Instead, cirrus microphysical contrasts may be driven by differences in ice supersaturations or aerosols.

  6. Classification and global distribution of ocean precipitation types based on satellite passive microwave signatures

    NASA Astrophysics Data System (ADS)

    Gautam, Nitin

    The main objectives of this thesis are to develop a robust statistical method for the classification of ocean precipitation based on physical properties to which the SSM/I is sensitive and to examine how these properties vary globally and seasonally. A two-step approach is adopted for the classification of oceanic precipitation classes from multispectral SSM/I data: (1) we subjectively define precipitation classes using a priori information about the precipitating system and its possible distinct signature on SSM/I data, such as scattering by ice particles aloft in the precipitating cloud, emission by liquid rain water below the freezing level, and the difference of polarization at 19 GHz, an indirect measure of optical depth; (2) we then develop an objective classification scheme which is found to reproduce the subjective classification with high accuracy. This hybrid strategy allows us to use the characteristics of the data to define and encode classes and helps retain the physical interpretation of classes. Classification methods based on the k-nearest neighbor and neural network approaches are developed to objectively classify six precipitation classes. It is found that the classification method based on the neural network yields high accuracy for all precipitation classes. An inversion method based on a minimum variance approach was used to retrieve gross microphysical properties of these precipitation classes such as column integrated liquid water path, column integrated ice water path, and column integrated rain water path. This classification method is then applied to 2 years (1991-92) of SSM/I data to examine and document the seasonal and global distribution of precipitation frequency corresponding to each of these objectively defined six classes. The characteristics of the distribution are found to be consistent with assumptions used in defining these six precipitation classes and also with well known climatological patterns of precipitation regions. The seasonal and global distribution of these six classes is also compared with the earlier results obtained from Comprehensive Ocean Atmosphere Data Sets (COADS). It is found that the gross pattern of the distributions obtained from SSM/I and COADS data match remarkably well with each other.
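
    A minimal sketch of the objective classification step is given below, assuming synthetic SSM/I-like brightness-temperature features and arbitrary class labels; the real study trained on the subjectively labelled classes described above, and the channel set and sample sizes here are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature matrix: SSM/I brightness temperatures (e.g. 19V, 19H, 85V),
# from which quantities like the 19 GHz polarization difference could be derived.
rng = np.random.default_rng(0)
n = 3000
X = np.column_stack([
    rng.normal(260, 15, n),   # 19V (K)
    rng.normal(250, 15, n),   # 19H (K)
    rng.normal(255, 20, n),   # 85V (K)
])
y = rng.integers(0, 6, n)     # six precipitation classes (labels are synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("k-NN accuracy on held-out pixels:", knn.score(X_test, y_test))
```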

  7. Heterogeneous access and processing of EO-Data on a Cloud based Infrastructure delivering operational Products

    NASA Astrophysics Data System (ADS)

    Niggemann, F.; Appel, F.; Bach, H.; de la Mar, J.; Schirpke, B.; Dutting, K.; Rucker, G.; Leimbach, D.

    2015-04-01

    To address the challenges of effective data handling faced by Small and Medium Sized Enterprises (SMEs), a cloud-based infrastructure for accessing and processing Earth Observation (EO) data has been developed within the project APPS4GMES (www.apps4gmes.de). To gain homogeneous multi-mission data access, an Input Data Portal (IDP) has been implemented on this infrastructure. The IDP consists of an Open Geospatial Consortium (OGC) conformant catalogue, a consolidation module for format conversion and an OGC-conformant ordering framework. Metadata from various EO sources and with different standards is harvested, transferred to the OGC-conformant Earth Observation Product standard and inserted into the catalogue by a Metadata Harvester. The IDP can be accessed for searching and ordering of the harvested datasets by the services implemented on the cloud infrastructure. Different land-surface services have been realised by the project partners using the implemented IDP and cloud infrastructure. Their results are customer-ready products as well as pre-products (e.g. atmospherically corrected EO data) serving as a basis for other services. Within the IDP, automated access to ESA's Sentinel-1 Scientific Data Hub has been implemented, so that searching and downloading of the SAR data can be performed in an automated way. With the implementation of the Sentinel-1 Toolbox and in-house software, processing of the datasets for further use, for example for Vista's snow monitoring, which delivers input for the flood forecast services, can also be performed in an automated way. For performance tests of the cloud environment, a sophisticated model-based atmospheric correction and pre-classification service has been implemented. The tests comprised automated, synchronised processing of one entire Landsat 8 (LS-8) coverage of Germany and performance comparisons against standard desktop systems. The results of these tests, showing a performance improvement by a factor of six, proved the high flexibility and computing power of the cloud environment. To make full use of the cloud capabilities, automated upscaling of the hardware resources has been implemented. Together with the IDP infrastructure, fast and automated processing of various satellite sources into market-ready products can be realised, so that increasing customer needs and numbers can be satisfied without loss of accuracy and quality.

  8. Detection of ground fog in mountainous areas from MODIS (Collection 051) daytime data using a statistical approach

    NASA Astrophysics Data System (ADS)

    Schulz, Hans Martin; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2016-03-01

    The mountain cloud forest of Taiwan can be delimited from other forest types using a map of the ground fog frequency. In order to create such a frequency map from remotely sensed data, an algorithm able to detect ground fog is necessary. Common techniques for ground fog detection based on weather satellite data cannot be applied to fog occurrences in Taiwan as they rely on several assumptions regarding cloud properties. Therefore a new statistical method for the detection of ground fog in mountainous terrain from MODIS Collection 051 data is presented. Due to the sharpening of input data using MODIS bands 1 and 2, the method provides fog masks in a resolution of 250 m per pixel. The new technique is based on negative correlations between optical thickness and terrain height that can be observed if a cloud that is relatively plane-parallel is truncated by the terrain. A validation of the new technique using camera data has shown that the quality of fog detection is comparable to that of another modern fog detection scheme developed and validated for the temperate zones. The method is particularly applicable to optically thinner water clouds. Beyond a cloud optical thickness of ≈ 40, classification errors significantly increase.
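
    The core idea, a negative correlation between cloud optical thickness and terrain height inside a local window, can be sketched as below; the window size and correlation threshold are illustrative assumptions, not the tuned values of the published scheme.

```python
import numpy as np

def ground_fog_candidates(cot, dem, window=7, corr_threshold=-0.6):
    """Flag cloud pixels where optical thickness decreases with terrain height.

    For each pixel, the Pearson correlation between cloud optical thickness
    (cot) and terrain elevation (dem) is computed in a small moving window;
    strongly negative correlations suggest a roughly plane-parallel cloud
    truncated by the terrain, i.e. ground fog.
    """
    h = window // 2
    flag = np.zeros_like(cot, dtype=bool)
    for i in range(h, cot.shape[0] - h):
        for j in range(h, cot.shape[1] - h):
            c = cot[i - h:i + h + 1, j - h:j + h + 1].ravel()
            z = dem[i - h:i + h + 1, j - h:j + h + 1].ravel()
            if np.std(c) > 0 and np.std(z) > 0:
                r = np.corrcoef(c, z)[0, 1]
                flag[i, j] = r < corr_threshold
    return flag

# Toy example: a cloud deck of constant top height draped over sloping terrain
dem = np.tile(np.linspace(0, 1000, 50), (50, 1))
cot = np.clip(40 - 0.04 * dem + np.random.default_rng(0).normal(0, 1, (50, 50)), 0, None)
fog = ground_fog_candidates(cot, dem)
```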

  9. Comparison of the filtering models for airborne LiDAR data by three classifiers with exploration on model transfer

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Cai, Zhan; Zhang, Liang

    2018-01-01

    This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results with an average total error of 5.50%. The paper also makes a tentative exploration of applying transfer learning theory to point cloud filtering, which, to the authors' knowledge, has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
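
    A toy version of this kind of height-feature-based ground/non-ground filtering is sketched below; the feature set (three neighbourhood height statistics), the synthetic point cloud, and the labelling rule are placeholders rather than the paper's nineteen features or the ISPRS reference data.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def height_features(xyz, radius=5.0):
    """A few height-based features: height above the lowest neighbour, height
    range, and height standard deviation within a planimetric neighbourhood."""
    tree = cKDTree(xyz[:, :2])
    feats = np.zeros((len(xyz), 3))
    for i, p in enumerate(xyz):
        idx = tree.query_ball_point(p[:2], r=radius)
        z = xyz[idx, 2]
        feats[i] = [p[2] - z.min(), z.max() - z.min(), z.std()]
    return feats

# 'xyz' and the binary 'is_ground' labels would come from reference datasets
# such as the ISPRS benchmarks; here they are synthetic placeholders.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 100, (2000, 3))
is_ground = (xyz[:, 2] < 20).astype(int)   # purely illustrative labelling

X = height_features(xyz)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, is_ground)
# A model trained on one project can then be applied ('transferred') to another
# survey's features with little or no parameter change.
pred = rf.predict(X)
```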

  10. Classification Algorithms for Big Data Analysis, a Map Reduce Approach

    NASA Astrophysics Data System (ADS)

    Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.

    2015-03-01

    For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as the aspects that affect its performance.
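
    The map/reduce structure of such a classification tool can be illustrated in-process: train once, classify splits of the data in the map step, and merge the partial results in the reduce step. The sketch below uses a scikit-learn SVM on synthetic data purely for illustration; the actual package wraps WEKA classifiers inside Hadoop MapReduce jobs, so the function names and data layout here are assumptions.

```python
from functools import reduce
import numpy as np
from sklearn.svm import SVC

def map_classify(model, chunk):
    """Map step: classify one split of the (big) dataset."""
    return model.predict(chunk)

def reduce_concat(a, b):
    """Reduce step: merge partial label arrays."""
    return np.concatenate([a, b])

# Train once on a labelled sample (in ICP this would be a WEKA classifier)...
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
svm = SVC(kernel='rbf').fit(X_train, y_train)

# ...then classify a large dataset split into chunks.  With Hadoop MapReduce the
# chunks would live on HDFS and the map tasks would run on cluster nodes; here
# the same map/reduce structure is shown in a single process for illustration.
big_X = rng.normal(size=(100_000, 4))
chunks = np.array_split(big_X, 10)
labels = reduce(reduce_concat, (map_classify(svm, c) for c in chunks))
```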

  11. The Clouds distributed operating system - Functional description, implementation details and related work

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.; Appelbe, William F.

    1988-01-01

    Clouds is an operating system in a novel class of distributed operating systems providing the integration, reliability, and structure that makes a distributed system usable. Clouds is designed to run on a set of general purpose computers that are connected via a medium-to-high speed local area network. The system structuring paradigm chosen for the Clouds operating system, after substantial research, is an object/thread model. All instances of services, programs and data in Clouds are encapsulated in objects. The concept of persistent objects does away with the need for file systems, and replaces it with a more powerful concept, namely the object system. The facilities in Clouds include integration of resources through location transparency; support for various types of atomic operations, including conventional transactions; advanced support for achieving fault tolerance; and provisions for dynamic reconfiguration.

  12. Privacy-Preserving Patient-Centric Clinical Decision Support System on Naïve Bayesian Classification.

    PubMed

    Liu, Ximeng; Lu, Rongxing; Ma, Jianfeng; Chen, Le; Qin, Baodong

    2016-03-01

    Clinical decision support systems, which use advanced data mining techniques to help clinicians make proper decisions, have received considerable attention recently. The advantages of a clinical decision support system include not only improving diagnosis accuracy but also reducing diagnosis time. Specifically, with the large amounts of clinical data generated every day, naïve Bayesian classification can be utilized to excavate valuable information to improve a clinical decision support system. Although the clinical decision support system is quite promising, its flourishing still faces many challenges, including information security and privacy concerns. In this paper, we propose a new privacy-preserving patient-centric clinical decision support system, which helps clinicians diagnose the risk of patients' diseases in a privacy-preserving way. In the proposed system, past patients' historical data are stored in the cloud and can be used to train the naïve Bayesian classifier without leaking any individual patient's medical data, and the trained classifier can then be applied to compute the disease risk for newly arriving patients and also allow these patients to retrieve the top-k disease names according to their own preferences. Specifically, to protect the privacy of past patients' historical data, a new cryptographic tool called an additive homomorphic proxy aggregation scheme is designed. Moreover, to limit the leakage of the naïve Bayesian classifier, we introduce a privacy-preserving top-k disease names retrieval protocol in our system. Detailed privacy analysis ensures that patient information is private and will not be leaked out during the disease diagnosis phase. In addition, performance evaluation via extensive simulations demonstrates that our system can efficiently calculate a patient's disease risk with high accuracy in a privacy-preserving way.
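
    Setting the cryptographic layer aside, the classifier-plus-retrieval step can be sketched in plaintext as below; the feature columns, disease names, and training records are synthetic stand-ins, and the actual system performs the equivalent computation over encrypted data rather than with scikit-learn.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical historical records: symptom/vital-sign features and diagnosed disease.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 6))
diseases = np.array(['flu', 'diabetes', 'hypertension', 'asthma'])
y_hist = rng.choice(diseases, size=1000)

# In the proposed system this training happens over encrypted data in the cloud;
# the plaintext equivalent is shown only to illustrate the classifier and the
# top-k retrieval step.
nb = GaussianNB().fit(X_hist, y_hist)

def top_k_diseases(model, x_new, k=2):
    """Return the k most probable disease names for a new patient."""
    probs = model.predict_proba(x_new.reshape(1, -1))[0]
    order = np.argsort(probs)[::-1][:k]
    return list(zip(model.classes_[order], probs[order]))

print(top_k_diseases(nb, rng.normal(size=6), k=2))
```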

  13. Complexity in Climatic Controls on Plant Species Distribution: Satellite Data Reveal Unique Climate for Giant Sequoia in the California Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Waller, Eric Kindseth

    A better understanding of the environmental controls on current plant species distribution is essential if the impacts of such diverse challenges as invasive species, changing fire regimes, and global climate change are to be predicted and important diversity conserved. Climate, soil, hydrology, various biotic factors, fire, history, and chance can all play a role, but disentangling these factors is a daunting task. Increasingly sophisticated statistical models relying on existing distributions and mapped climatic variables, among others, have been developed to try to answer these questions. Any failure to explain pattern with existing mapped climatic variables is often taken as a referendum on climate as a whole, rather than on the limitations of the particular maps or models. Every location has a unique and constantly changing climate so that any distribution could be explained by some aspect of climate. Chapter 1 of this dissertation reviews some of the major flaws in species distribution modeling and addresses concerns that climate may therefore not be predictive of, or even relevant to, species distributions. Despite problems with climate-based models, climate and climate-derived variables still have substantial merit for explaining species distribution patterns. Additional generation of relevant climate variables and improvements in other climate and climate-derived variables are still needed to demonstrate this more effectively. Satellite data have a long history of being used for vegetation mapping and even species distribution mapping. They have great potential for being used for additional climatic information, and for improved mapping of other climate and climate-derived variables. Improving the characterization of cloud cover frequency with satellite data is one way in which the mapping of important climate and climate-derived variables can be improved. An important input to water balance models, solar radiation maps could be vastly improved with a better mapping of spatial and temporal patterns in cloud cover. Chapter 2 of this dissertation describes the generation of custom daily cloud cover maps from Advanced Very High Resolution Radiometer (AVHRR) satellite data from 1981-1999 at ~5 km resolution and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite reflectance data at ~500 meter resolution for much of the western U.S., from 2000 to 2012. Intensive comparisons of reflectance spectra from a variety of cloud and snow-covered scenes from the southwestern United States allowed the generation of new rules for the classification of clouds and snow in both the AVHRR and MODIS data. The resulting products avoid many of the problems that plague other cloud mapping efforts, such as the tendency for snow cover and bright desert soils to be mapped as cloud. This consistency in classification across cover types is critically important for any distribution modeling of a plant species that might be dependent on cloud cover. In Chapter 3, monthly cloud frequencies derived from the daily classifications were used directly in species distribution models for giant sequoia and were found to be the strongest predictors of giant sequoia distribution. A high frequency of cloud cover, especially in the spring, differentiated the climate of the west slope of the southern Sierra Nevada, where giant sequoia are prolific, from central and northern parts of the range, where the tree is rare and generally absent.
Other mapped cloud products, contaminated by confusion with high elevation snow, would likely not have found this important result. The result illustrates the importance of accuracy in mapping as well as the importance of previously overlooked aspects of climate for species distribution modeling. But it also raises new questions about why the clouds form where they do and whether they might be associated with other aspects of climate important to giant sequoia distribution. What are the exact climatic mechanisms governing the distribution? Detailed aspects of the local climate warranted more investigation. Chapter 4 investigates the climate associated with the frequent cloud formation over the western slopes of the southern Sierra Nevada: the "sequoia belt". This region is climatically distinct in a number of ways, all of which could be factors in influencing the distribution of giant sequoia and other species. Satellite and micrometeorological flux tower data reveal characteristics of the sequoia belt that were not evident with surface climate measurements and maps derived from them. Results have implications for species distributions everywhere, but especially in rugged mountains, where climates are complex and poorly mapped. Chapter 5 summarizes some of the main conclusions from the work and suggests directions for related future research. (Abstract shortened by UMI.).

  14. Determination of Ice Water Path in Ice-over-Water Cloud Systems Using Combined MODIS and AMSR-E Measurements

    NASA Technical Reports Server (NTRS)

    Huang, Jianping; Minnis, Patrick; Lin, Bing; Yi, Yuhong; Fan, T.-F.; Sun-Mack, Sunny; Ayers, J. K.

    2006-01-01

    To provide more accurate ice cloud properties for evaluating climate models, the updated version of the multi-layered cloud retrieval system (MCRS) is used to retrieve ice water path (IWP) in ice-over-water cloud systems over the global ocean using combined instrument data from the Aqua satellite. The liquid water path (LWP) of lower layer water clouds is estimated from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) measurements. With the lower layer LWP known, the properties of the upper-level ice clouds are then derived from Moderate Resolution Imaging Spectroradiometer measurements by matching simulated radiances from a two-cloud layer radiative transfer model. Comparisons with single-layer cirrus systems and surface-based radar retrievals show that the MCRS can significantly improve the accuracy and reduce the over-estimation of optical depth and ice water path retrievals for ice-over-water cloud systems. During the period from December 2004 through February 2005, the mean daytime ice cloud optical depth and IWP for overlapped ice-over-water clouds over ocean from Aqua are 7.6 and 146.4 g m-2, respectively, significantly less than the initial single layer retrievals of 17.3 and 322.3 g m-2. The mean IWP for actual single-layer clouds was 128.2 g m-2.

  15. Star clusters in the Magellanic Clouds - I. Parametrization and classification of 1072 clusters in the LMC

    NASA Astrophysics Data System (ADS)

    Nayak, P. K.; Subramaniam, A.; Choudhury, S.; Indu, G.; Sagar, Ram

    2016-12-01

    We have introduced a semi-automated quantitative method to estimate the age and reddening of 1072 star clusters in the Large Magellanic Cloud (LMC) using the Optical Gravitational Lensing Experiment III survey data. This study brings out 308 newly parametrized clusters. In a first of its kind, the LMC clusters are classified into groups based on richness/mass as very poor, poor, moderate and rich clusters, similar to the classification scheme of open clusters in the Galaxy. A major cluster formation episode is found to happen at 125 ± 25 Myr in the inner LMC. The bar region of the LMC appears prominently in the age range 60-250 Myr and is found to have a relatively higher concentration of poor and moderate clusters. The eastern and the western ends of the bar are found to form clusters initially, which later propagates to the central part. We demonstrate that there is a significant difference in the distribution of clusters as a function of mass, using a movie based on the propagation (in space and time) of cluster formation in various groups. The importance of including the low-mass clusters in the cluster formation history is demonstrated. The catalogue with parameters, classification, and cleaned and isochrone fitted colour-magnitude diagrams of 1072 clusters, which are available as online material, can be further used to understand the hierarchical formation of clusters in selected regions of the LMC.

  16. Synthetic aperture radar for a crop information system: A multipolarization and multitemporal approach

    NASA Astrophysics Data System (ADS)

    Ban, Yifang

    Acquisition of timely information is a critical requirement for successful management of an agricultural monitoring system. Crop identification and crop-area estimation can be done fairly successfully using satellite sensors operating in the visible and near-infrared (VIR) regions of the spectrum. However, data collection can be unreliable due to problems of cloud cover at critical stages of the growing season. The all-weather capability of synthetic aperture radar (SAR) imagery acquired from satellites provides data over large areas whenever crop information is required. At the same time, SAR is sensitive to surface roughness and should be able to provide surface information such as tillage-system characteristics. With the launch of ERS-1, the first long-duration SAR system became available. The analysis of airborne multipolarization SAR data, multitemporal ERS-1 SAR data, and their combinations with VIR data, is necessary for the development of image-analysis methodologies that can be applied to RADARSAT data for extracting agricultural crop information. The overall objective of this research is to evaluate multipolarization airborne SAR data, multitemporal ERS-1 SAR data, and combinations of ERS-1 SAR and satellite VIR data for crop classification using non-conventional algorithms. The study area is situated in Norwich Township, an agricultural area in Oxford County, southern Ontario, Canada. It has been selected as one of the few representative agricultural 'supersites' across Canada at which the relationships between radar data and agriculture are being studied. The major field crops are corn, soybeans, winter wheat, oats, barley, alfalfa, hay, and pasture. Using airborne C-HH and C-HV SAR data, it was found that approaches using contextual information, texture information and per-field classification proved to be effective for improving agricultural crop classification, especially the per-field classification method. Results show that three of the four best per-field classification accuracies (kappa = 0.91) are achieved using combinations of C-HH and C-VV SAR data. This confirms the strong potential of multipolarization data for crop classification. The synergistic effects of multitemporal ERS-1 SAR and Landsat TM data are evaluated for crop classification using an artificial neural network (ANN) approach. The results show that the per-field approach using a feed-forward ANN significantly improves the overall classification accuracy of both single-date and multitemporal SAR data. Using the combination of TM3,4,5 and Aug. 5 SAR data, the best per-field ANN classification of 96.8% was achieved. It represents an 8.5% improvement over a single TM3,4,5 classification alone. Using multitemporal ERS-1 SAR data acquired during the 1992 and 1993 growing seasons, the radar backscatter characteristics of crops and their underlying soils are analyzed. The SAR temporal backscatter profiles were generated for each crop type and the earliest times of the year for differentiation of individual crop types were determined. Orbital (incidence-angle) effects were also observed on all crops. The average difference between the two orbits was about 3 dB. Thus attention should be given to the local incidence-angle effects when using ERS-1 SAR data, especially when comparing fields from different scenes or different areas within the same scene. 
Finally, early- and mid-season multitemporal SAR data for crop classification using sequential-masking techniques are evaluated, based on the temporal backscatter profiles. It was found that all crops studied could be identified by July 21.

  17. First X-ray Statistical Tests for Clumpy Torus Models: Constraints from RXTE monitoring of Seyfert AGN

    NASA Astrophysics Data System (ADS)

    Markowitz, A.

    2015-09-01

    We summarize two papers providing the first X-ray-derived statistical constraints for both clumpy-torus model parameters and cloud ensemble properties. In Markowitz, Krumpe, & Nikutta (2014), we explored multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We examined 55 Seyferts monitored with the Rossi X-ray Timing Explorer, and found in 8 objects a total of 12 eclipses, with durations between hours and years. Most clouds are commensurate with the outer portions of the BLR, or the inner regions of infrared-emitting dusty tori. The detection of eclipses in type Is disfavors sharp-edged tori. We provide probabilities to observe a source undergoing an absorption event for both type Is and IIs, yielding constraints in [N_0, sigma, i] parameter space. In Nikutta et al., in prep., we infer that the small cloud angular sizes, as seen from the SMBH, imply the presence of >10^7 clouds in BLR+torus to explain observed covering factors. Cloud size is roughly proportional to distance from the SMBH, hinting at the formation processes (e.g. disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces (e.g. magnetic fields, ambient pressure) are needed to contain them, or otherwise the clouds must be short-lived. Finally, we infer that the radial cloud density distribution behaves as 1/r^{0.7}, compatible with VLTI observations. Our results span both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for short-term eclipses observed with XMM-Newton, Suzaku, and Chandra.

  18. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 imagery is to estimate the cloud statistic of an image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic of the image is subsequently recorded as an important metadata item for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence for cloud statistic determination. For the post-processing analysis, the box-counting fractal method is implemented. In other words, the cloud statistic is first determined via the pre-processing analysis, and its correctness across the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is very critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments with clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
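
    Since Otsu's method turned out to be the preferred thresholding step, the following minimal sketch shows one standard way to compute an Otsu threshold for a single band with NumPy; the band loader in the comments is a placeholder, not part of any real Formosat-2 toolchain.

```python
import numpy as np

def otsu_threshold(band, nbins=256):
    """Return the gray level that maximizes the between-class variance (Otsu's method)."""
    hist, bin_edges = np.histogram(band.ravel(), bins=nbins)
    bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    weights = hist.astype(float) / hist.sum()
    w0 = np.cumsum(weights)                # class-0 probability up to each candidate threshold
    w1 = 1.0 - w0
    mu = np.cumsum(weights * bin_centers)  # cumulative mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        between_var = (mu[-1] * w0 - mu) ** 2 / (w0 * w1)
    between_var = np.nan_to_num(between_var)
    return bin_centers[np.argmax(between_var)]

# Hypothetical usage: flag bright pixels of one band as cloud candidates.
# band = load_band("formosat2_scene_band1.tif")   # placeholder loader, not a real API
# cloud_candidates = band > otsu_threshold(band)
```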

  19. A Local Index of Cloud Immersion in Tropical Forests Using Time-Lapse Photography

    NASA Astrophysics Data System (ADS)

    Bassiouni, M.; Scholl, M. A.

    2015-12-01

    Data on the frequency, duration, and elevation of cloud immersion are essential to improve estimates of cloud water deposition in the water budgets of cloud forests. Here, we present a methodology to detect local cloud immersion in remote tropical forests using time-lapse photography. A simple approach is developed to detect cloudy conditions in photographs within the canopy, where image depth during clear conditions may be less than 10 meters and where moving leaves and branches and changes in lighting are unpredictable. A primary innovation of this study is that cloudiness is determined from images without using a reference clear image and without minimal threshold value determination or human judgment for calibration. Five sites ranging from 600 to 1000 meters elevation along a ridge in the Luquillo Critical Zone Observatory, Puerto Rico, were each equipped with a trail camera programmed to take an image every 30 minutes since March 2014. Images were classified using four selected cloud-sensitive image characteristics (SCICs) computed for small image regions: contrast, the coefficient of variation and the entropy of the luminance of each image pixel, and image colorfulness. K-means clustering provided reasonable results to discriminate cloudy from clear conditions. Preliminary results indicate that 79-94% (daytime) and 85-93% (nighttime) of validation images were classified accurately at one open and two closed canopy sites. The Euclidean distances between the SCIC vectors of images during cloudy conditions and the SCIC vector of the centroid of the cluster of clear images show potential to quantify cloud density in addition to immersion. The classification method will be applied to determine spatial and temporal patterns of cloud immersion in the study area. The presented approach offers promising applications to increase observations of low-lying clouds at remote mountain sites where standard instruments to measure visibility and cloud base may not be practical.
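
    As a rough illustration of this workflow, the sketch below clusters per-image feature vectors into a cloudy and a clear group with scikit-learn; the feature formulas are simple stand-ins for the four SCICs named above (contrast, coefficient of variation, luminance entropy, colorfulness) and may differ from the exact definitions used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

def scic_features(luminance, rgb):
    """Stand-in SCICs for one image region: contrast, coefficient of variation,
    luminance entropy, and a simple colorfulness measure."""
    lum = luminance.ravel().astype(float)
    contrast = lum.max() - lum.min()
    coeff_var = lum.std() / (lum.mean() + 1e-9)
    hist, _ = np.histogram(lum, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    rg = rgb[..., 0].astype(float) - rgb[..., 1]
    yb = 0.5 * (rgb[..., 0].astype(float) + rgb[..., 1]) - rgb[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return [contrast, coeff_var, entropy, colorfulness]

# X: one SCIC row per image; the two clusters correspond to cloudy vs clear conditions.
# X = np.array([scic_features(lum, rgb) for lum, rgb in image_pairs])
# labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```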

  20. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster-Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation, and topographic change measurement. Yet, all instruments and methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of positional accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds), and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
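
    To make the contrast concrete, the sketch below juxtaposes a naive rasterization step with a direct cloud-to-cloud nearest-neighbour distance computed on the raw points; it is only an illustration of the two workflow styles, not an implementation of CANUPO or M3C2.

```python
import numpy as np
from scipy.spatial import cKDTree

def dem_from_points(points, cell=1.0):
    """Naive rasterization: mean elevation per grid cell. Overhanging or vertical
    features collapse into single cells and empty cells still need interpolation."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    shape = tuple(ij.max(axis=0) + 1)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
    dem = np.full(shape, np.nan)
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]
    return dem

def cloud_to_cloud_distance(reference, compared):
    """Distance from each compared point to its nearest reference point,
    computed directly in 3D without any gridding of the data."""
    tree = cKDTree(reference)
    distances, _ = tree.query(compared, k=1)
    return distances
```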

  1. Upper tropospheric cloud systems determined from IR Sounders and their influence on the atmosphere

    NASA Astrophysics Data System (ADS)

    Stubenrauch, Claudia; Protopapadaki, Sofia; Feofilov, Artem; Velasco, Carola Barrientos

    2017-02-01

    Covering about 30% of the Earth, upper tropospheric clouds play a key role in the climate system by modulating the Earth's energy budget and heat transport. Infrared sounders reliably identify cirrus down to an IR optical depth of 0.1. Recently, LMD has built global cloud climate data records from AIRS and IASI observations, covering the periods 2003-2015 and 2008-2015, respectively. Upper tropospheric clouds often form mesoscale systems. Their organization and properties are being studied by (1) distinguishing cloud regimes within 2° × 2° regions and (2) applying a spatial composite technique to adjacent cloud pressures, which estimates the horizontal extent of the mesoscale cloud systems. Convective core, cirrus anvil, and thin cirrus parts of these systems are then distinguished by their emissivity. Compared to other studies of tropical mesoscale convective systems, our data also include the thinner anvil parts, which make up about 30% of the area of tropical mesoscale convective systems. Once the horizontal and vertical structure of these upper tropospheric cloud systems is known, we can estimate their radiative effects in terms of top-of-atmosphere and surface radiative fluxes and by computing their heating rates.

  2. A macrophysical life cycle description for precipitating systems

    NASA Astrophysics Data System (ADS)

    Evaristo, Raquel; Xie, Xinxin; Troemel, Silke; Diederich, Malte; Simon, Juergen; Simmer, Clemens

    2014-05-01

    The lack of understanding of cloud and precipitation processes is still the overarching problem of climate simulation and prediction. The work presented is part of the HD(CP)2 project (High Definition Clouds and Precipitation for Advancing Climate Predictions), which aims at building a very high resolution model in order to evaluate and exploit regional hindcasts for the purpose of parameterization development. To this end, an observational object-based climatology for precipitation systems will be built, and shall later be compared with a twin model-based climatological database for pseudo precipitation events within an event-based model validation approach. This is done by identifying internal structures, described by means of macrophysical descriptors used to characterize the temporal development of tracked rain events. Two prerequisites are necessary for this: 1) a tracking algorithm, and 2) a 3D radar/satellite composite. Both prerequisites are ready to be used and have already been applied to a few case studies. Some examples of these macrophysical descriptors are differential reflectivity columns, bright band fraction and trend, cloud top heights, the spatial extent of updrafts or downdrafts, and the ice content. We will show one case study from 5 August 2012, when convective precipitation was observed simultaneously by the BOXPOL and JUXPOL X-band polarimetric radars. We will follow the main paths identified by the tracking algorithm during this event and identify in the 3D composite the descriptors that characterize precipitation development, their temporal evolution, and the different macrophysical processes that are ultimately related to the precipitation observed. In a later stage these observations will be compared to the results of a hydrometeor classification algorithm, in order to link the macrophysical and microphysical aspects of the storm evolution. The detailed microphysical processes are the subject of a closely related work also presented in this session: Microphysical processes observed by X-band polarimetric radars during the evolution of storm systems, by Xinxin Xie et al.

  3. SAR data for river ice monitoring. How to meet requirements?

    NASA Astrophysics Data System (ADS)

    Łoś, Helena; Osińska-Skotak, Katarzyna; Pluto-Kossakowska, Joanna

    2017-04-01

    Although river ice is a natural element of a river's regime, it can lead to severe problems such as winter floods or damage to bridges and bank revetments. Services that monitor river ice conditions are still often based on field observation. For several years, however, Earth observation data have become of great interest, especially SAR images, which allow ice and river conditions to be observed independently of clouds and sunlight. One of the requirements of an effective monitoring system is frequent and regular data acquisition. To help meet this requirement, we assessed the impact of selected SAR data parameters on automatic ice type identification. The presented work consists of two parts. The first focuses on a comparison of C-band and X-band data in terms of main ice type detection. The second part contains an analysis of polarisation reduction from quad-pol to dual-pol data. As the main element of the data processing we chose supervised classification with a maximum likelihood algorithm adapted to the Wishart distribution. The classification was preceded by a statistical analysis of the radar signal obtained for selected ice types, including separability measures. Two rivers were selected as areas of interest - the Peace River in Canada and the Vistula in Poland. The results show that, using data registered in both bands, similar accuracy of classification into main ice types can be obtained. Differences appear with details, e.g. thin initial ice. Classification results obtained from quad-pol and dual-pol data were similar when four classes were selected. With six classes, however, differences between polarisation types were noticed.
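
    For readers unfamiliar with the Wishart-adapted maximum likelihood rule mentioned above, the sketch below shows the usual per-pixel decision: each pixel's sample covariance matrix is assigned to the ice class whose mean covariance matrix minimizes the Wishart distance. The class list in the comments is hypothetical, and the per-pixel covariance matrices are assumed to be given.

```python
import numpy as np

def wishart_distance(pixel_cov, class_cov):
    """Wishart classifier distance ln|C_m| + tr(C_m^-1 Z) between a pixel
    covariance matrix Z and a class-mean covariance matrix C_m."""
    _, logdet = np.linalg.slogdet(class_cov)
    return logdet + np.trace(np.linalg.solve(class_cov, pixel_cov)).real

def classify_pixel(pixel_cov, class_covs):
    """Assign the pixel to the class with the smallest Wishart distance."""
    distances = [wishart_distance(pixel_cov, c) for c in class_covs]
    return int(np.argmin(distances))

# class_covs would be mean covariance matrices estimated from training fields,
# one per ice class (e.g., open water, initial ice, consolidated ice).
```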

  4. Morphology and ionization of the interstellar cloud surrounding the solar system.

    PubMed

    Frisch, P C

    1994-09-02

    The first encounter between the sun and the surrounding interstellar cloud appears to have occurred 2000 to 8000 years ago. The sun and cloud space motions are nearly perpendicular, an indication that the sun is skimming the cloud surface. The electron density derived for the surrounding cloud from the carbon component of the anomalous cosmic ray population in the solar system and from the interstellar ratio of Mg(+) to Mg(0) toward Sirius supports an equilibrium model for cloud ionization (an electron density of 0.22 to 0.44 per cubic centimeter). The upwind magnetic field direction is nearly parallel to the cloud surface. The relative sun-cloud motion indicates that the solar system has a bow shock.

  5. The Discovery of Herbig–Haro Objects in LDN 673

    NASA Astrophysics Data System (ADS)

    Rector, T. A.; Shuping, R. Y.; Prato, L.; Schweiker, H.

    2018-01-01

    We report the discovery of 12 faint Herbig–Haro (HH) objects in LDN 673 found using a novel color-composite imaging method that reveals faint Hα emission in complex environments. Follow-up observations in [S II] confirmed their classification as HH objects. Potential driving sources are identified from the Spitzer c2d Legacy Program catalog and other infrared observations. The 12 new HH objects can be divided into three groups: four are likely associated with a cluster of eight young stellar object class I/II IR sources that lie between them; five are collinear with the T Tauri multiple star system AS 353 and are likely driven by the same source as HH 32 and HH 332; and three are bisected by a very red source that coincides with an infrared dark cloud. We also provide updated coordinates for the three components of HH 332, for which inaccurate values were given in the discovery paper. The discovery of HH objects and associated driving sources in this region provides new evidence for star formation in the Aquila clouds, implying a much larger T Tauri population in a seldom-studied region.

  6. UAS-based automatic bird count of a common gull colony

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2013-08-01

    The standard procedure for counting birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (±5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Based on the experience of 2011, the automatic bird count of 2012 became more efficient and more accurate: for 2012, 1938 birds were counted with an accuracy of approximately ±3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud. The point cloud was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.

  7. Clouds Aerosols Internal Affaires: Increasing Cloud Fraction and Enhancing the Convection

    NASA Technical Reports Server (NTRS)

    Koren, Ilan; Kaufman, Yoram; Remer, Lorraine; Rosenfeld, Danny; Rudich, Yinon

    2004-01-01

    Clouds developing in a polluted environment have more numerous, smaller cloud droplets that can increase the cloud lifetime and liquid water content. Such changes in the cloud droplet properties may suppress low precipitation, allowing development of stronger convection and a higher freezing level. Delaying the washout of the cloud water (and aerosol), together with the stronger convection, will result in higher clouds with longer lifetimes and larger anvils. We show these effects by using large statistics of the new, 1 km resolution data from MODIS on the Terra satellite. We isolate the aerosol effects from meteorology by regression, showing that aerosol microphysical effects increase cloud fraction by an average of 30 percent for all cloud types and increase convective cloud top pressure by an average of 35 mb. We analyze the aerosol-cloud interaction separately for high pressure trade wind cloud systems and for deep convective cloud systems. The resultant aerosol radiative effect on climate for the high pressure cloud systems is -10 to -13 W/sq m at the top of the atmosphere (TOA) and -11 to -14 W/sq m at the surface. For deeper convective clouds the forcing is -4 to -5 W/sq m at the TOA and -6 to -7 W/sq m at the surface.

  8. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    NASA Astrophysics Data System (ADS)

    Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev

    2016-03-01

    Clouds are the dominant source of small-scale variability in surface solar radiation and of uncertainty in its prediction. At the same time, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Convective-type clouds in particular lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than the error introduced by a single pyranometer used to represent the whole area, for distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
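
    The notion of forecast skill relative to persistence used above can be summarized with a short sketch; the RMSE-based skill score below is a common convention assumed for the example, and the paper's exact error metric may differ.

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error between two GHI time series (e.g., in W/m^2)."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

def forecast_skill(forecast, persistence, observed):
    """Skill relative to persistence: positive when the sky-imager forecast has a
    lower RMSE than the persistence reference, negative when it does not."""
    return 1.0 - rmse(forecast, observed) / rmse(persistence, observed)

# Example for a single pyranometer and one forecast horizon:
# skill = forecast_skill(ghi_forecast, ghi_persistence, ghi_observed)
```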

  9. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of homonymous feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images with their corresponding point clouds, are the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an app based on Android to take pictures and record the related classification information. Secondly, all the images are automatically grouped with the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of the image and its corresponding laser radar point cloud is realized. Finally, the mapping relationship between the global image, the local image, and the intensity image is established according to homonymous feature points. In this way, we can establish a data structure linking the global image, the local images within the global image, and the point cloud corresponding to each local image, and carry out visualization, management, and querying of the images.

  10. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

    Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model-output statistics, is discussed.

  11. Variable Stars in Large Magellanic Cloud Globular Clusters. III. Reticulum

    NASA Astrophysics Data System (ADS)

    Kuehn, Charles A.; Dame, Kyra; Smith, Horace A.; Catelan, Márcio; Jeon, Young-Beom; Nemec, James M.; Walker, Alistair R.; Kunder, Andrea; Pritzl, Barton J.; De Lee, Nathan; Borissova, Jura

    2013-06-01

    This is the third in a series of papers studying the variable stars in old globular clusters in the Large Magellanic Cloud. The primary goal of this series is to look at how the characteristics and behavior of RR Lyrae stars in Oosterhoff-intermediate systems compare to those of their counterparts in Oosterhoff-I/II systems. In this paper we present the results of our new time-series BVI photometric study of the globular cluster Reticulum. We found a total of 32 variable stars (22 RRab, 4 RRc, and 6 RRd stars) in our field of view. We present photometric parameters and light curves for these stars. We also present physical properties, derived from Fourier analysis of light curves, for some of the RR Lyrae stars. We discuss the Oosterhoff classification of Reticulum and use our results to re-derive the distance modulus and age of the cluster. Based on observations taken with the SMARTS 1.3 m telescope operated by the SMARTS Consortium and observations taken at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  12. Context-aware distributed cloud computing using CloudScheduler

    NASA Astrophysics Data System (ADS)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.

  13. Impacts and Opportunities for Engineering in the Era of Cloud Computing Systems

    DTIC Science & Technology

    2012-01-31

    Impacts and Opportunities for Engineering in the Era of Cloud Computing Systems: A Report to the U.S. Department... 2.1.7 Engineering of Computational Behavior... 2.2 How the Cloud Will Impact Systems... Executive Summary: This report discusses the impact of cloud computing and the broader revolution in computing on systems, on the disciplines of...

  14. Temporal variation of the cloud top height over the tropical Pacific observed by geostationary satellites

    NASA Astrophysics Data System (ADS)

    Nishi, N.; Hamada, A.

    2012-12-01

    Stratiform clouds (nimbostratus and cirriform clouds) in the upper troposphere accompanied by cumulonimbus activity cover a large part of the tropical region and strongly affect the radiation and water vapor budgets there. Recently, new satellites (CloudSat and CALIPSO) have been able to provide information on cloud height and cloud ice amount even over the open ocean. However, their coverage is limited to just below the satellite paths, so it is difficult to capture the whole shape and to trace the lifecycle of each cloud system using these datasets alone. As a complementary product, we made a dataset of cloud top height and visible optical thickness with one-hour resolution over a wide region, using infrared split-window data from the geostationary satellites (AGU Fall Meeting 2011), and released it on the internet (http://database.rish.kyoto-u.ac.jp/arch/ctop/). We made lookup tables for estimating cloud top height with geostationary infrared observations alone by comparing them with the direct cloud observations by CloudSat (Hamada and Nishi, 2010, JAMC). We picked out the same-time observations by MTSAT and CloudSat and regressed the CloudSat cloud top height observations onto the 11 μm brightness temperature (Tb) and the difference between the 11 μm Tb and the 12 μm Tb. We refer to our estimated cloud top height as "CTOP" below. The coverage area is 85E-155W (MTSAT2) and 80E-160W (MTSAT1R), and 20S-20N. The accuracy of the estimation with the IR split-window observation is best in the upper tropospheric height range. We analyzed the formation and maintenance of cloud systems whose top height is in the upper troposphere with our CTOP analysis, CloudSat 2B-GEOPROF, and GSMaP (Global Satellite Mapping of Precipitation) precipitation data. Most of the upper tropospheric stratiform clouds have their cloud top within the 13-15 km range. The cloud top height decreases slowly when dissipating but still retains a high value to the end. However, we sometimes observe that a somewhat lower cloud top height (6-10 km) is maintained for one to two days. A typical example was observed on 5 January 2011 in a dissipating cloud system with 1000-km scale. This cluster was located between 0-10N just west of the International Date Line and moved westward while keeping a relatively low cloud top (6-10 km) for over one day. This top height is lower than that of the ubiquitous upper-tropospheric stratiform clouds but higher than that of the so-called 'congestus clouds' whose top height is around 0 °C. CloudSat data show the presence of convective rainfall. This suggests that the cloud system continuously kept making new anvil clouds at a slightly lower height than usual. We examined the seasonal variation of the distribution of cloud systems with somewhat lower cloud top heights (6-11 km) during 2010-11. The number of such cloud systems is not constant across seasons but increased markedly in some specific seasons. Over the equatorial ocean region (east of 150E), they were frequently observed during the northern winter.
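
    A minimal sketch of the regression step described above is given below; it fits a simple linear form of cloud top height against the 11 μm brightness temperature and the split-window difference, whereas the released CTOP product is based on lookup tables, so the functional form here is only illustrative.

```python
import numpy as np

def fit_ctop_model(tb11, tb12, cloudsat_top_height):
    """Least-squares fit of CloudSat cloud top height against the 11 um brightness
    temperature and the 11-12 um split-window difference (illustrative linear form)."""
    tb11, tb12 = np.asarray(tb11, float), np.asarray(tb12, float)
    X = np.column_stack([np.ones_like(tb11), tb11, tb11 - tb12])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(cloudsat_top_height, float), rcond=None)
    return coeffs

def estimate_ctop(coeffs, tb11, tb12):
    """Apply the fitted relation to new geostationary brightness temperatures."""
    return coeffs[0] + coeffs[1] * tb11 + coeffs[2] * (tb11 - tb12)
```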

  15. Improving rainfall estimation from commercial microwave links using METEOSAT SEVIRI cloud cover information

    NASA Astrophysics Data System (ADS)

    Boose, Yvonne; Doumounia, Ali; Chwala, Christian; Moumouni, Sawadogo; Zougmoré, François; Kunstmann, Harald

    2017-04-01

    The number of rain gauges is declining worldwide. A recent promising method for alternative precipitation measurement is to derive rain rates from the attenuation of the microwave signal between remote antennas of mobile phone base stations, so-called commercial microwave links (CMLs). In European countries, such as Germany, the CML technique can be used as a complementary method to the existing gauge and radar networks, improving their products, for example, in mountainous terrain and urban areas. In West African countries, where a dense gauge or radar network is absent, the number of mobile phone users is rapidly increasing and so are the CML networks. Hence, CML-derived precipitation measurements have high potential for applications such as flood warning and support of agricultural planning in this region. For typical CML frequencies (10-40 GHz), the relationship of attenuation to rain rate is quasi-linear. However, humidity, wet antennas, or electronic noise can also lead to signal fluctuations. To distinguish these fluctuations from actual attenuation due to rain, a temporal wet (rain event occurred)/dry (no rain event) classification is usually necessary. In dense CML networks this is possible by correlating neighboring CML time series. Another option is to use the correlation between signal time series of different frequencies or bidirectional signals. The CML network in rural areas is typically not dense enough for correlation analysis, and often only one polarization and one frequency are available along a CML. In this work we therefore use cloud cover information derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) radiometer onboard the geostationary satellite METEOSAT for a wet (pixels along the link are cloud covered)/dry (no cloud along the link) classification. We compare results for CMLs in Burkina Faso and Germany, which differ meteorologically (rain rate and duration, droplet size distributions) and technically (CML frequencies, lengths, signal level), and use rain gauge data as ground truth for validation.
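
    The sketch below illustrates how such a satellite-based wet/dry flag could be combined with the standard k-R power law to turn path attenuation into a rain rate; the coefficients a and b are placeholders (in practice they depend on frequency, polarization, and drop size distribution), and the quasi-linear case corresponds to b close to 1.

```python
def rain_rate_from_attenuation(path_attenuation_db, link_length_km,
                               link_is_cloud_covered, a=0.12, b=1.0):
    """Convert path-integrated attenuation (dB) along one CML into a rain rate (mm/h)
    using the power law k = a * R**b, where k is the specific attenuation in dB/km.
    The wet/dry decision comes from the cloud mask along the link."""
    if not link_is_cloud_covered:          # dry period: attribute fluctuations to noise
        return 0.0
    k = max(path_attenuation_db, 0.0) / link_length_km
    return (k / a) ** (1.0 / b)

# Example: 3 dB of attenuation over a 10 km link flagged as cloud covered
# rate = rain_rate_from_attenuation(3.0, 10.0, True)
```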

  16. On the Suitability of Mobile Cloud Computing at the Tactical Edge

    DTIC Science & Technology

    2014-04-23

    geolocation; Facial recognition (photo identification/classification); Intelligence, Surveillance, and Reconnaissance (ISR); and Fusion of Electronic...could benefit most from MCC are those with large processing overhead, low bandwidth requirements, and a need for large database support (e.g., facial...recognition, language translation). The effect—specifically on the communication links—of supporting these applications at the tactical edge

  17. Remote Sensing of Cloud, Aerosol, and Land Properties from MODIS: Applications to the East Asia Region

    NASA Technical Reports Server (NTRS)

    King, Michael D.; Platnick, Steven; Moody, Eric G.

    2002-01-01

    MODIS is an earth-viewing cross-track scanning spectroradiometer launched on the Terra satellite in December 1999 and the Aqua satellite in May 2002. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, sun-synchronous, platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 microns with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean, and atmospheric processes. In this paper we will describe the various methods being used for the remote sensing of cloud, aerosol, and surface properties using MODIS data, focusing primarily on (i) the MODIS cloud mask used to distinguish clouds, clear sky, heavy aerosol, and shadows on the ground, (ii) cloud optical properties, especially cloud optical thickness and effective radius of water drops and ice crystals, (iii) aerosol optical thickness and size characteristics both over land and ocean, and (iv) ecosystem classification and surface spectral reflectance. The physical principles behind the determination of each of these products will be described, together with an example of their application using MODIS observations to the east Asian region. All products are archived into two categories: pixel-level retrievals (referred to as Level-2 products) and global gridded products at a latitude and longitude resolution of 1 min (Level-3 products).

  18. Feasibility and demonstration of a cloud-based RIID analysis system

    NASA Astrophysics Data System (ADS)

    Wright, Michael C.; Hertz, Kristin L.; Johnson, William C.; Sword, Eric D.; Younkin, James R.; Sadler, Lorraine E.

    2015-06-01

    A significant limitation in the operational utility of handheld and backpack radioisotope identifiers (RIIDs) is the inability of their onboard algorithms to accurately and reliably identify the isotopic sources of the measured gamma-ray energy spectrum. A possible solution is to move the spectral analysis computations to an external device, the cloud, where significantly greater capabilities are available. The implementation and demonstration of a prototype cloud-based RIID analysis system have shown this type of system to be feasible with currently available communication and computational technology. A system study has shown that the potential user community could derive significant benefits from an appropriately implemented cloud-based analysis system and has identified the design and operational characteristics required by the users and stakeholders for such a system. A general description of the hardware and software necessary to implement reliable cloud-based analysis, the value of the cloud expressed by the user community, and the aspects of the cloud implemented in the demonstrations are discussed.

  19. Enabling Earth Science Through Cloud Computing

    NASA Technical Reports Server (NTRS)

    Hardman, Sean; Riofrio, Andres; Shams, Khawaja; Freeborn, Dana; Springer, Paul; Chafin, Brian

    2012-01-01

    Cloud Computing holds tremendous potential for missions across the National Aeronautics and Space Administration. Several flight missions are already benefiting from an investment in cloud computing for mission critical pipelines and services through faster processing time, higher availability, and drastically lower costs available on cloud systems. However, these processes do not currently extend to general scientific algorithms relevant to earth science missions. The members of the Airborne Cloud Computing Environment task at the Jet Propulsion Laboratory have worked closely with the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to integrate cloud computing into their science data processing pipeline. This paper details the efforts involved in deploying a science data system for the CARVE mission, evaluating and integrating cloud computing solutions with the system and porting their science algorithms for execution in a cloud environment.

  20. Overview of the CERES Edition-4 Multilayer Cloud Property Datasets

    NASA Astrophysics Data System (ADS)

    Chang, F. L.; Minnis, P.; Sun-Mack, S.; Chen, Y.; Smith, R. A.; Brown, R. R.

    2014-12-01

    Knowledge of the cloud vertical distribution is important for understanding the role of clouds on earth's radiation budget and climate change. Since high-level cirrus clouds with low emission temperatures and small optical depths can provide a positive feedback to a climate system and low-level stratus clouds with high emission temperatures and large optical depths can provide a negative feedback effect, the retrieval of multilayer cloud properties using satellite observations, like Terra and Aqua MODIS, is critically important for a variety of cloud and climate applications. For the objective of the Clouds and the Earth's Radiant Energy System (CERES), new algorithms have been developed using Terra and Aqua MODIS data to allow separate retrievals of cirrus and stratus cloud properties when the two dominant cloud types are simultaneously present in a multilayer system. In this paper, we will present an overview of the new CERES Edition-4 multilayer cloud property datasets derived from Terra as well as Aqua. Assessment of the new CERES multilayer cloud datasets will include high-level cirrus and low-level stratus cloud heights, pressures, and temperatures as well as their optical depths, emissivities, and microphysical properties.

  1. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
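
    As an illustration of the first step (pixel-level cloud detection), the sketch below trains a generic classifier on hand-labelled sky-image pixels; the feature set (RGB plus red-to-blue ratio) and the choice of a random forest are assumptions made for the example, not the classifier actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb_image):
    """Per-pixel features: the three color channels plus the red-to-blue ratio,
    a common cloud/clear discriminator in sky-imager processing."""
    rgb = rgb_image.reshape(-1, 3).astype(float)
    red_blue_ratio = rgb[:, 0] / (rgb[:, 2] + 1e-6)
    return np.column_stack([rgb, red_blue_ratio])

# Training uses hand-labelled pixels (1 = cloud, 0 = clear sky):
# clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(pixel_features(training_image), training_labels.ravel())
# cloud_mask = clf.predict(pixel_features(new_image)).reshape(new_image.shape[:2])
```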

  2. Evaluation of Passive Multilayer Cloud Detection Using Preliminary CloudSat and CALIPSO Cloud Profiles

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.

    2006-12-01

    During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors such as cloud radars and lidars that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze such overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.

  4. Application for 3D Scene Understanding in Detecting Discharge of Domestic Waste Along Complex Urban Rivers

    NASA Astrophysics Data System (ADS)

    Ninsalam, Y.; Qin, R.; Rekittke, J.

    2016-06-01

    In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services, which have caused those living along the river to rely upon it for waste disposal. However, there has been little research to understand the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low-cost sensor to identify point-source discharge of solid waste using image classification methods. To demonstrate this we describe the following five-step method: 1) a strip of GoPro images is captured photogrammetrically and processed for dense point cloud generation; 2) depth for each image is generated through a backward projection of the point clouds; 3) a supervised image classification method based on a Random Forest classifier is applied to the view-dependent red, green, blue and depth (RGB-D) data; 4) point discharge locations of solid waste can then be mapped by projecting the classified images to the 3D point clouds; 5) the landscape elements are classified into five types: vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results have demonstrated that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.
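
    A minimal sketch of step 3, the supervised Random Forest classification on RGB-D data, is given below; the per-pixel feature stacking and the five class names follow the abstract, while the training-data variables and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["vegetation", "human settlement", "soil", "water", "solid waste"]

def rgbd_samples(rgb_image, depth_image):
    """Stack per-pixel red, green, blue and depth values into one feature row per pixel."""
    rgb = rgb_image.reshape(-1, 3).astype(float)
    depth = depth_image.reshape(-1, 1).astype(float)
    return np.hstack([rgb, depth])

# Labelled training pixels would come from manually digitised polygons:
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(rgbd_samples(train_rgb, train_depth), train_labels.ravel())
# predicted = clf.predict(rgbd_samples(rgb, depth)).reshape(rgb.shape[:2])
```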

  5. Study of the thermodynamic phase of hydrometeors in convective clouds in the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Ferreira, W. C.; Correia, A. L.; Martins, J.

    2012-12-01

    Aerosol-cloud interactions are responsible for large uncertainties in climate models. One key factor when studying clouds perturbed by aerosols is determining the thermodynamic phase of hydrometeors as a function of temperature or height in the cloud. Conventional remote sensing can provide information on the thermodynamic phase of clouds over large areas, but it lacks the precision needed to understand how a single, real cloud evolves. Here we present mappings of the thermodynamic phase of droplets and ice particles in individual convective clouds in the Amazon Basin, obtained by analyzing the emerging infrared radiance on cloud sides (Martins et al., 2011). In flights over the Amazon Basin with a research aircraft, Martins et al. (2011) used imaging radiometers with spectral filters to record the emerging radiance on cloud sides at the wavelengths of 2.10 and 2.25 μm. Due to the differential absorption and scattering of these wavelengths by hydrometeors in the liquid or solid phase, the intensity ratio between images recorded at the two wavelengths can be used as a proxy for the thermodynamic phase of these hydrometeors. In order to analyze the acquired dataset we used MATLAB tools, developing scripts to handle the data files and derive the thermodynamic phase. In some cases parallax effects due to aircraft movement required additional data processing before calculating ratios. Only well illuminated scenes were considered, i.e. images acquired as close as possible to the backscatter vector from the incident solar radiation. It is important to note that the intensity ratio values corresponding to a given thermodynamic phase can vary from cloud to cloud (Martins et al., 2011); however, inside the same cloud the distinction between ice, water and mixed phase is clear. Analyzing histograms of 2.10/2.25 μm reflectance ratios in selected cases, we found averages typically between 0.3 and 0.4 for ice-phase hydrometeors, and between 0.5 and 0.7 for water-phase droplets, consistent with the findings in Martins et al. (2011). Figure 1 shows an example of the thermodynamic phase classification obtained with this technique (image of the ratio of reflectances at 2.10/2.25 μm). These experimental results can potentially be used in fast derivations of thermodynamic phase mappings in deep convective clouds, providing useful information for studies regarding aerosol-cloud interactions.
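
    A short sketch of the ratio-based phase classification is given below; the threshold values are illustrative mid-points of the ranges quoted above (ice roughly 0.3-0.4, liquid roughly 0.5-0.7) and, as noted, would have to be tuned per cloud.

```python
import numpy as np

def phase_from_ratio(img_210, img_225, ice_max=0.45, water_min=0.50):
    """Classify hydrometeor phase from the 2.10/2.25 um reflectance ratio.
    Pixels between the two thresholds are labelled as mixed phase."""
    ratio = img_210.astype(float) / np.clip(img_225.astype(float), 1e-6, None)
    phase = np.full(ratio.shape, "mixed", dtype=object)
    phase[ratio <= ice_max] = "ice"
    phase[ratio >= water_min] = "liquid"
    return ratio, phase
```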

  6. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating their quality.
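
    The sketch below illustrates the general idea of losslessly storing one per-pixel channel of such a 2D mapping as a 16-bit PNG with Pillow; the quantization, the grid layout (rows indexed by scanner mirror step, columns by GPS time) and the file name are assumptions made for the example, not the paper's exact encoding.

```python
import numpy as np
from PIL import Image

def encode_channel_png(values, path):
    """Quantize one per-pixel channel (e.g. range or elevation on the time/angle grid)
    to 16 bits and store it losslessly as PNG; the scale and offset must be kept as
    metadata so the original values can be reconstructed on decoding."""
    vmin, vmax = np.nanmin(values), np.nanmax(values)
    scaled = (values - vmin) / max(vmax - vmin, 1e-12) * 65534.0 + 1.0
    scaled = np.where(np.isnan(values), 0.0, np.round(scaled))   # 0 marks empty pixels
    Image.fromarray(scaled.astype(np.uint16)).save(path)
    return vmin, vmax

# Example with a hypothetical grid of ranges containing some empty pixels:
# scale_info = encode_channel_png(range_grid, "range_channel.png")
```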

  7. A quality control system for digital elevation data

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Kokkendorf, Simon; Flatman, Andrew; Nielsen, Thorbjørn; Rosenkranz, Brigitte; Keller, Kristian

    2015-04-01

    In connection with the introduction of a new version of the Danish national coverage Digital Elevation Model (DK-DEM), the Danish Geodata Agency has developed a comprehensive quality control (QC) and metadata production (MP) system for LiDAR point cloud data. The architecture of the system reflects its origin in a national mapping organization where raw data deliveries are typically outsourced to external suppliers. It also reflects a design decision of aiming, whenever conceivable, at full spatial coverage tests rather than scattered sample checks. Hence, the QC procedure is split into two phases: a reception phase and an acceptance phase. The primary aim of the reception phase is to do a quick assessment of things that can typically go wrong and which are relatively simple to check: data coverage, data density, and strip adjustment. If a data delivery passes the reception phase, the QC continues with the acceptance phase, which checks five different aspects of the point cloud data: vertical accuracy, vertical precision, horizontal accuracy, horizontal precision, and point classification correctness. The vertical descriptors are comparatively simple to measure: the vertical accuracy is checked by direct comparison with previously surveyed patches, and the vertical precision is derived from the observed variance on well defined flat surface patches. These patches are automatically derived from the road centerlines registered in FOT, the official Danish map data base. The horizontal descriptors are less straightforward to measure, since potential reference material for direct comparison is typically expected to be less accurate than the LiDAR data. The solution selected is to compare photogrammetrically derived roof centerlines from FOT with LiDAR derived roof centerlines. These are constructed by taking the 3D Hough transform of a point cloud patch defined by the photogrammetrical roof polygon; the LiDAR derived roof centerline is then the intersection line of the two primary planes of the transformed data. Since the photogrammetrical and the LiDAR derived roof centerline sets are independently derived, a low RMS difference indicates that both data sets are of very high accuracy. The horizontal precision is derived by doing a similar comparison between LiDAR derived roof centerlines in the overlap zone of neighbouring flight strips. Contrary to the vertical and horizontal descriptors, the point classification correctness is neither geometric nor well defined. In this case we must resolve the problem by introducing a human in the loop and presenting data in a form that is as useful as possible to this human. Hence, the QC system produces maps of suspicious patterns such as vegetation below buildings; points classified as buildings where no building is registered in the map data base; building polygons from the map data base without any building points; and buildings on roads. All elements of the QC process are carried out in smaller tiles (typically 1 km × 1 km) and are hence trivially parallelizable. Results from the parallel executing processes are collected in a geospatial data base system (PostGIS), and the progress can be analyzed and visualized in a desktop GIS while the processes run. Implementation-wise, the system is based on open source components, primarily from the OSGeo stack (GDAL, PostGIS, QGIS, NumPy, SciPy, etc.). The system specific code is also being open sourced. 
This open source distribution philosophy supports the parallel execution paradigm, since all available hardware can be utilized without any licensing problems. As yet, the system has only been used for QC of the first part of a new Danish elevation model. The experience has, however, been very positive. Especially notable is the utility of doing full spatial coverage tests (rather than scattered sample checks). This means that error detection and error reports are exactly as spatial as the point cloud data they concern, which makes it very easy for both the data receiver and the data provider to discuss and reason about the nature and causes of irregularities.
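
    As an illustration of the two simplest descriptors above, the sketch below computes a vertical accuracy (bias and RMSE against surveyed control patches) and a vertical precision (dispersion of residuals about a best-fit plane on a flat road patch); the exact estimators used by the agency's system may differ.

```python
import numpy as np

def vertical_accuracy(lidar_z, reference_z):
    """Bias and RMSE of LiDAR elevations against surveyed control elevations."""
    dz = np.asarray(lidar_z, float) - np.asarray(reference_z, float)
    return dz.mean(), np.sqrt(np.mean(dz ** 2))

def vertical_precision(patch_xyz):
    """Standard deviation of residuals about a best-fit plane on a flat road patch."""
    xyz = np.asarray(patch_xyz, float)
    A = np.column_stack([xyz[:, 0], xyz[:, 1], np.ones(len(xyz))])
    coeffs, *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)
    return np.std(xyz[:, 2] - A @ coeffs)
```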

  8. Development the EarthCARE aerosol classification scheme

    NASA Astrophysics Data System (ADS)

    Wandinger, Ulla; Baars, Holger; Hünerbein, Anja; Donovan, Dave; van Zadelhoff, Gerd-Jan; Fischer, Jürgen; von Bismarck, Jonas; Eisinger, Michael; Lajas, Dulce; Wehr, Tobias

    2015-04-01

    The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission is a joint ESA/JAXA mission planned to be launched in 2018. The multi-sensor platform carries a cloud-profiling radar (CPR), a high-spectral-resolution cloud/aerosol lidar (ATLID), a cloud/aerosol multi-spectral imager (MSI), and a three-view broad-band radiometer (BBR). Three out of the four instruments (ATLID, MSI, and BBR) will be able to sense the global aerosol distribution and contribute to the overarching EarthCARE goals of sensor synergy and radiation closure with respect to aerosols. The high-spectral-resolution lidar ATLID obtains profiles of particle extinction and backscatter coefficients, lidar ratio, and linear depolarization ratio as well as the aerosol optical thickness (AOT) at 355 nm. MSI provides AOT at 670 nm (over land and ocean) and 865 nm (over ocean). Next to these primary observables the aerosol type is one of the required products to be derived from both lidar stand-alone and ATLID-MSI synergistic retrievals. ATLID measurements of the aerosol intensive properties (lidar ratio, depolarization ratio) and ATLID-MSI observations of the spectral AOT will provide the basic input for aerosol-type determination. Aerosol typing is needed for the quantification of anthropogenic versus natural aerosol loadings of the atmosphere, the investigation of aerosol-cloud interaction, assimilation purposes, and the validation of atmospheric transport models which carry components like dust, sea salt, smoke and pollution. Furthermore, aerosol classification is a prerequisite for the estimation of direct aerosol radiative forcing and radiative closure studies. With an appropriate underlying microphysical particle description, the categorization of aerosol observations into predefined aerosol types allows us to infer information needed for the calculation of shortwave radiative effects, such as mean particle size, single-scattering albedo, and spectral conversion factors. In order to ensure the consistency of EarthCARE retrievals, to support aerosol description in the EarthCARE simulator ECSIM, and to facilitate a uniform specification of broad-band aerosol optical properties, a hybrid end-to-end aerosol classification model (HETEAC) is developed which serves as a baseline for EarthCARE algorithm development and evaluation procedures. The model's theoretical description of aerosol microphysics (bi-modal size distribution, spectral refractive index, and particle shape distribution) is adjusted to experimental data of aerosol optical properties, i.e. lidar ratio, depolarization ratio, Ångström exponents (hybrid approach). The experimental basis is provided by ground-based observations with sophisticated multi-wavelength, polarization lidars applied in the European Aerosol Research Lidar Network (EARLINET) and in dedicated field campaigns in the Sahara (SAMUM-1), Cape Verde (SAMUM-2), Barbados (SALTRACE), Atlantic Ocean (Polarstern and Meteor cruises), and Amazonia. The model is designed such that it covers the entire loop from aerosol microphysics via aerosol classification to optical and radiative properties of the respective types and allows consistency checks of modeled and measured parameters (end-to-end approach). Optical modeling considers scattering properties of spherical and non-spherical particles. A suitable set of aerosol types is defined which includes dust, clean marine, clean continental, pollution, smoke, and stratospheric aerosol. Mixtures of these types are included as well. 
The definition is consistent with CALIPSO approaches and will thus enable the establishment of a long-term global four-dimensional aerosol dataset.
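
    As an illustration of the bi-modal size distribution underlying such a hybrid aerosol model, the following is a minimal Python sketch; the mode radii, widths and number concentrations are illustrative placeholders, not HETEAC values:

      import numpy as np

      def bimodal_lognormal(r, n_fine, r_fine, s_fine, n_coarse, r_coarse, s_coarse):
          """Number size distribution dN/dln(r) as the sum of two lognormal modes."""
          def mode(n, r_m, s):
              return (n / (np.sqrt(2.0 * np.pi) * np.log(s))) * \
                  np.exp(-0.5 * (np.log(r / r_m) / np.log(s)) ** 2)
          return mode(n_fine, r_fine, s_fine) + mode(n_coarse, r_coarse, s_coarse)

      # Example: a fine (pollution-like) and a coarse (dust-like) mode
      r = np.logspace(-2, 1, 200)          # radius in micrometres
      dndlnr = bimodal_lognormal(r, n_fine=1000.0, r_fine=0.07, s_fine=1.6,
                                 n_coarse=1.0, r_coarse=1.5, s_coarse=2.0)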

  9. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to serve Earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
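
    As an illustration of the object-storage access pattern described above (and not the authors' client library), the following minimal Python sketch reads one HDF5 dataset directly from an S3 bucket using s3fs and h5py; the bucket, file and dataset names are hypothetical:

      import h5py
      import s3fs

      fs = s3fs.S3FileSystem(anon=True)                  # public, unauthenticated access
      with fs.open("example-bucket/merra2/tavg1_2d_slv.h5", "rb") as f:
          with h5py.File(f, "r") as h5:
              t2m = h5["T2M"][0, :, :]                   # read one 2-D slice on demand
              print(t2m.shape, t2m.dtype)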

  10. Characterization of the cloud conditions at Ny-Ålesund using sensor synergy and representativeness of the observed clouds across Arctic sites

    NASA Astrophysics Data System (ADS)

    Nomokonova, Tatiana; Ebell, Kerstin; Löhnert, Ulrich; Maturilli, Marion

    2017-04-01

    Clouds are one of the crucial components of the hydrological and energy cycles and thus affect the global climate. Their particular importance in Arctic regions stems from their influence on the radiation budget. Arctic clouds usually occur at low altitudes and often contain high concentrations of small liquid droplets. During the winter, spring, and autumn periods such clouds tend to trap long-wave radiation in the atmosphere and thus warm the Arctic climate. In summer, though, clouds efficiently scatter the solar radiation back to space and therefore induce a cooling effect. An accurate characterization of the net effect of clouds on the Arctic climate requires long-term and precise observations. However, only a few measurement sites exist which perform continuous, vertically resolved observations of clouds in the Arctic, e.g. in Alaska, Canada, and Greenland. These sites typically make use of a combination of different ground-based remote sensing instruments, e.g. cloud radar, ceilometer and microwave radiometer, in order to characterize clouds. Within the Transregional Collaborative Research Center (TR 172) "Arctic Amplification: Climate Relevant Atmospheric and Surface Processes, and Feedback Mechanisms (AC)3", comprehensive observations of the atmospheric column are performed at the German-French Research Station AWIPEV at Ny-Ålesund, Svalbard. Ny-Ålesund is located in the warmest part of the Arctic, where the climate is significantly influenced by heating from the warm ocean. Thus, measurements at Ny-Ålesund will complement our understanding of cloud formation and development in the Arctic. This particular study is devoted to the characterization of the cloud macro- and microphysical properties at Ny-Ålesund and of the atmospheric conditions under which these clouds form and develop. To this end, the information from the various instruments at the AWIPEV observatory is synergistically analysed: information about the thermodynamic structure of the atmosphere is obtained from long-term radiosonde launches. In addition, continuous vertical profiles of temperature and humidity are provided by the microwave radiometer HATPRO. A set of active remote sensing instruments performs cloud observations at Ny-Ålesund: a ceilometer and a Doppler lidar, operating since 2011 and 2013, respectively, are now complemented with a novel 94 GHz FMCW cloud radar. As a first step, the CLOUDNET algorithms, including target categorization and classification, are applied to the observations. In this study, we will present a first analysis of cloud properties at Ny-Ålesund including, for example, cloud occurrence, cloud geometry (cloud base, cloud top, and thickness) and cloud type (liquid, ice, mixed-phase). The different types of clouds are set into the context of environmental conditions such as temperature, amount of water vapour, and liquid water. We also expect that the cloud properties strongly depend on the wind direction. The first results of this analysis will also be shown.

  11. DD 13 - A very young and heavily reddened early O star in the Large Magellanic Cloud

    NASA Technical Reports Server (NTRS)

    Conti, Peter S.; Fitzpatrick, Edward L.

    1991-01-01

    This paper investigates the Large Magellanic Cloud star DD 13, which is likely the major ionizing source of the nebula N159A. New optical spectroscopy and new estimates of the broadband photometric properties of DD 13 are obtained. A spectral type of O3-O6 V, E(B-V) = 0.64, and M(V) = -6.93 is found. The spectral type cannot be more precisely defined due to contamination of the spectral data by nebular emission, obliterating the important He I classification lines. These results, plus a published estimate of the Lyman continuum photon injection rate into N159A, suggest that DD 13 actually consists of about 2-4 young, early O stars still enshrouded by their natal dust cloud. The star DD 13 may be a younger example of the type of tight cluster represented by the LMC 'star' Sk-66 deg 41, recently revealed to be composed of six or more components.

  12. A Platform for Scalable Satellite and Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  13. Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers

    NASA Technical Reports Server (NTRS)

    Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino

    2012-01-01

    Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).

  14. Atmospheric movies acquired at the Mars Science Laboratory landing site: Cloud morphology, frequency and significance to the Gale Crater water cycle and Phoenix mission results

    NASA Astrophysics Data System (ADS)

    Moores, John E.; Lemmon, Mark T.; Rafkin, Scot C. R.; Francis, Raymond; Pla-Garcia, Jorge; de la Torre Juárez, Manuel; Bean, Keri; Kass, David; Haberle, Robert; Newman, Claire; Mischna, Michael; Vasavada, Ashwin; Rennó, Nilton; Bell, Jim; Calef, Fred; Cantor, Bruce; Mcconnochie, Timothy H.; Harri, Ari-Matti; Genzer, Maria; Wong, Michael; Smith, Michael D.; Javier Martín-Torres, F.; Zorzano, María-Paz; Kemppinen, Osku; McCullough, Emily

    2015-05-01

    We report on the first 360 sols (LS 150° to 5°), representing just over half a Martian year, of atmospheric monitoring movies acquired using the NavCam imager on the Mars Science Laboratory (MSL) Rover Curiosity. Such movies reveal faint clouds that are difficult to discern in single images. The data set acquired was divided into two different classifications depending upon the orientation and intent of the observation. Up to sol 360, 73 Zenith movies and 79 Supra-Horizon movies have been acquired, and time-variable features could be discerned in 25 of each. The data set from MSL is compared to similar observations made by the Surface Stereo Imager (SSI) onboard the Phoenix Lander and suggests a much drier environment at Gale Crater (4.6°S) during this season than was observed in Green Valley (68.2°N), as would be expected based on latitude and the global water cycle. The optical depth of the variable component of clouds seen in images with features is up to 0.047 ± 0.009, with a granularity to the observed features that averages 3.8°. MCS also observes clouds of comparable optical depth at 30 and 50 km during the same period, which would suggest a cloud spacing of 2.0 to 3.3 km. Multiple motions visible in atmospheric movies support the presence of two distinct layers of clouds. At Gale Crater, these clouds are likely caused by atmospheric waves, given the regular spacing of features observed in many Zenith movies and the decreased spacing towards the horizon in sunset movies, consistent with clouds forming at a constant elevation. Reanalysis of Phoenix data in the light of the NavCam equatorial dataset suggests that clouds may have been more frequent in the earlier portion of the Phoenix mission than was previously thought.
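
    The quoted cloud spacing follows from simple geometry if the mean angular granularity of 3.8° is projected to the MCS cloud altitudes, spacing ≈ altitude × tan(angular spacing); a minimal check in Python:

      import math

      angular_spacing_deg = 3.8           # mean angular granularity from the movies
      for altitude_km in (30.0, 50.0):    # MCS cloud altitudes cited above
          spacing_km = altitude_km * math.tan(math.radians(angular_spacing_deg))
          print(f"{altitude_km:.0f} km altitude -> {spacing_km:.1f} km spacing")
      # 30 km -> 2.0 km, 50 km -> 3.3 km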

  15. Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application

    NASA Astrophysics Data System (ADS)

    Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.

    2013-12-01

    The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and the local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on both the Cloud and the local system, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. The computing cost is calculated from an hourly rate, and the storage cost from a per-gigabyte-per-month rate. Incoming data transfer is free, while data transfer out is charged per gigabyte. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating costs. The results showed that the Cloud platform had 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
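
    A minimal sketch of the cost components described above (compute billed by the hour, storage by gigabyte-month, outbound transfer by gigabyte, inbound transfer free); all rates and usage figures are hypothetical placeholders, not AWS or GES DISC numbers:

      def monthly_cloud_cost(instance_hours, hourly_rate,
                             storage_gb, storage_rate_gb_month,
                             egress_gb, egress_rate_gb):
          compute = instance_hours * hourly_rate
          storage = storage_gb * storage_rate_gb_month
          egress = egress_gb * egress_rate_gb      # inbound transfer is free
          return compute + storage + egress

      print(monthly_cloud_cost(instance_hours=720, hourly_rate=0.10,
                               storage_gb=500, storage_rate_gb_month=0.03,
                               egress_gb=50, egress_rate_gb=0.09))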

  16. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle to the adoption of cloud computing by healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful web services are an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to become the de facto standard for securing cloud computing and mobile applications, and has been described as the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should achieve a security level equivalent to that of the traditional computing model.
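
    For illustration, a minimal sketch of the OpenID Connect authorization-code token exchange that such a service relies on, written with the Python requests library; the provider URL, client credentials and authorization code are hypothetical:

      import requests

      resp = requests.post(
          "https://openid-provider.example.org/token",
          data={
              "grant_type": "authorization_code",
              "code": "AUTH_CODE_FROM_REDIRECT",
              "redirect_uri": "https://di-r.example.org/callback",
              "client_id": "pacs-mobile-client",
              "client_secret": "CLIENT_SECRET",
          },
          timeout=10,
      )
      tokens = resp.json()
      id_token = tokens["id_token"]          # JWT asserting the user's identity
      access_token = tokens["access_token"]  # used to call the DI-r / PACS REST APIs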

  17. Feature detection in satellite images using neural network technology

    NASA Technical Reports Server (NTRS)

    Augusteijn, Marijke F.; Dimalanta, Arturo S.

    1992-01-01

    A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
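
    A minimal Python sketch of the described pipeline, using scikit-image (>= 0.19) co-occurrence features and a scikit-learn multilayer perceptron standing in for the cascade-correlation classifier, which scikit-learn does not provide:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.neural_network import MLPClassifier

      def texture_features(segment_u8):
          """Second-order grey-level statistics from a co-occurrence matrix."""
          glcm = graycomatrix(segment_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

      # segments: list of uint8 image tiles; labels: cloud-texture classes
      # X = np.vstack([texture_features(s) for s in segments])
      # clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, labels)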

  18. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite image processing for retrieving atmospheric and lithospheric parameters. Currently, some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), to incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including the Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF) for highly accurate cloud detection on remote sensing images. FLFRF first fuses the visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels into cloud, snow/ice and background, or thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and feeds each set of features into an RF model. Then, it retains the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that cloud detection accuracy improves after adding RELBP_CI to the input feature set. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods, Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods. The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K-nearest neighbor show overestimation in predicting cloud and snow/ice pixels, our Random Forest (RF) based models can achieve higher cloud and snow/ice kappa values on MODIS images and thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images, while the existing cloud detection algorithm, Fmask, cannot discriminate them. Compared to the state-of-the-art methods, our algorithms have acquired higher average cloud and snow/ice kappa values for different spatial resolutions.
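
    A minimal scikit-learn sketch of the two fusion strategies described above (not the authors' implementation); feature arrays are assumed to be per-pixel rows and labels small non-negative integer class codes:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def flf_rf(vis, ir, thermal, labels):
          """Feature-level fusion: concatenate all features, train one forest."""
          X = np.hstack([vis, ir, thermal])
          return RandomForestClassifier(n_estimators=200).fit(X, labels)

      def dlf_rf_predict(vis, ir, thermal, labels, vis_new, ir_new, thermal_new):
          """Decision-level fusion: one forest per feature set, majority vote."""
          votes = []
          for X, X_new in ((vis, vis_new), (ir, ir_new), (thermal, thermal_new)):
              clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
              votes.append(clf.predict(X_new))
          votes = np.vstack(votes)
          # majority vote across the three classifiers for each pixel
          return np.array([np.bincount(col).argmax() for col in votes.T])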

  19. New Cloud Science from the New ARM Cloud Radar Systems (Invited)

    NASA Astrophysics Data System (ADS)

    Wiscombe, W. J.

    2010-12-01

    The DOE ARM Program is deploying over $30M worth of scanning polarimetric Doppler radars at its four fixed and two mobile sites, with the object of advancing cloud lifecycle science, and cloud-aerosol-precipitation interaction science, by a quantum leap. As of 2011, there will be 13 scanning radar systems to complement its existing array of profiling cloud radars: C-band for precipitation, X-band for drizzle and precipitation, and two-frequency radars for cloud droplets and drizzle. This will make ARM the world’s largest science user of, and largest provider of data from, ground-based cloud radars. The philosophy behind this leap is actually quite simple, to wit: dimensionality really does matter. Just as 2D turbulence is fundamentally different from 3D turbulence, so observing clouds only at zenith provides a dimensionally starved, and sometimes misleading, picture of real clouds. In particular, the zenith view can say little or nothing about cloud lifecycle and the second indirect effect, nor about aerosol-precipitation interactions. It is not even particularly good at retrieving the cloud fraction (no matter how that slippery quantity is defined). This talk will review the history that led to this development and then discuss the aspirations for how this will propel cloud-aerosol-precipitation science forward. The step-by-step plan for translating raw radar data into information that is useful to cloud and aerosol scientists and climate modelers will be laid out, with examples from ARM’s recent scanning cloud radar deployments in the Azores and Oklahoma. In the end, the new systems should allow cloud systems to be understood as 4D coherent entities rather than dimensionally crippled 2D or 3D entities as observed by satellites and zenith-pointing radars.

  20. a Method for the Registration of Hemispherical Photographs and Tls Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  1. Study of the Radiative Properties of Inhomogeneous Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    Batey, Michael

    1996-01-01

    Clouds play an important role in the radiation budget of the atmosphere. A good understanding of how clouds interact with solar radiation is necessary when considering their effects in both general circulation models and climate models. This study examined the radiative properties of clouds in both an inhomogeneous cloud system and a simplified cloud system through the use of a Monte Carlo model. The purpose was to become more familiar with the radiative properties of clouds, especially absorption, and to investigate the excess absorption of solar radiation in observations over that calculated from theory. The first cloud system indicated that the absorptance actually decreased as the cloud's inhomogeneity increased, and that cloud forcing does not indicate any changes. The simplified cloud system looked at two different cases of absorption of solar radiation in the cloud. The absorptances calculated from the Monte Carlo model are compared to a correction method for calculating absorptances, and it is found that the method can over- or underestimate absorptances at cloud edges. Also, cloud edge effects due to solar radiation point to a possibility of overestimating the retrieved optical depth at the edge and indicate a possible way to correct for it. The effective cloud fraction (Ne) has long been calculated from a cloud's reflectance. From the reflectance it has been observed that the Ne for most cloud geometries is greater than the actual cloud fraction (Nc), making a cloud appear optically wider than it is. Recent studies we have performed used a Monte Carlo model to calculate the Ne of a cloud using not only the reflectance but also the absorptance. The Ne values derived from the absorptance in some of the Monte Carlo runs did not give the same results as those derived from the reflectance. This study also examined the inhomogeneity of clouds to find a relationship between larger and smaller scales, or wavelengths, of the cloud. Both Fourier transforms and wavelet transforms were used to analyze the liquid water content of marine stratocumulus clouds measured during the ASTEX project. From the analysis it was found that the energy in the cloud is not uniformly distributed but is greater at the larger scales than at the smaller scales. This was determined by examining the slope of the power spectrum, and by comparing the variability at two scales from a wavelet analysis.

  2. A cloud system for mobile medical services of traditional Chinese medicine.

    PubMed

    Hu, Nian-Ze; Lee, Chia-Ying; Hou, Mark C; Chen, Ying-Ling

    2013-12-01

    Many medical centers in Taiwan have started to provide Traditional Chinese Medicine (TCM) services for hospitalized patients. Due to the complexity of the TCM modality and the increasing need to provide TCM services for patients in different wards at distantly separated locations within the hospital, it is becoming difficult to manage the situation in the traditional way. A computerized system with mobile ability can therefore provide a practical solution to the challenge presented. This study develops a cloud system equipped with mobile devices to integrate electronic medical records, facilitate communication between medical workers, and improve the quality of TCM services for the hospitalized patients in a medical center. The system developed in the study includes mobile devices running the Android operating system and a PC as a cloud server. All the devices use the same TCM management system developed by the study. A database website is set up for information sharing. The cloud system allows users to access and update patients' medical information, which is of great help to medical workers for verifying patients' identification and giving proper treatments to patients. The information can then be wirelessly transmitted between medical personnel through the cloud system. Several quantitative and qualitative evaluation indexes are developed to measure the effectiveness of the cloud system on the quality of the TCM service. The cloud system is tested and verified based on a sample of hospitalized patients receiving acupuncture treatment at the Lukang Branch of Changhua Christian Hospital (CCH) in Taiwan. The result shows a great improvement in the operating efficiency of the TCM service, in that a significant saving in labor time can be attributed to the cloud system. In addition, the cloud system makes it easy to confirm a patient's identity by taking a picture of the patient upon receiving any medical treatment. The result also shows that the cloud system achieves significant improvement in the acupuncture treatment. All the acupuncture needles can now be removed at the time they are expected to be removed. Furthermore, through the cloud system, medical workers can access and update patients' medical information on-site, which provides a means of effective communication between medical workers. These functions allow us to make the most of the portability of the acupuncture service. The result shows that the contribution made by the cloud system to the TCM service is multi-dimensional: cost-effective, environmentally protective, performance-enhancing, etc. Developing and implementing such a cloud system for the TCM service in Taiwan represents a pioneering effort. We believe that the work we have done here can serve as a stepping-stone toward advancing the TCM service quality in the future.

  3. Serving ocean model data on the cloud

    USGS Publications Warehouse

    Meisinger, Michael; Farcas, Claudiu; Farcas, Emilia; Alexander, Charles; Arrott, Matthew; de La Beaujardiere, Jeff; Hubbard, Paul; Mendelssohn, Roy; Signell, Richard P.

    2010-01-01

    The NOAA-led Integrated Ocean Observing System (IOOS) and the NSF-funded Ocean Observatories Initiative Cyberinfrastructure Project (OOI-CI) are collaborating on a prototype data delivery system for numerical model output and other gridded data using cloud computing. The strategy is to take an existing distributed system for delivering gridded data and redeploy on the cloud, making modifications to the system that allow it to harness the scalability of the cloud as well as adding functionality that the scalability affords.

  4. An Automatic Cloud Mask Algorithm Based on Time Series of MODIS Measurements

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Wang, Yujie; Frey, R.

    2008-01-01

    The quality of aerosol retrievals and atmospheric correction depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned. They do not explicitly address the classical problem of cloud search, wherein a baseline clear-skies scene is defined for comparison. Here, we report on a new CM algorithm which explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area have a common textural pattern, defined by the surface topography, boundaries of rivers and lakes, distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of the refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly, and over Earth regions covered by snow and ice.
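
    A minimal Python sketch of the covariance idea described above: a gridded observation that correlates well with the clear-skies reference of the same area is treated as clear, and the reference is blended with the latest clear observation; the threshold and blending weight are illustrative, not MAIAC values:

      import numpy as np

      def is_clear(block_new, block_ref, min_corr=0.8):
          """Compare a new gridded image block with the clear-sky reference block."""
          a = block_new.ravel().astype(float)
          b = block_ref.ravel().astype(float)
          corr = np.corrcoef(a, b)[0, 1]          # normalized covariance
          return corr >= min_corr

      def update_reference(block_ref, block_new, weight=0.1):
          """Blend the latest clear-skies observation into the reference image."""
          return (1.0 - weight) * block_ref + weight * block_new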

  5. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification

    PubMed Central

    Kumar, G. Ajay; Patil, Ashok Kumar; Patil, Rekha; Park, Seong Sill; Chai, Young Ho

    2017-01-01

    Mapping the environment of a vehicle and localizing a vehicle within that unknown environment are complex issues. Although many approaches based on various types of sensory inputs and computational concepts have been successfully utilized for ground robot localization, there is difficulty in localizing an unmanned aerial vehicle (UAV) due to variation in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV integrated with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Considering the advantage of the typical geometric structure of indoor environments, the planar position of UAVs can be efficiently calculated from a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR scanner, which is mounted orthogonally to the primary LiDAR. Furthermore, a Kalman filter is used to derive the 3D position by fusing primary and secondary LiDAR data. Additionally, this work presents a novel method for its application in the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach. Classification of the pipeline is based on the pipe radius estimation considering the region of interest (ROI) and the typical angle. The ROI is selected by finding the nearest neighbors of the selected seed point in the pipeline point cloud, and the typical angle is estimated with the directional histogram. Experimental results are provided to determine the feasibility of the proposed navigation system and its integration with real-time application in industrial plant engineering. PMID:28574474
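
    As a minimal illustration of the kind of Kalman-filter fusion described above, the following scalar measurement update blends a predicted UAV altitude with the altitude measured by the vertically scanning secondary LiDAR; the noise variances are hypothetical:

      def kalman_update(z_pred, p_pred, z_meas, r_meas):
          """One measurement update for altitude z with variance p."""
          k = p_pred / (p_pred + r_meas)          # Kalman gain
          z_new = z_pred + k * (z_meas - z_pred)  # blend prediction and measurement
          p_new = (1.0 - k) * p_pred
          return z_new, p_new

      z, p = 2.00, 0.05          # prior altitude estimate [m] and its variance
      z, p = kalman_update(z, p, z_meas=2.10, r_meas=0.02)
      print(z, p)                # estimate pulled toward the LiDAR measurement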

  6. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    T.A. Kennaway; E.H. Helmer; M.A. Lefsky; T.A. Brandeis; K.R. Sherill

    2008-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  7. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    Todd Kennaway; Eileen Helmer; Michael Lefsky; Thomas Brandeis; Kirk Sherrill

    2009-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  8. Integrated Efforts for Analysis of Geophysical Measurements and Models.

    DTIC Science & Technology

    1997-09-26

    This contract supported investigations of integrated applications of physics, ephemerides... The report includes sections on regions and GPS data validations, and on PL-SCINDA visualization and analysis techniques (view controls, map selection)... and IR data, about cloudy pixels. Clustering and maximum likelihood classification algorithms categorize up to four cloud layers into stratiform or...

  9. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature

    PubMed Central

    Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-01-01

    Background Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. Objective The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. Methods PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET’s phenotype representation with PheKnow-Cloud’s by using PheKnow-Cloud’s experimental setup. In PIVET’s framework, we also introduce a statistical model trained on domain expert–verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. Results PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET’s analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Conclusions Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. PMID:29728351

  10. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to those of cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. A coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) is required to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through utilizing Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research and forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.

  11. SSeCloud: Using secret sharing scheme to secure keys

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang

    2017-08-01

    With the use of cloud storage services, one of the concerns is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with all kinds of risks at the same time. In this paper, we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to secure keys. The system encrypts uploaded files on the client side and splits the encryption keys into three shares. These are stored, respectively, by the user, the cloud storage provider and an alternative trusted third party. Any two of the parties can reconstruct the keys. Evaluation results for the prototype system show that SSeCloud provides high security without too much performance penalty.
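
    As an illustration of the kind of scheme described (not the SSeCloud implementation), a minimal 2-of-3 Shamir secret sharing sketch over a prime field, in which any two shares reconstruct the key (requires Python 3.8+ for the modular inverse via pow):

      import secrets

      P = 2**127 - 1  # a Mersenne prime larger than the toy key below

      def split(key_int, n=3):
          a1 = secrets.randbelow(P)                       # random slope of a degree-1 polynomial
          return [(x, (key_int + a1 * x) % P) for x in range(1, n + 1)]

      def reconstruct(share_a, share_b):
          (x1, y1), (x2, y2) = share_a, share_b
          # Lagrange interpolation at x = 0 for a degree-1 polynomial
          inv = pow(x2 - x1, -1, P)
          return (y1 * x2 - y2 * x1) * inv % P

      shares = split(0xC0FFEE)
      assert reconstruct(shares[0], shares[2]) == 0xC0FFEE   # any two shares suffice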

  12. Cloud and traditional videoconferencing technology for telemedicine and distance learning.

    PubMed

    Liu, Wei-Li; Zhang, Kai; Locatis, Craig; Ackerman, Michael

    2015-05-01

    Cloud-based videoconferencing versus traditional systems are described for possible use in telemedicine and distance learning. Differences between traditional and cloud-based videoconferencing systems are examined, and the methods for identifying and testing systems are explained. Findings are presented characterizing the cloud conferencing genre and its attributes versus traditional H.323 conferencing. Because the technology is rapidly evolving and needs to be evaluated in reference to local needs, it is strongly recommended that this or other reviews not be considered substitutes for personal hands-on experience. This review identifies key attributes of the technology that can be used to appraise the relevance of cloud conferencing technology and to determine whether migration from traditional technology to a cloud environment is warranted. An evaluation template is provided for assessing systems appropriateness.

  13. Daytime Cloud Property Retrievals Over the Arctic from Multispectral MODIS Data

    NASA Technical Reports Server (NTRS)

    Spangenberg, Douglas A.; Trepte, Qing; Minnis, Patrick; Uttal, Taneil

    2004-01-01

    Improving climate model predictions over Earth's polar regions requires a complete understanding of polar cloud properties. Passive satellite remote sensing techniques can be used to retrieve macro- and microphysical properties of polar cloud systems. However, over the Arctic, there is minimal contrast between clouds and the background snow surface observed in satellite data, especially for visible wavelengths. This makes it difficult to identify clouds and retrieve their properties from space. Variable snow and ice cover, temperature inversions, and the predominance of mixed-phase clouds further complicate cloud property identification. For this study, the operational Clouds and the Earth's Radiant Energy System (CERES) cloud mask is first used to discriminate clouds from the background surface in Terra Moderate Resolution Imaging Spectroradiometer (MODIS) data. A solar-infrared/infrared/near-infrared technique (SINT), first used by Platnick et al. (2001), is used here to retrieve cloud properties over snow- and ice-covered regions.

  14. Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Shenk, W. E.; Skillman, W.

    1974-01-01

    An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.
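
    The drift-cancellation argument can be made concrete with a small numerical sketch: the same INS drift enters both the measured cloud motion and the measured wind, so it subtracts out of their difference (vectors below are illustrative, in m/s):

      import numpy as np

      drift = np.array([1.5, -0.8])                 # unknown INS drift error
      true_cloud_motion = np.array([9.0, 3.0])
      true_wind = np.array([8.2, 2.4])

      measured_cloud_motion = true_cloud_motion + drift
      measured_wind = true_wind + drift

      relative = measured_cloud_motion - measured_wind   # drift cancels exactly
      print(relative, np.linalg.norm(relative))          # [0.8 0.6], 1.0 m/s, same as truth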

  15. Clouds and the Earth's Radiant Energy System (CERES) Algorithm Theoretical Basis Document. Volume 3; Cloud Analyses and Determination of Improved Top of Atmosphere Fluxes (Subsystem 4)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterization to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
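
    A minimal sketch of the radiance-to-flux step described above, assuming the standard angular-distribution-model form F = pi * L / R, where R is the anisotropic factor selected from the ADM for the observed scene; the numbers are illustrative, not CERES ADM values:

      import math

      def toa_flux(radiance_w_m2_sr, anisotropic_factor):
          """Convert a scanner radiance to a TOA flux using an ADM anisotropic factor."""
          return math.pi * radiance_w_m2_sr / anisotropic_factor

      print(toa_flux(radiance_w_m2_sr=80.0, anisotropic_factor=1.05))  # ~239 W m-2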

  16. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data and fault tolerance.

  17. Application of advanced data assimilation techniques to the study of cloud and precipitation feedbacks in the tropical climate system

    NASA Astrophysics Data System (ADS)

    Posselt, Derek J.

    The research documented in this study centers around two topics: evaluation of the response of precipitating cloud systems to changes in the tropical climate system, and assimilation of cloud and precipitation information from remote-sensing platforms. The motivation for this work proceeds from the following outstanding problems: (1) Use of models to study the response of clouds to perturbations in the climate system is hampered by uncertainties in cloud microphysical parameterizations. (2) Though there is an ever-growing set of available observations, cloud and precipitation assimilation remains a difficult problem, particularly in the tropics. (3) Though it is widely acknowledged that cloud and precipitation processes play a key role in regulating the Earth's response to surface warming, the response of the tropical hydrologic cycle to climate perturbations remains largely unknown. The above issues are addressed in the following manner. First, Markov chain Monte Carlo (MCMC) methods are used to quantify the sensitivity of the NASA Goddard Cumulus Ensemble (GCE) cloud resolving model (CRM) to changes in its cloud microphysical parameters. TRMM retrievals of precipitation rate, cloud properties, and radiative fluxes and heating rates over the South China Sea are then assimilated into the GCE model to constrain cloud microphysical parameters to values characteristic of convection in the tropics, and the resulting observation-constrained model is used to assess the response of the tropical hydrologic cycle to surface warming. The major findings of this study are the following: (1) MCMC provides an effective tool with which to evaluate both model parameterizations and the assumption of Gaussian statistics used in optimal estimation procedures. (2) Statistics of the tropical radiation budget and hydrologic cycle can be used to effectively constrain CRM cloud microphysical parameters. (3) For 2D CRM simulations run with and without shear, the precipitation efficiency of cloud systems increases with increasing sea surface temperature, while the high cloud fraction and outgoing shortwave radiation decrease.
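
    A minimal Metropolis sketch of the kind of MCMC parameter constraint described above, with a toy forward model and a single parameter standing in for the GCE microphysical parameters and TRMM-derived statistics:

      import numpy as np

      rng = np.random.default_rng(0)
      obs, obs_var = 5.0, 0.5**2                 # "observed" statistic and its error variance
      forward = lambda theta: 2.0 * theta        # toy forward model

      def log_post(theta):
          if not 0.0 < theta < 10.0:             # uniform prior bounds
              return -np.inf
          return -0.5 * (forward(theta) - obs) ** 2 / obs_var

      theta, samples = 1.0, []
      for _ in range(20000):
          prop = theta + rng.normal(scale=0.3)   # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          samples.append(theta)
      print(np.mean(samples[5000:]))             # posterior mean near 2.5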

  18. Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Tao, Y.

    2016-12-01

    Satellite-based precipitation estimation products have the advantage of high resolution and global coverage. However, they still suffer from insufficient accuracy. To accurately estimate precipitation from satellite data, two aspects are most important: sufficient precipitation information in the satellite observations and proper methodologies to extract such information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for Rain/No-Rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to the infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. Then we incorporate water vapor data (Model DL-IR + WV) to further improve the prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS), is used as a reference to compare the performance of both models in both winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) over the verification periods are 21.60% and 43.66% for Model DL-IR only and Model DL-IR + WV, respectively, compared to PERSIANN-CCS. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
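
    For reference, the Critical Success Index used above is CSI = hits / (hits + misses + false alarms); a minimal Python sketch computing it from boolean rain masks on a common grid:

      import numpy as np

      def critical_success_index(predicted, observed):
          hits = np.sum(predicted & observed)
          misses = np.sum(~predicted & observed)
          false_alarms = np.sum(predicted & ~observed)
          return hits / (hits + misses + false_alarms)

      pred = np.array([1, 1, 0, 0, 1], dtype=bool)
      obs = np.array([1, 0, 0, 1, 1], dtype=bool)
      print(critical_success_index(pred, obs))    # 2 hits, 1 miss, 1 false alarm -> 0.5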

  19. Deep Stromvil Photometry for Star Formation in the Head of the Pelican Nebula

    NASA Astrophysics Data System (ADS)

    Boyle, Richard P.; J., S.; Stott, J.; J., S.; Janusz, R.; J., S.; Straizys, V.

    2010-01-01

    The North America and Pelican Nebulae, and specifically the dark cloud L935, contain regions of active star formation (Herbig, G. H. 1958, ApJ, 128, 259). Previously we reported on Vatican telescope observations with Stromvil intermediate-band filters in a 12-arcmin field in the "Gulf of Mexico" region of L935. There we classified A, F, and G-type stars. However, the many faint K and M-type dwarf stars remain somewhat ambiguous in calibration and classification. Having attained reasonable progress, we turn to another part of L935 located near the Pelican head. This area includes the "bright rim", which is formed by dust and gas condensed by the light pressure of an unseen O-type star hidden behind the dense dark cloud. Straizys and Laugalys (2008, Baltic Astronomy, 17, 143) have identified this star to be one of the 2MASS objects with Av = 23 mag. A few concentrations of faint stars, V = 13 to 14 mag, are immersed in this dark region. Among these stars are a few known emission-line objects (T Tauri or post-T Tauri stars). About half a degree away are some photometric Vilnius standards which we use to calibrate our new field. We call on 2MASS data for correlative information. The Stromvil photometry also offers candidate stars for spectroscopic observations. The aim of this study in the Vilnius and Stromvil photometric systems is to classify stars down to V = 18 mag, to confirm the existence of the young star clusters, and to determine the distance of the cloud covering the suspected hidden ionizing star.

  20. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 4. Systems Analysis and Trade Studies

    DTIC Science & Technology

    1976-03-01

    ...including the Advanced Prediction Model for the global atmosphere, as well as very fine grid cloud models and cloud probability models. Some of the new requirements that will be supported with this system are... ...with the mapping and gridding function (input and output)? Should the capability exist to interface raw ungridded data with the SID interface...

  1. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. The method is composed of three stages, across which three types of primitives are utilized, i.e., smooth surface, rough surface, and individual point. In the first stage, the input ALS data are divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of the rough surfaces are extracted. Then, points in the rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of the existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.

  2. Entrainment and cloud evaporation deduced from the stable isotope chemistry of clouds during ORACLES

    NASA Astrophysics Data System (ADS)

    Noone, D.; Henze, D.; Rainwater, B.; Toohey, D. W.

    2017-12-01

    The magnitude of the influence of biomass burning aerosols on cloud and rain processes is controlled by a series of processes which are difficult to measure directly. A consequence of this limitation is the emergence of significant uncertainty in the representation of cloud-aerosol interactions in models and the resulting cloud radiative forcing. Interaction between cloud and the regional atmosphere causes evaporation, and the rate of evaporation at cloud top is controlled in part by entrainment of air from above, which exposes saturated cloud air to drier conditions. Similarly, the size of cloud droplets also controls evaporation rates, which in turn is linked to the abundance of condensation nuclei. To quantify the dependence of cloud properties on biomass burning aerosols, the dynamic relationship of evaporation, drop size and entrainment with aerosol state is evaluated for stratiform clouds in the southeast Atlantic Ocean. These clouds are seasonally exposed to biomass burning plumes from agricultural fires in southern Africa. Measurements of the stable isotope ratios of cloud water and total water are used to deduce the disequilibrium responsible for evaporation within clouds. Disequilibrium is identified by the relationship between hydrogen and oxygen isotope ratios of water vapor and cloud water in and near clouds. To obtain the needed information, a custom-built, dual inlet system was deployed alongside isotopic gas analyzers on the NASA Orion aircraft as part of the Observations of Aerosols above Clouds and their Interactions (ORACLES) campaign. The sampling system obtains both total water and cloud liquid content for the population of droplets above 7 micrometer diameter. The thermodynamic modeling required to link the observed equilibrium and kinetic isotopic effects to evaporation and entrainment is described, and the performance of the measurement system is discussed.

  3. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  4. A cloud-based production system for information and service integration: an internet of things case study on waste electronics

    NASA Astrophysics Data System (ADS)

    Wang, Xi Vincent; Wang, Lihui

    2017-08-01

    Cloud computing is the new enabling technology that offers centralised computing, flexible data storage and scalable services. In the manufacturing context, it is possible to utilise the Cloud technology to integrate and provide industrial resources and capabilities in terms of Cloud services. In this paper, a function block-based integration mechanism is developed to connect various types of production resources. A Cloud-based architecture is also deployed to offer a service pool which maintains these resources as production services. The proposed system provides a flexible and integrated information environment for the Cloud-based production system. As a specific type of manufacturing, Waste Electrical and Electronic Equipment (WEEE) remanufacturing experiences difficulties in system integration, information exchange and resource management. In this research, WEEE is selected as an example of the Internet of Things to demonstrate how the obstacles and bottlenecks are overcome with the help of a Cloud-based informatics approach. In the case studies, the WEEE recycle/recovery capabilities are also integrated and deployed as flexible Cloud services. Supporting mechanisms and technologies are presented and evaluated towards the end of the paper.

  5. Major Characteristics of Southern Ocean Cloud Regimes and Their Effects on the Energy Budget

    NASA Technical Reports Server (NTRS)

    Haynes, John M.; Jakob, Christian; Rossow, William B.; Tselioudis, George; Brown, Josephine

    2011-01-01

    Clouds over the Southern Ocean are often poorly represented by climate models, but they make a significant contribution to the top-of-atmosphere (TOA) radiation balance, particularly in the shortwave portion of the energy spectrum. This study seeks to better quantify the organization and structure of Southern Hemisphere midlatitude clouds by combining measurements from active and passive satellite-based datasets. Geostationary and polar-orbiter satellite data from the International Satellite Cloud Climatology Project (ISCCP) are used to quantify large-scale, recurring modes of cloudiness, and active observations from CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) are used to examine vertical structure, radiative heating rates, and precipitation associated with these clouds. It is found that cloud systems are organized into eight distinct regimes and that ISCCP overestimates the midlevel cloudiness of these regimes. All regimes contain a relatively high occurrence of low cloud, with 79% of all cloud layers observed having tops below 3 km, but multiple-layered cloud systems are present in approximately 34% of observed cloud profiles. The spatial distribution of regimes varies according to season, with cloud systems being geometrically thicker, on average, during the austral winter. Those regimes found to be most closely associated with midlatitude cyclones produce precipitation the most frequently, although drizzle is extremely common in low-cloud regimes. The regimes associated with cyclones have the highest in-regime shortwave cloud radiative effect at the TOA, but the low-cloud regimes, by virtue of their high frequency of occurrence over the oceans, dominate both TOA and surface shortwave effects in this region as a whole.

  6. Atmospheric cloud physics laboratory project study

    NASA Technical Reports Server (NTRS)

    Schultz, W. E.; Stephen, L. A.; Usher, L. H.

    1976-01-01

    Engineering studies were performed for the Zero-G Cloud Physics Experiment liquid cooling and air pressure control systems. A total of four concepts for the liquid cooling system was evaluated, two of which were found to closely approach the systems requirements. Thermal insulation requirements, system hardware, and control sensor locations were established. The reservoir sizes and initial temperatures were defined as well as system power requirements. In the study of the pressure control system, fluid analyses by the Atmospheric Cloud Physics Laboratory were performed to determine flow characteristics of various orifice sizes, vacuum pump adequacy, and control systems performance. System parameters predicted in these analyses as a function of time include the following for various orifice sizes: (1) chamber and vacuum pump mass flow rates, (2) the number of valve openings or closures, (3) the maximum cloud chamber pressure deviation from the allowable, and (4) cloud chamber and accumulator pressure.

  7. Using regime analysis to identify the contribution of clouds to surface temperature errors in weather and climate models

    DOE PAGES

    Van Weverberg, Kwinten; Morcrette, Cyril J.; Ma, Hsi-Yen; ...

    2015-06-17

    Many global circulation models (GCMs) exhibit a persistent bias in the 2 m temperature over the midlatitude continents, present in short-range forecasts as well as long-term climate simulations. A number of hypotheses have been proposed, revolving around deficiencies in the soil–vegetation–atmosphere energy exchange, poorly resolved low-level boundary-layer clouds or misrepresentations of deep-convective storms. A common approach to evaluating model biases focuses on the model-mean state. However, this makes difficult an unambiguous interpretation of the origins of a bias, given that biases are the result of the superposition of impacts of clouds and land-surface deficiencies over multiple time steps. This article presents a new methodology to objectively detect the role of clouds in the creation of a surface warm bias. A unique feature of this study is its focus on temperature-error growth at the time-step level. It is shown that compositing the temperature-error growth by the coinciding bias in total downwelling radiation provides unambiguous evidence for the role that clouds play in the creation of the surface warm bias during certain portions of the day. Furthermore, the application of an objective cloud-regime classification allows for the detection of the specific cloud regimes that matter most for the creation of the bias. We applied this method to two state-of-the-art GCMs that exhibit a distinct warm bias over the Southern Great Plains of the USA. Our analysis highlights that, in one GCM, biases in deep-convective and low-level clouds contribute most to the temperature-error growth in the afternoon and evening respectively. In the second GCM, deep clouds persist too long in the evening, leading to a growth of the temperature bias. In conclusion, the reduction of the temperature bias in both models in the morning and the growth of the bias in the second GCM in the afternoon could not be assigned to a cloud issue, but are more likely caused by a land-surface deficiency.
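
    As a rough illustration of the compositing idea (variable names and values are invented, not the study's data), the sketch below bins time-step temperature-error growth by the coinciding downwelling-radiation bias:

    ```python
    # Minimal sketch: composite the time-step temperature-error growth by the
    # coinciding bias in total downwelling radiation, so that cloud-related and
    # cloud-free error growth can be separated. All values are synthetic.
    import numpy as np

    n = 10_000
    rng = np.random.default_rng(1)
    rad_bias   = rng.normal(0.0, 40.0, n)                       # W m-2, model minus observed
    t_err_grow = 0.01 * rad_bias + rng.normal(0.0, 0.2, n)      # K per time step (synthetic)

    bins = np.arange(-100, 101, 20)                             # radiation-bias bins
    idx = np.digitize(rad_bias, bins)
    for i in range(1, len(bins)):
        sel = idx == i
        if sel.any():
            print(f"bias bin [{bins[i-1]:4d},{bins[i]:4d}) W/m2: "
                  f"mean dT growth = {t_err_grow[sel].mean():+.3f} K/step")
    ```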

  8. Cloud and Traditional Videoconferencing Technology for Telemedicine and Distance Learning

    PubMed Central

    Zhang, Kai; Locatis, Craig; Ackerman, Michael

    2015-01-01

    Introduction: Cloud-based videoconferencing versus traditional systems are described for possible use in telemedicine and distance learning. Materials and Methods: Differences between traditional and cloud-based videoconferencing systems are examined, and the methods for identifying and testing systems are explained. Findings are presented characterizing the cloud conferencing genre and its attributes versus traditional H.323 conferencing. Results: Because the technology is rapidly evolving and needs to be evaluated in reference to local needs, it is strongly recommended that this or other reviews not be considered substitutes for personal hands-on experience. Conclusions: This review identifies key attributes of the technology that can be used to appraise the relevance of cloud conferencing technology and to determine whether migration from traditional technology to a cloud environment is warranted. An evaluation template is provided for assessing system appropriateness. PMID:25785761

  9. Biotoxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system.

    PubMed

    Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei

    2017-06-01

    A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.

  10. Improving the Accuracy of Cloud Detection Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90% but performance varies seasonally, regionally and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms to determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, over thousands of man-hours, to determine where the algorithms succeed or fail and to fine-tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is the ideal methodology to employ for cloud detection as it runs efficiently on large datasets, is robust to outliers and noise and is able to deal with highly correlated predictors, such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii and includes the use of visible and infrared imagery, topography and multi-spectral image products as predictors. The development of the cloud detection technique is realized in three steps. First, tuning of the RF models is completed to identify the optimal values of the number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of that used during training, and the predictions are compared to truth, the CMG cloud mask. Initial results show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results are a testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.
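
    A minimal sketch of the kind of tuning step described above (synthetic data, not the study's code or predictors), searching over the number of trees and predictors per split with scikit-learn:

    ```python
    # Rough sketch: grid search over number of trees and predictors-per-split for a
    # Random Forest cloud/clear classifier, then score against a reference mask.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(2)
    X = rng.random((2000, 6))                  # e.g. VIS, IR channels, topography, products
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # synthetic clear(0)/cloud(1) truth

    param_grid = {"n_estimators": [100, 300, 500], "max_features": [2, 3, 4]}
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
    search.fit(X, y)
    print("best parameters:", search.best_params_, "cv accuracy:", search.best_score_)
    ```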

  11. The Q Continuum: Encounter with the Cloud Mask

    NASA Astrophysics Data System (ADS)

    Ackerman, S. A.; Frey, R.; Holz, R.; Philips, C.; Dutcher, S.

    2017-12-01

    We are developing a common cloud mask for MODIS and VIIRS observations, referred to as the MODIS VIIRS Continuity Mask (MVCM). Our focus is on extending the MODIS-heritage cloud detection approach in order to generate appropriate climate data records for clouds and climate studies. The MVCM is based on heritage from the MODIS cloud mask (MOD35 and MYD35) and employs a series of tests on MODIS reflectances and brightness temperatures. Cloud detection is based on contrasts (i.e., cloud versus background surface) at pixel resolution. The MVCM follows the same approach. These cloud masks use multiple cloud detection tests to indicate the confidence level that the observation is of a clear-sky scene. The outcome of a test ranges from 0 (cloudy) to 1 (clear-sky scene). Because of overlap in the sensitivities of the various spectral tests to the type of cloud, each test is considered in one of several groups. The final cloud mask is determined from the product of the minimum confidence of each group and is referred to as the Q value as defined in Ackerman et al (1998). In MOD35 and MYD35 processing, the Q value is not output; rather, predetermined Q thresholds determine the result: if Q ≥ 0.99 the scene is clear; 0.95 ≤ Q < 0.99 the pixel is probably clear; 0.66 ≤ Q < 0.95 is probably cloudy; and Q < 0.66 is cloudy. Q is thus represented discretely and not as a continuum. For the MVCM, the numerical value of Q is output along with the classification of clear, probably clear, probably cloudy, and cloudy. Through comparisons with collocated CALIOP and MODIS observations, we will assess the categorization of the Q values as a function of scene type. While validation studies have indicated the utility and statistical correctness of the cloud mask approach, the algorithm does not possess immeasurable power and perfection. This comparison will assess the time and space dependence of Q and assure that the laws of physics are followed, at least according to normal human notions. Using CALIOP as representing truth, a receiver operating characteristic curve (ROC) will be analyzed to determine the optimum Q for various scenes and seasons, thus providing a continuum of discriminating thresholds.
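
    A small sketch of the confidence logic described above (the per-test confidences are invented; the thresholds follow the MOD35-style values quoted in the abstract):

    ```python
    # Sketch: each spectral test yields a clear-sky confidence in [0, 1]; Q is the
    # product of the minimum confidence within each test group, then thresholded
    # into the four MOD35-style categories.
    import numpy as np

    groups = [
        np.array([0.98, 0.98]),   # group 1 test confidences (hypothetical)
        np.array([0.97, 0.99]),   # group 2
        np.array([1.00]),         # group 3
    ]
    q = np.prod([g.min() for g in groups])

    if q >= 0.99:
        label = "clear"
    elif q >= 0.95:
        label = "probably clear"
    elif q >= 0.66:
        label = "probably cloudy"
    else:
        label = "cloudy"
    print(f"Q = {q:.3f} -> {label}")
    ```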

  12. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    PubMed

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
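
    The abstract does not give the controller equations; purely as an illustration of the general idea of active delay compensation, the sketch below propagates a simple linear agent model across a known network delay before applying state feedback. The matrices, gain and delay are hypothetical and not taken from the paper.

    ```python
    # Illustration only (not the paper's scheme): compensate a known network delay of
    # d steps by propagating a linear agent model x_{k+1} = A x_k + B u_k forward from
    # the last received state, then computing feedback on the predicted state.
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical agent dynamics
    B = np.array([[0.0], [0.1]])
    K = np.array([[0.5, 1.0]])               # hypothetical stabilising gain

    def delayed_feedback(x_delayed, u_history, d):
        """Predict the current state across d delay steps, then apply u = -K x_pred."""
        x_pred = x_delayed.copy()
        for u in u_history[-d:]:             # controls sent during the delay interval
            x_pred = A @ x_pred + B @ u
        return -K @ x_pred

    u_hist = [np.array([[0.1]]), np.array([[0.05]])]
    print(delayed_feedback(np.array([[1.0], [0.0]]), u_hist, d=2))
    ```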

  13. Assessment of Aerosol Distributions from GEOS-5 Using the CALIPSO Feature Mask

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth

    2010-01-01

    A-train sensors such as MODIS, MISR, and CALIPSO are used to determine aerosol properties, and, in the process, provide a means of estimating aerosol type (e.g. smoke vs. dust). Correct classification of aerosol type is important for climate assessment, air quality applications, and for comparisons and analysis with aerosol transport models. The Aerosols-Clouds-Ecosystems (ACE) satellite mission proposed in the NRC Decadal Survey describes a next generation aerosol and cloud suite similar to the current A-train, including a lidar. The future ACE lidar must be able to determine aerosol type effectively in conjunction with modeling activities to achieve ACE objectives. Here we examine the current capabilities of CALIPSO and the NASA Goddard Earth Observing System general circulation model and data assimilation system (GEOS-5), to place future ACE needs in context. The CALIPSO level 2 feature mask includes vertical profiles of aerosol layers classified by type. GEOS-5 provides global 3D aerosol mass for sulfate, sea salt, dust, and black and organic carbon. A GEOS aerosol scene classification algorithm has been developed to provide estimates of aerosol mixtures and extinction profiles along the CALIPSO orbit track. In previous work, initial comparisons between GEOS-5 derived aerosol mixtures and CALIPSO derived aerosol types were presented for July 2007. In general, the results showed that model and lidar derived aerosol types did not agree well in the boundary layer. Agreement was poor over Europe, where CALIPSO indicated the presence of dust and pollution mixtures yet GEOS-5 was dominated by pollution with little dust. Over the ocean in the tropics, the model appeared to contain less sea salt than detected by CALIPSO, yet at high latitudes the situation was reversed. Agreement between CALIPSO and GEOS-5 aerosol types improved above the boundary layer, primarily in dust and smoke dominated regions. At higher altitudes (> 5 km), the model contained aerosol layers not detected by CALIPSO. Here we present new results for a full-year study using the new Version 3 CALIPSO data and most recent GEOS-5 model results.

  14. Cloud manufacturing: from concept to practice

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; Zhao, Chun; Chai, Xudong; Zhao, Xinpei

    2015-02-01

    The concept of cloud manufacturing is emerging as a promising new manufacturing paradigm, as well as a business model, which is reshaping the service-oriented, highly collaborative, knowledge-intensive and eco-efficient manufacturing industry. However, the basic concepts about cloud manufacturing are still in discussion. Both academia and industry will need to have a commonly accepted definition of cloud manufacturing, as well as further guidance and recommendations on how to develop and implement cloud manufacturing. In this paper, we review some of the research work and clarify some fundamental terminologies in this field. Further, we developed a cloud manufacturing system which may serve as an application example. From a systematic and practical perspective, the key requirements of cloud manufacturing platforms are investigated, and then we propose a cloud manufacturing platform prototype, MfgCloud. Finally, a public cloud manufacturing system for small- and medium-sized enterprises (SMEs) is presented. This paper presents a new perspective for cloud manufacturing, as well as a cloud-to-ground solution. The integrated solution proposed in this paper, including the terminology, MfgCloud, and applications, can push forward this new paradigm from concept to practice.

  15. Application of the CloudSat and NEXRAD Radars Toward Improvements in High Resolution Operational Forecasts

    NASA Technical Reports Server (NTRS)

    Molthan, A. L.; Haynes, J. A.; Case, J. L.; Jedlovec, G. L.; Lapenta, W. M.

    2008-01-01

    As computational power increases, operational forecast models are performing simulations with higher spatial resolution, allowing for the transition from sub-grid scale cloud parameterizations to an explicit forecast of cloud characteristics and precipitation through the use of single- or multi-moment bulk water microphysics schemes. Investments in space-borne and terrestrial remote sensing have produced the NASA CloudSat Cloud Profiling Radar and the NOAA National Weather Service NEXRAD system, each providing observations related to the bulk properties of clouds and precipitation through measurements of reflectivity. CloudSat and NEXRAD system radars observed light to moderate snowfall in association with a cold-season, midlatitude cyclone traversing the Central United States in February 2007. Such winter storm systems are responsible for widespread cloud cover and various types of precipitation, are of economic consequence, and pose a challenge to operational forecasters. This event is simulated with the Weather Research and Forecast (WRF) Model, utilizing the NASA Goddard Cumulus Ensemble microphysics scheme. Comparisons are made between WRF-simulated and observed reflectivity available from the CloudSat and NEXRAD systems. The application of CloudSat reflectivity is made possible through the QuickBeam radiative transfer model, applied cautiously in light of single-scattering characteristics and spherical target assumptions. Significant differences are noted within modeled and observed cloud profiles, based upon simulated reflectivity, and modifications to the single-moment scheme are tested through a supplemental WRF forecast that incorporates a temperature-dependent snow crystal size distribution.

  16. Effects of interplanetary magnetic clouds, interaction regions, and high-speed streams on the transient modulation of galactic cosmic rays

    NASA Astrophysics Data System (ADS)

    Singh, Y. P.; Badruddin

    2007-02-01

    Interplanetary manifestations of coronal mass ejections (CMEs) with specific plasma and field properties, called ``interplanetary magnetic clouds,'' have been observed in the heliosphere since the mid-1960s. Depending on their associated features, a set of observed magnetic clouds identified at 1 AU were grouped in four different classes using data over 4 decades: (1) interplanetary magnetic clouds moving with the ambient solar wind (MC structure), (2) magnetic clouds moving faster than the ambient solar wind and forming a shock/sheath structure of compressed plasma and field ahead of it (SMC structure), (3) magnetic clouds ``pushed'' by the high-speed streams from behind, forming an interaction region between the two (MIH structure), and (4) shock-associated magnetic clouds followed by high-speed streams (SMH structure). This classification into different groups led us to study the role, effect, and the relative importance of (1) closed field magnetic cloud structure with low field variance, (2) interplanetary shock and magnetically turbulent sheath region, (3) interaction region with large field variance, and (4) the high-speed solar wind stream coming from the open field regions, in modulating the galactic cosmic rays (GCRs). MC structures are responsible for transient decrease with fast recovery. SMC structures are responsible for fast decrease and slow recovery, MIH structures produce depression with slow decrease and slow recovery, and SMH structures are responsible for fast decrease with very slow recovery. Simultaneous variations of GCR intensity, solar plasma velocity, interplanetary magnetic field strength, and its variance led us to study the relative effectiveness of different structures as well as interplanetary plasma/field parameters. The possible roles of the magnetic field, its topology, field turbulence, and the high-speed streams in influencing the amplitude and time profile of the resulting decreases in GCR intensity are also discussed.

  17. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software, developed by Stanford University in the United States, and the intelligent image analysis software eCognition as the experiment platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) providing elevation information; finally, the image feature knowledge, spectral indices and elevation information are combined to build the geographic ontology semantic network model used for urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs especially well for building classification. The method not only exploits the advantages of multi-source spatial data, such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.
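
    A small sketch of the nDSM step mentioned in the abstract (elevation values invented): subtracting a bare-earth model from the first-return surface model yields object heights above ground.

    ```python
    # Sketch: normalized DSM = DSM - DTM, clipped at zero; real rasters would come
    # from the Lidar workflow rather than the toy arrays below.
    import numpy as np

    dsm = np.array([[102.0, 105.5], [101.8, 110.2]])   # surface elevations (m)
    dtm = np.array([[101.5, 101.6], [101.7, 101.9]])   # bare-earth elevations (m)

    ndsm = np.clip(dsm - dtm, 0.0, None)               # height above ground
    print(ndsm)   # tall values suggest buildings/trees, near-zero suggests ground
    ```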

  18. Landsat Thematic Mapper Image Mosaic of Colorado

    USGS Publications Warehouse

    Cole, Christopher J.; Noble, Suzanne M.; Blauer, Steven L.; Friesen, Beverly A.; Bauer, Mark A.

    2010-01-01

    The U.S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC) produced a seamless, cloud-minimized remotely-sensed image spanning the State of Colorado. Multiple orthorectified Landsat 5 Thematic Mapper (TM) scenes collected during 2006-2008 were spectrally normalized via reflectance transformation and linear regression based upon pseudo-invariant features (PIFS) following the removal of clouds. Individual Landsat scenes were then mosaicked to form a six-band image composite spanning the visible to shortwave infrared spectrum. This image mosaic, presented here, will also be used to create a conifer health classification for Colorado in Scientific Investigations Map 3103. An archive of past and current Landsat imagery exists and is available to the scientific community (http://glovis.usgs.gov/), but significant pre-processing was required to produce a statewide mosaic from this information. Much of the data contained perennial cloud cover that complicated analysis and classification efforts. Existing Landsat mosaic products, typically three band image composites, did not include the full suite of multispectral information necessary to produce this assessment, and were derived using data collected in 2001 or earlier. A six-band image mosaic covering Colorado was produced. This mosaic includes blue (band 1), green (band 2), red (band 3), near infrared (band 4), and shortwave infrared information (bands 5 and 7). The image composite shown here displays three of the Landsat bands (7, 4, and 2), which are sensitive to the shortwave infrared, near infrared, and green ranges of the electromagnetic spectrum. Vegetation appears green in this image, while water looks black, and unforested areas appear pink. The lines that may be visible in the on-screen version of the PDF are an artifact of the export methods used to create this file. The file should be viewed at 150 percent zoom or greater for optimum viewing.
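
    A minimal sketch of the pseudo-invariant-feature normalization step described above (all values invented): fit a per-band linear regression between the subject scene and the reference scene at PIF pixels, then apply the gain and offset to the whole subject band.

    ```python
    # Sketch: linear relative radiometric normalization using PIF pixels.
    import numpy as np

    ref_pif  = np.array([0.12, 0.18, 0.25, 0.31, 0.40])   # reference-scene reflectance at PIFs
    subj_pif = np.array([0.10, 0.16, 0.22, 0.28, 0.36])   # subject-scene reflectance at PIFs

    gain, offset = np.polyfit(subj_pif, ref_pif, 1)        # least squares: ref = gain*subj + offset
    subject_band = np.array([[0.11, 0.30], [0.20, 0.05]])
    normalized   = gain * subject_band + offset            # subject band mapped to reference scale
    print(gain, offset)
    print(normalized)
    ```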

  19. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

    The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85 %, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94 % and takes less than 20 seconds to process 50,000 points with an average point density of 16  points/cm2. Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
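
    As a hedged sketch of the mean-shift segmentation step (synthetic points and an assumed bandwidth, not the study's data), using scikit-learn's MeanShift:

    ```python
    # Sketch: cluster 3D points so that compact blobs (e.g. wheat ears) fall into
    # separate segments, which would then be passed to an ear/non-ear classifier.
    import numpy as np
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(3)
    ears   = rng.normal([0.0, 0.0, 1.0], 0.02, (200, 3))   # dense blob near canopy top
    leaves = rng.normal([0.3, 0.1, 0.6], 0.10, (300, 3))   # looser vegetative returns
    points = np.vstack([ears, leaves])

    labels = MeanShift(bandwidth=0.15).fit_predict(points)
    print("number of segments:", labels.max() + 1)
    ```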

  20. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing, provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
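
    A minimal sketch of the K-Nearest-Neighbour plus PCA coplanarity step (synthetic points; the planarity measure below is one common choice, not necessarily the authors'):

    ```python
    # Sketch: for each point, take its k nearest neighbours, compute the neighbourhood
    # covariance, and use the smallest eigenvalue relative to the total variance as a
    # planarity score (close to 1 for locally planar patches).
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(4)
    pts = np.column_stack([rng.random(1000), rng.random(1000), 0.01 * rng.random(1000)])

    k = 20
    _, idx = NearestNeighbors(n_neighbors=k).fit(pts).kneighbors(pts)
    centered = pts[idx] - pts[idx].mean(axis=1, keepdims=True)    # (n, k, 3) neighbourhoods
    covs = np.einsum("nki,nkj->nij", centered, centered) / k      # per-point covariance
    eigvals = np.linalg.eigvalsh(covs)                            # ascending eigenvalues
    planarity = 1.0 - eigvals[:, 0] / eigvals.sum(axis=1)
    print("median planarity:", np.median(planarity))
    ```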

  1. Mobile healthcare information management utilizing Cloud Computing and Android OS.

    PubMed

    Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias

    2010-01-01

    Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
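
    A hedged sketch (not the paper's implementation) of how a mobile back end might push an encrypted record to Amazon S3 with boto3; the bucket, object key and payload are hypothetical, and real deployments would add the authentication and DICOM/JPEG2000 handling described above.

    ```python
    # Sketch: store an encrypted health-record blob in Amazon S3.
    import boto3

    s3 = boto3.client("s3")                       # credentials resolved from the environment
    record_bytes = b"<encrypted patient record>"  # placeholder payload

    s3.put_object(
        Bucket="example-ehr-bucket",              # hypothetical bucket
        Key="patients/12345/record.json.enc",     # hypothetical object key
        Body=record_bytes,
        ServerSideEncryption="AES256",
    )
    ```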

  2. Cloud cover analysis associated to cut-off low-pressure systems over Europe using Meteosat Imagery

    NASA Astrophysics Data System (ADS)

    Delgado, G.; Redaño, A.; Lorente, J.; Nieto, R.; Gimeno, L.; Ribera, P.; Barriopedro, D.; García-Herrera, R.; Serrano, A.

    2007-04-01

    This paper reports a cloud cover analysis of cut-off low pressure systems (COL) using a pattern recognition method applied to IR and VIS bispectral histograms. A total of 35 COL occurrences were studied over five years (1994-1998). Five cloud types were identified in COLs, of which high clouds (HCC) and deep convective clouds (DCC) were found to be the most relevant to characterize COL systems, though not the most numerous. Cloud cover in a COL is highly dependent on its stage of development, but a higher percentage of cloud cover is always present in the frontal zone, attributable to higher amounts of high and deep convective clouds. These general characteristics are most marked during the first stage (when the amplitude of the geopotential wave increases) and second stage (characterized by the development of a cold upper-level low), with the closed cyclonic circulation minimizing differences between the rearward and frontal zones during the third stage. The probability of heavy rains during this stage decreases considerably. The centres of mass of high and deep convective clouds move towards the COL-axis centre during COL evolution.
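
    As a small illustration of a bispectral IR/VIS histogram of the kind used for the pattern recognition described above (pixel values are synthetic, bin choices assumed):

    ```python
    # Sketch: 2D histogram of IR brightness temperature vs. VIS reflectance; cold and
    # bright bins (low BT, high reflectance) would point to deep convective cloud.
    import numpy as np

    rng = np.random.default_rng(5)
    ir_bt   = rng.normal(260.0, 25.0, 5000)    # IR brightness temperature (K)
    vis_ref = rng.uniform(0.0, 1.0, 5000)      # VIS reflectance

    hist, ir_edges, vis_edges = np.histogram2d(
        ir_bt, vis_ref, bins=[np.arange(190, 311, 10), np.arange(0.0, 1.05, 0.1)]
    )
    print(hist.shape, hist.sum())
    ```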

  3. Variable Stars in Large Magellanic Cloud Globular Clusters. II. NGC 1786

    NASA Astrophysics Data System (ADS)

    Kuehn, Charles A.; Smith, Horace A.; Catelan, Márcio; Pritzl, Barton J.; De Lee, Nathan; Borissova, Jura

    2012-12-01

    This is the second in a series of papers studying the variable stars in Large Magellanic Cloud globular clusters. The primary goal of this series is to study how RR Lyrae stars in Oosterhoff-intermediate systems compare to their counterparts in Oosterhoff I/II systems. In this paper, we present the results of our new time-series B-V photometric study of the globular cluster NGC 1786. A total of 65 variable stars were identified in our field of view. These variables include 53 RR Lyraes (27 RRab, 18 RRc, and 8 RRd), 3 classical Cepheids, 1 Type II Cepheid, 1 Anomalous Cepheid, 2 eclipsing binaries, 3 Delta Scuti/SX Phoenicis variables, and 2 variables of undetermined type. Photometric parameters for these variables are presented. We present physical properties for some of the RR Lyrae stars, derived from Fourier analysis of their light curves. We discuss several different indicators of Oosterhoff type which indicate that the Oosterhoff classification of NGC 1786 is not as clear cut as what is seen in most globular clusters. Based on observations taken with the SMARTS 1.3 m telescope operated by the SMARTS Consortium and observations taken at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  4. Deployment of the third-generation infrared cloud imager: A two-year study of Arctic clouds at Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Nugent, Paul Winston

    Cloud cover is an important but poorly understood component of current climate models, and although climate change is most easily observed in the Arctic, cloud data in the Arctic is unreliable or simply unavailable. Ground-based infrared cloud imaging has the potential to fill this gap. This technique uses a thermal infrared camera to observe cloud amount, cloud optical depth, and cloud spatial distribution at a particular location. The Montana State University Optical Remote Sensor Laboratory has developed the ground-based Infrared Cloud Imager (ICI) instrument to measure spatial and temporal cloud data. Building an ICI for Arctic sites required the system to be engineered to overcome the challenges of this environment. Of particular challenge was keeping the system calibration and data processing accurate through the severe temperature changes. Another significant challenge was that weak emission from the cold, dry Arctic atmosphere pushed the camera used in the instrument to its operational limits. To gain an understanding of the operation of the ICI systems for the Arctic and to gather critical data on Arctic clouds, a prototype Arctic ICI was deployed in Barrow, AK from July 2012 through July 2014. To understand the long-term operation of an ICI in the Arctic, a study was conducted of the ICI system accuracy in relation to co-located active and passive sensors. Understanding the operation of this system in the Arctic environment required careful characterization of the full optical system, including the lens, filter, and detector. Alternative data processing techniques using decision trees and support vector machines were studied to improve data accuracy and reduce dependence on auxiliary instrument data, and the resulting accuracy is reported here. The work described in this project was part of the effort to develop a fourth-generation ICI ready to be deployed in the Arctic. This system will serve a critical role in developing our understanding of cloud cover in the Arctic, an important but poorly understood region of the world.

  5. The Effect of Environmental Conditions on Tropical Deep Convective Systems Observed from the TRMM Satellite

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Wielicki, Bruce A.; Minnis, Patrick; Chambers, Lin H.; Xu, Kuan-Man; Hu, Yongxiang; Fan, Tai-Fang

    2005-01-01

    This study uses measurements of radiation and cloud properties taken between January and August 1998 by three Tropical Rainfall Measuring Mission (TRMM) instruments, the Clouds and the Earth's Radiant Energy System (CERES) scanner, the TRMM Microwave Imager (TMI), and the Visible and InfraRed Scanner (VIRS), to evaluate the variations of tropical deep convective systems (DCS) with sea surface temperature (SST) and precipitation. This study finds that DCS precipitation efficiency increases with SST at a rate of approx. 2%/K. Despite increasing rainfall efficiency, the cloud areal coverage rises with SST at a rate of about 7%/K in the warm tropical seas. There, the boundary layer moisture supply for deep convection and the moisture transported to the upper troposphere for cirrus-anvil cloud formation increase by approx. 6.3%/K and approx. 4.0%/K, respectively. The changes in cloud formation efficiency, along with the increased transport of moisture available for cloud formation, likely contribute to the large rate of increasing DCS areal coverage. Although no direct observations are available, the increase of cloud formation efficiency with rising SST is deduced indirectly from measurements of changes in the ratio of DCS ice water path and boundary layer water vapor amount with SST. Besides the cloud areal coverage, DCS cluster effective sizes also increase with precipitation. Furthermore, other cloud properties, such as cloud total water and ice water paths, increase with SST. These changes in DCS properties will produce a negative radiative feedback for the earth's climate system due to strong reflection of shortwave radiation by the DCS. These results significantly differ from some previous hypothesized dehydration scenarios for warmer climates, and have great potential in testing current cloud-system resolving models and convective parameterizations of general circulation models.

  6. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable from in-house solutions. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
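
    The abstract mentions client libraries that are API-compatible with HDF5; assuming the h5pyd package, which mirrors the h5py File interface against an HSDS endpoint, reading a subset might look like the sketch below. The domain path and dataset name are hypothetical.

    ```python
    # Hedged sketch: h5pyd talks to an HSDS endpoint instead of a local file, so only
    # the requested chunks are fetched from object storage.
    import h5pyd

    with h5pyd.File("/shared/earth/example_temperature.h5", "r") as f:   # HSDS "domain"
        dset = f["temperature"]            # hypothetical dataset name
        subset = dset[0:10, 0:10]          # partial read served from object storage
        print(dset.shape, subset.mean())
    ```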

  7. Recent Observations of Clouds and Precipitation by the Airborne Precipitation Radar 2nd Generation in Support of the GPM and ACE Missions

    NASA Technical Reports Server (NTRS)

    Durden, Stephen L.; Tanelli, Simone; Im, Eastwood

    2012-01-01

    In this paper we illustrate the unique dataset collected during the Global Precipitation Measurement Cold-season Precipitation Experiment (GCPEx, US/Canada Jan/Feb 2012). We will focus on the significance of these observations for the development of algorithms for GPM and ACE, with particular attention to classification and retrievals of frozen and mixed phase hydrometeors.

  8. Identification of the Rice Wines with Different Marked Ages by Electronic Nose Coupled with Smartphone and Cloud Storage Platform

    PubMed Central

    Wei, Zhebo; Xiao, Xize

    2017-01-01

    In this study, a portable electronic nose (E-nose) was self-developed to identify rice wines with different marked ages—all the operations of the E-nose were controlled by a special Smartphone Application. The sensor array of the E-nose comprised 12 MOS sensors and the obtained response values were transmitted to the Smartphone through a wireless communication module. Then, Aliyun worked as a cloud storage platform for the storage of responses and identification models. The measurement of the E-nose was composed of the taste information obtained phase (TIOP) and the aftertaste information obtained phase (AIOP). The area feature data obtained from the TIOP and the feature data obtained from the TIOP-AIOP were applied to identify rice wines by using pattern recognition methods. Principal component analysis (PCA), locally linear embedding (LLE) and linear discriminant analysis (LDA) were applied for the classification of those wine samples. LDA based on the area feature data obtained from the TIOP-AIOP proved a powerful tool and showed the best classification results. Partial least-squares regression (PLSR) and support vector machine (SVM) were applied for the predictions of marked ages and SVM (R2 = 0.9942) worked much better than PLSR. PMID:29088076

  9. Identification of the Rice Wines with Different Marked Ages by Electronic Nose Coupled with Smartphone and Cloud Storage Platform.

    PubMed

    Wei, Zhebo; Xiao, Xize; Wang, Jun; Wang, Hui

    2017-10-31

    In this study, a portable electronic nose (E-nose) was self-developed to identify rice wines with different marked ages-all the operations of the E-nose were controlled by a special Smartphone Application. The sensor array of the E-nose comprised 12 MOS sensors and the obtained response values were transmitted to the Smartphone through a wireless communication module. Then, Aliyun worked as a cloud storage platform for the storage of responses and identification models. The measurement of the E-nose was composed of the taste information obtained phase (TIOP) and the aftertaste information obtained phase (AIOP). The area feature data obtained from the TIOP and the feature data obtained from the TIOP-AIOP were applied to identify rice wines by using pattern recognition methods. Principal component analysis (PCA), locally linear embedding (LLE) and linear discriminant analysis (LDA) were applied for the classification of those wine samples. LDA based on the area feature data obtained from the TIOP-AIOP proved a powerful tool and showed the best classification results. Partial least-squares regression (PLSR) and support vector machine (SVM) were applied for the predictions of marked ages and SVM (R² = 0.9942) worked much better than PLSR.
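
    A compact sketch (synthetic sensor responses, not the authors' data) of the LDA classification and SVM regression steps described in the two records above, using scikit-learn:

    ```python
    # Sketch: LDA to classify marked ages and an SVM regressor to predict age in years.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    X = rng.random((300, 12))                  # 12 MOS-sensor features per sample
    age = rng.choice([1, 3, 5], size=300)      # hypothetical marked ages (years)
    X[:, 0] += 0.05 * age                      # inject a weak age signal for illustration

    X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.3, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print("LDA classification accuracy:", lda.score(X_te, y_te))

    svr = SVR(kernel="rbf").fit(X_tr, y_tr.astype(float))
    print("SVM regression R^2:", svr.score(X_te, y_te.astype(float)))
    ```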

  10. Study and simulation results for video landmark acquisition and tracking technology (Vilat-2)

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Tietz, J. C.; Thomas, H. M.; Gremban, K. D.; Hughes, C.; Chang, C. Y.

    1983-01-01

    The results of several investigations and hardware developments which supported new technology for Earth feature recognition and classification are described. Data analysis techniques and procedures were developed for processing the Feature Identification and Location Experiment (FILE) data. This experiment was flown in November 1981, on the second Shuttle flight and a second instrument, designed for aircraft flights, was flown over the United States in 1981. Ground tests were performed to provide the basis for designing a more advanced version (four spectral bands) of the FILE which would be capable of classifying clouds and snow (and possibly ice) as distinct features, in addition to the features classified in the Shuttle experiment (two spectral bands). The Shuttle instrument classifies water, bare land, vegetation, and clouds/snow/ice (grouped).

  11. The early-type strong emission-line supergiants of the Magellanic Clouds - A spectroscopic zoology

    NASA Technical Reports Server (NTRS)

    Shore, S. N.; Sanduleak, N.

    1984-01-01

    The results of a spectroscopic survey of 21 early-type extreme emission line supergiants of the Large and Small Magellanic Clouds using IUE and optical spectra are presented. The combined observations are discussed and the literature on each star in the sample is summarized. The classification procedures and the methods by which effective temperatures, bolometric magnitudes, and reddenings were assigned are discussed. The derived reddening values are given along with some results concerning anomalous reddening among the sample stars. The derived mass, luminosity, and radius for each star are presented, and the ultraviolet emission lines are described. Mass-loss rates are derived and discussed, and the implications of these observations for the evolution of the most massive stars in the Local Group are addressed.

  12. Results from the Two-Year Infrared Cloud Imager Deployment at ARM's NSA Observatory in Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Shaw, J. A.; Nugent, P. W.

    2016-12-01

    Ground-based longwave-infrared (LWIR) cloud imaging can provide continuous cloud measurements in the Arctic. This is of particular importance during the Arctic winter when visible wavelength cloud imaging systems cannot operate. This method uses a thermal infrared camera to observe clouds and produce measurements of cloud amount and cloud optical depth. The Montana State University Optical Remote Sensor Laboratory deployed an infrared cloud imager (ICI) at the Atmospheric Radiation Measurement North Slope of Alaska site at Barrow, AK from July 2012 through July 2014. This study was used to both understand the long-term operation of an ICI in the Arctic and to study the consistency of the ICI data products in relation to co-located active and passive sensors. The ICI was found to have a high correlation (> 0.92) with collocated cloud instruments and to produce an unbiased data product. However, the ICI also detects thin clouds that are not detected by most operational cloud sensors. Comparisons with high-sensitivity actively sensed cloud products confirm the existence of these thin clouds. Infrared cloud imaging systems can serve a critical role in developing our understanding of cloud cover in the Arctic by providing a continuous annual measurement of clouds at sites of interest.

  13. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  14. Machine Learning Applications on a Radar Wind Profiler Deployment During the ARM GoAmazon2014/5 Campaign

    NASA Astrophysics Data System (ADS)

    Giangrande, S. E.; WANG, D.; Hardin, J. C.; Mitchell, J.

    2017-12-01

    As part of the 2 year Department of Energy Atmospheric Radiation Measurement (ARM) Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) campaign, the ARM Mobile Facility (AMF) collected a unique set of observations in a region of strong climatic significance near Manacapuru, Brazil. An important example for the beneficial observational record obtained by ARM during this campaign was that of the Radar Wind Profiler (RWP). This dataset has been previously documented for providing critical convective cloud vertical air velocity retrievals and precipitation properties (e.g., calibrated reflectivity factor Z, rainfall rates) under a wide variety of atmospheric conditions. Vertical air motion estimates to within deep convective cores such as those available from this RWP system have been previously identified as critical constraints for ongoing global climate modeling activities and deep convective cloud process studies. As an extended deployment within this `green ocean' region, the RWP site and collocated AMF surface gauge instrumentation experienced a unique hybrid of tropical and continental precipitation conditions, including multiple wet and dry season precipitation regimes, convective and organized stratiform storm dynamics and contributions to rainfall accumulation, pristine aerosol conditions of the locale, as well as the effects of the Manaus, Brazil, mega city pollution plume. For hydrological applications and potential ARM products, machine learning methods developed using this dataset are explored to demonstrate advantages in geophysical retrievals when compared to traditional methods. Emphasis is on performance improvements when providing additional information on storm structure and regime or echo type classifications. Since deep convective cloud dynamic insights (core updraft/downdraft properties) are difficult to obtain directly by conventional radars that also observe radar reflectivity factor profiles similar to RWP systems, we also consider possible machine learning applications to inform on (statistical) proxy convective relationships between observed convective core dynamics and radar microphysical properties that are otherwise not easily related by clear physical process paths using existing radar networks.
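
    Purely as an illustration of the kind of machine-learning retrieval discussed above (all values synthetic; the feature choices and relationships are assumed, not taken from the campaign data), a Random Forest regression from simple profile features and an echo-type flag to surface rain rate:

    ```python
    # Illustrative sketch: relate column-mean/max reflectivity and an echo-type flag
    # to rain rate with a Random Forest regressor.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(7)
    mean_z = rng.uniform(10, 45, 1000)                     # column-mean reflectivity (dBZ)
    max_z  = mean_z + rng.uniform(0, 15, 1000)             # column-max reflectivity (dBZ)
    conv   = rng.integers(0, 2, 1000)                      # 0 = stratiform, 1 = convective
    rain   = 0.03 * 10 ** (max_z / 25.0) + 2.0 * conv + rng.normal(0, 0.5, 1000)  # mm/h

    X = np.column_stack([mean_z, max_z, conv])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rain)
    print("in-sample R^2:", model.score(X, rain))
    ```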

  15. Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh. PMID:23112653

  16. Simulations of Infrared Radiances Over a Deep Convective Cloud System Observed During TC4: Potential for Enhancing Nocturnal Ice Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Hong, Gang; Ayers, Kirk; Smith, William L., Jr.; Yost, Christopher R.; Heymsfield, Andrew J.; Heymsfield, Gerald M.; Hlavka, Dennis L.; King, Michael D.; Korn, Errol

    2012-01-01

    Retrievals of ice cloud properties using infrared measurements at 3.7, 6.7, 7.3, 8.5, 10.8, and 12.0 microns can provide consistent results regardless of solar illumination, but are limited to cloud optical thicknesses tau < approx. 6. This paper investigates the variations in radiances at these wavelengths over a deep convective cloud system for their potential to extend retrievals of tau and ice particle size D(sub e) to optically thick clouds. Measurements from the Moderate Resolution Imaging Spectroradiometer Airborne Simulator (MAS), the Scanning High-resolution Interferometer Sounder, the Cloud Physics Lidar (CPL), and the Cloud Radar System (CRS) aboard the NASA ER-2 aircraft during the NASA TC4 (Tropical Composition, Cloud and Climate Coupling) experiment flight of 5 August 2007 are used to examine the retrieval capabilities of infrared radiances over optically thick ice clouds. Simulations based on coincident in-situ measurements and combined cloud tau from CRS and CPL measurements are comparable to the observations. They reveal that brightness temperatures at these bands and their differences (BTDs) are sensitive to tau up to approx. 20 and that, for ice clouds having tau > 20, the 3.7 - 10.8 micron and 3.7 - 6.7 micron BTDs are the most sensitive to D(sub e). Satellite imagery appears consistent with these results. Keywords: clouds; optical depth; particle size; satellite; TC4; multispectral thermal infrared
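
    The BTDs discussed above are differences of brightness temperatures obtained by inverting the Planck function at each band's central wavelength. The sketch below shows that conversion and a 3.7-minus-10.8 micron difference; the constants are standard SI values, and the example radiances are invented for illustration rather than taken from the TC4 measurements or the study's retrieval code.

    ```python
    # Minimal sketch of how a brightness temperature difference (BTD) is
    # formed: invert the Planck function at each band's central wavelength to
    # convert spectral radiance to brightness temperature, then difference the
    # bands. Radiances below are made-up examples, not TC4 measurements.
    import math

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    K = 1.380649e-23     # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance (W m^-2 sr^-1 m^-1)."""
        return (2.0 * H * C**2 / wavelength_m**5 /
                (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0))

    def brightness_temperature(wavelength_m, radiance):
        """Invert the Planck function to get brightness temperature (K)."""
        return (H * C / (wavelength_m * K) /
                math.log(1.0 + 2.0 * H * C**2 / (wavelength_m**5 * radiance)))

    # Example: a scene radiating like a 230 K blackbody at 10.8 um but slightly
    # warmer at 3.7 um, loosely mimicking a thick ice cloud top.
    L_037 = planck_radiance(3.7e-6, 232.0)
    L_108 = planck_radiance(10.8e-6, 230.0)
    btd = (brightness_temperature(3.7e-6, L_037)
           - brightness_temperature(10.8e-6, L_108))
    print(f"BTD(3.7 - 10.8 um) = {btd:.2f} K")
    ```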

  17. Simulations of Infrared Radiances Over a Deep Convective Cloud System Observed During TC4- Potential for Enhancing Nocturnal Ice Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Hong, Gang; Ayers, Jeffrey Kirk; Smith, William L.; Yost, Christopher R.; Heymsfield, Andrew J.; Heymsfield, Gerald M.; Hlavka, Dennis L.; King, Michael D.; Korn, Errol M.

    2012-01-01

    Retrievals of ice cloud properties using infrared measurements at 3.7, 6.7, 7.3, 8.5, 10.8, and 12.0 microns can provide consistent results regardless of solar illumination, but are limited to cloud optical thicknesses tau < approx. 6. This paper investigates the variations in radiances at these wavelengths over a deep convective cloud system for their potential to extend retrievals of tau and ice particle size D(sub e) to optically thick clouds. Measurements from the Moderate Resolution Imaging Spectroradiometer Airborne Simulator (MAS), the Scanning High-resolution Interferometer Sounder, the Cloud Physics Lidar (CPL), and the Cloud Radar System (CRS) aboard the NASA ER-2 aircraft during the NASA TC4 (Tropical Composition, Cloud and Climate Coupling) experiment flight of 5 August 2007 are used to examine the retrieval capabilities of infrared radiances over optically thick ice clouds. Simulations based on coincident in-situ measurements and combined cloud tau from CRS and CPL measurements are comparable to the observations. They reveal that brightness temperatures at these bands and their differences (BTDs) are sensitive to tau up to approx. 20 and that, for ice clouds having tau > 20, the 3.7 - 10.8 micron and 3.7 - 6.7 micron BTDs are the most sensitive to D(sub e). Satellite imagery appears consistent with these results. Keywords: clouds; optical depth; particle size; satellite; TC4; multispectral thermal infrared

  18. Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Babacan, K.; Chen, L.; Sohn, G.

    2017-11-01

    As Building Information Modelling (BIM) thrives, geometry alone is no longer sufficient; an ever-increasing variety of semantic information is needed to express an indoor model adequately. On the other hand, for existing buildings, automatically generating semantically enriched BIM from point cloud data is in its infancy. Previous research to enhance the semantic content relies on frameworks in which specific rules and/or features are hand-coded by specialists. These methods inherently lack generalization and easily break in different circumstances. On this account, a generalized framework is urgently needed to automatically and accurately generate semantic information. Therefore, we propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the large number of training samples needed to train a convolutional neural network architecture. Feedforward propagation is used to perform classification at the voxel level, achieving semantic segmentation. The method is tested both on a mobile laser scanner point cloud and on larger-scale synthetically generated data. We also demonstrate a case study in which our method can be effectively used to leverage the extraction of planar surfaces in challenging cluttered indoor environments.
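
    The abstract describes voxelising point clouds and classifying at the voxel level with a convolutional neural network. The PyTorch sketch below shows one plausible shape of such a pipeline: a binary occupancy grid built from points, and a small 3D CNN that classifies fixed-size occupancy patches. The class list, patch size, and layer configuration are assumptions for illustration, not the authors' architecture.

    ```python
    # Hedged sketch (not the paper's network): voxelise a point cloud into a
    # binary occupancy grid, then classify fixed-size occupancy patches centred
    # on voxels with a small 3D CNN. Classes and sizes are illustrative.
    import numpy as np
    import torch
    import torch.nn as nn

    N_CLASSES = 4  # e.g. floor, wall, furniture, clutter (illustrative labels)
    PATCH = 16     # voxels per side of a classification patch

    def voxelize(points, voxel_size=0.05):
        """Map an (N, 3) point cloud to a dense binary occupancy grid."""
        idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=np.float32)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
        return grid

    class VoxelPatchCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
            )
            self.classifier = nn.Linear(32 * (PATCH // 4) ** 3, N_CLASSES)

        def forward(self, x):            # x: (batch, 1, PATCH, PATCH, PATCH)
            h = self.features(x)
            return self.classifier(h.flatten(start_dim=1))

    # Example forward pass on random patches standing in for training samples.
    model = VoxelPatchCNN()
    patches = torch.rand(8, 1, PATCH, PATCH, PATCH)
    logits = model(patches)              # (8, N_CLASSES) class scores per patch
    print(logits.shape)
    ```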

  19. Health Information System in a Cloud Computing Context.

    PubMed

    Sadoughi, Farahnaz; Erfannia, Leila

    2017-01-01

    Healthcare as a worldwide industry is experiencing a period of growth based on health information technology. The capabilities of cloud systems make them an option for advancing eHealth goals. The main objective of the present study was to evaluate the advantages and limitations of implementing health information systems in a cloud-computing context; it was conducted as a systematic review in 2016. ScienceDirect, Scopus, Web of Science, IEEE, PubMed, and Google Scholar were searched according to the study criteria. Among 308 articles initially found, 21 articles entered the final analysis. All the studies considered cloud computing a positive tool to help advance health technology, but none dwelt at length on its limitations and threats. Electronic health record systems have been studied mostly in the fields of implementation, design, and presentation of models and prototypes. According to this research, the main advantages of cloud-based health information systems can be categorized into the following groups: economic benefits and advantages of information management. The main limitations of implementing cloud-based health information systems can be categorized into four groups: security, legal, technical, and human restrictions. Compared to earlier studies, the present research had the advantage of dealing with the issue of health information systems on a cloud platform. The high frequency of studies on the implementation of cloud-based health information systems reveals the health industry's interest in applying this technology. Security was discussed in most studies because of the sensitivity of health information. In this investigation, some mechanisms and solutions concerning these systems were discussed, providing a suitable area for future scientific research on this issue. The limitations and solutions discussed in this systematic study should help healthcare managers and decision-makers take better and more efficient advantage of this technology and plan better for adopting cloud-based health information systems.

  20. Ice Cloud Properties in Ice-Over-Water Cloud Systems Using TRMM VIRS and TMI Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Huang, Jianping; Lin, Bing; Yi, Yuhong; Arduini, Robert F.; Fan, Tai-Fang; Ayers, J. Kirk; Mace, Gerald G.

    2007-01-01

    A multi-layered cloud retrieval system (MCRS) is updated and used to estimate ice water path in maritime ice-over-water clouds using Visible and Infrared Scanner (VIRS) and TRMM Microwave Imager (TMI) measurements from the Tropical Rainfall Measuring Mission spacecraft between January and August 1998. Lookup tables of top-of-atmosphere 0.65-µm reflectance are developed for ice-over-water cloud systems using radiative transfer calculations with various combinations of ice-over-water cloud layers. The liquid and ice water paths, LWP and IWP, respectively, are determined with the MCRS using these lookup tables with a combination of microwave (MW), visible (VIS), and infrared (IR) data. LWP, determined directly from the TMI MW data, is used to define the lower-level cloud properties to select the proper lookup table. The properties of the upper-level ice clouds, such as optical depth and effective size, are then derived using the Visible Infrared Solar-infrared Split-window Technique (VISST), which matches the VIRS IR, 3.9-µm, and VIS data to the multilayer-cloud lookup table reflectances and a set of emittance parameterizations. Initial comparisons with surface-based radar retrievals suggest that this enhanced MCRS can significantly improve the accuracy and decrease the IWP in overlapped clouds by 42% and 13% compared to using the single-layer VISST and an earlier simplified MW-VIS-IR (MVI) differencing method, respectively, for ice-over-water cloud systems. The tropical distribution of ice-over-water clouds is the same as derived earlier from combined TMI and VIRS data, but the new values of IWP and optical depth are slightly larger than the older MVI values, and exceed those of single-layered clouds by 7% and 11%, respectively. The mean IWP from the MCRS is 8-14% greater than that retrieved from radar retrievals of overlapped clouds over two surface sites, and the standard deviations of the differences are similar to those for single-layered clouds. Examples of a method for applying the MCRS over land without microwave data yield similar differences with the surface retrievals. By combining the MCRS with other techniques that focus primarily on optically thin cirrus over low water clouds, it will be possible to more fully assess the IWP in all conditions over ocean except for precipitating systems.
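
    A minimal sketch of the lookup-table matching step described above, under stated assumptions: the microwave-derived LWP selects a lower-cloud table, and the upper ice cloud's optical depth and effective size are taken from the grid point whose modelled visible reflectance and infrared brightness temperature best match the observations. The toy forward model and table values are placeholders, not the MCRS or VISST lookup tables.

    ```python
    # Illustrative lookup-table matching for a multilayer retrieval. The
    # "forward model" and grids below are placeholders, not the MCRS LUTs.
    import numpy as np

    tau_grid = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # ice optical depth
    de_grid = np.array([20.0, 40.0, 60.0, 80.0])            # D_e (micrometres)

    def build_lut(lwp):
        """Placeholder table of (VIS reflectance, 10.8-um BT) per (tau, De)."""
        tau, de = np.meshgrid(tau_grid, de_grid, indexing="ij")
        refl = 1.0 - np.exp(-0.15 * (tau + 0.02 * lwp))   # brighter with tau
        bt = 290.0 - 25.0 * np.log1p(tau) + 0.05 * de      # colder with tau
        return refl, bt

    def retrieve(obs_refl, obs_bt, lwp):
        """Pick the (tau, De) grid point closest to the observations."""
        refl, bt = build_lut(lwp)
        cost = ((refl - obs_refl) / 0.05) ** 2 + ((bt - obs_bt) / 1.0) ** 2
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        return tau_grid[i], de_grid[j]

    tau_hat, de_hat = retrieve(obs_refl=0.85, obs_bt=225.0, lwp=80.0)
    print(f"retrieved tau = {tau_hat}, De = {de_hat} um")
    ```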
