An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
2003-01-01
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
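The hysteresis constraint described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the in/out state flips only after the raw classification has held the opposite value for `min_run` consecutive samples (the run length is an assumed parameter), so isolated cloud puffs or data dropouts do not register as edges.

```python
def detect_edges(in_cloud_raw, min_run=3):
    """Return indices where the hysteresis-filtered in/out state changes.

    in_cloud_raw: sequence of 0/1 raw in-cloud classifications per sample.
    min_run: samples the opposite state must persist before an edge is accepted.
    """
    state = in_cloud_raw[0]
    run = 0
    edges = []
    for i, raw in enumerate(in_cloud_raw):
        if raw != state:
            run += 1
            if run >= min_run:               # opposite state persisted: accept edge
                state = raw
                edges.append(i - min_run + 1)  # edge located at start of the run
                run = 0
        else:
            run = 0                          # transient ended; reset the counter
    return edges
```

For example, a single-sample cloud puff inside a clear stretch produces no edge, while a sustained transition does.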
Automated, per pixel Cloud Detection from High-Resolution VNIR Data
NASA Technical Reports Server (NTRS)
Varlyguin, Dmitry L.
2007-01-01
CASA is a fully automated software program for the per-pixel detection of clouds and cloud shadows from medium- (e.g., Landsat, SPOT, AWiFS) and high- (e.g., IKONOS, QuickBird, OrbView) resolution imagery without the use of thermal data. CASA is an object-based feature extraction program which utilizes a complex combination of spectral, spatial, and contextual information available in the imagery and the hierarchical self-learning logic for accurate detection of clouds and their shadows.
Automated Detection of Clouds in Satellite Imagery
NASA Technical Reports Server (NTRS)
Jedlovec, Gary
2010-01-01
Many different approaches have been used to automatically detect clouds in satellite imagery. Most approaches are deterministic and provide a binary cloud/no-cloud product used in a variety of applications. Some of these applications require the identification of cloudy pixels for cloud parameter retrieval, while others require only an ability to mask out clouds for the retrieval of surface or atmospheric parameters in the absence of clouds. A few approaches estimate a probability of the presence of a cloud at each point in an image. These probabilities allow a user to select cloud information based on the tolerance of the application to uncertainty in the estimate. Many automated cloud detection techniques develop sophisticated tests using a combination of visible and infrared channels to determine the presence of clouds in both day and night imagery. Visible channels are quite effective in detecting clouds during the day, as long as test thresholds properly account for variations in surface features and atmospheric scattering. Cloud detection at night is more challenging, since only coarser-resolution infrared measurements are available. A few schemes use just two infrared channels for day and night cloud detection. The most influential factor in the success of a particular technique is the determination of the thresholds for each cloud test. The techniques which perform the best usually have thresholds that are varied based on the geographic region, time of year, time of day and solar angle.
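The kind of combined visible/infrared threshold test described above can be sketched as follows. The threshold values and the day/night branching here are illustrative assumptions, not those of any specific operational algorithm; in practice the thresholds would be looked up per region, season, time of day and solar angle.

```python
def cloud_test(vis_reflectance, ir_temp_k, is_day, thresholds):
    """Flag a pixel as cloudy if any enabled threshold test fires."""
    # Visible test: bright pixels are likely cloud, but only usable in daylight.
    if is_day and vis_reflectance > thresholds["vis_max"]:
        return True
    # Infrared test: cold pixels are likely cloud tops, usable day and night.
    if ir_temp_k < thresholds["ir_cold_k"]:
        return True
    return False

# A single fixed set stands in for the region/season/solar-angle lookup.
thresholds = {"vis_max": 0.35, "ir_cold_k": 265.0}
```

Note how a bright pixel that would pass the visible test by day goes undetected at night unless it is also cold, matching the text's point that nighttime detection must lean on the infrared channels.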
NASA Astrophysics Data System (ADS)
Forster, Linda; Seefeldner, Meinhard; Wiegner, Matthias; Mayer, Bernhard
2017-07-01
Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays was produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Michael J.; Hayes, Daniel J
2014-01-01
Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.
Person detection and tracking with a 360° lidar system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2017-10-01
Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms, or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. This paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.
Algorithm for Automated Detection of Edges of Clouds
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.
2006-01-01
An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.
NASA Astrophysics Data System (ADS)
Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.
1996-12-01
Quantitative assessment of the performance of automated cloud analysis algorithms requires the creation of highly accurate, manual cloud/no-cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated, since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, the capability offered by the daytime 3.7 micron data may be lost if these data switch to a nighttime-only transmission with the launch of future NOAA satellites.
NASA Astrophysics Data System (ADS)
Watmough, Gary R.; Atkinson, Peter M.; Hutton, Craig W.
2011-04-01
The automated cloud cover assessment (ACCA) algorithm has provided automated estimates of cloud cover for the Landsat ETM+ mission since 2001. However, due to the lack of a band around 1.375 μm, cloud edges and transparent clouds such as cirrus cannot be detected. Use of Landsat ETM+ imagery for terrestrial land analysis is further hampered by the relatively long revisit period due to a nadir only viewing sensor. In this study, the ACCA threshold parameters were altered to minimise omission errors in the cloud masks. Object-based analysis was used to reduce the commission errors from the extended cloud filters. The method resulted in the removal of optically thin cirrus cloud and cloud edges which are often missed by other methods in sub-tropical areas. Although not fully automated, the principles of the method developed here provide an opportunity for using otherwise sub-optimal or completely unusable Landsat ETM+ imagery for operational applications. Where specific images are required for particular research goals the method can be used to remove cloud and transparent cloud helping to reduce bias in subsequent land cover classifications.
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
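The RANSAC refinement of image matches mentioned above can be illustrated with a deliberately minimal variant: estimating a 2D translation between matched keypoint pairs while rejecting outlier matches. Real pipelines fit richer transforms and fold in vehicle trajectory information; the tolerance, iteration count and one-point hypothesis here are simplifying assumptions.

```python
import random

def ransac_translation(matches, tol=1.0, iters=100, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) keypoint pairs.

    Returns the best (dx, dy) translation hypothesis and its inlier matches.
    """
    rng = random.Random(seed)
    best_shift, best_inliers = None, []
    for _ in range(iters):
        # One-point hypothesis: a single match fully determines a translation.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Count matches consistent with this translation within tolerance.
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) <= tol
                   and abs((m[1][1] - m[0][1]) - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers
```

The surviving inlier set is what would then seed an ICP registration, as the abstract describes.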
Automated detection of Martian water ice clouds: the Valles Marineris
NASA Astrophysics Data System (ADS)
Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu
2016-10-01
We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations of water ice cloud occurrence and to understand the climatology of water ice clouds meteorologically. However, the visible images observed by Mars orbiters over several years are too numerous to inspect visually, even when the inspection is limited to one region. Therefore, an automated detection algorithm for Martian water ice clouds is necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not previously been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated, as were those of the cross-correlation distribution. These eight statistics were used as feature vectors for training a Support Vector Machine, and its generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 if the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.
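The eight-statistic feature vector described above can be sketched in plain Python: the difference between the maximum and the average, the variance, the kurtosis, and the skewness are computed for both the subtracted image and the cross-correlation distribution (each flattened to a list of values here). The SVM training and cross-validation themselves are omitted; this only illustrates the feature extraction step.

```python
def stats4(values):
    """Max-minus-mean, variance, kurtosis and skewness of a flat list of values."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    # Standardized moments; degenerate (constant) inputs yield zeros.
    skew = sum(((v - mean) / std) ** 3 for v in values) / n if std else 0.0
    kurt = sum(((v - mean) / std) ** 4 for v in values) / n if std else 0.0
    return [max(values) - mean, var, kurt, skew]

def feature_vector(subtracted_image, cross_correlation):
    """Concatenate the four statistics of each input into one 8-element vector."""
    return stats4(subtracted_image) + stats4(cross_correlation)
```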
Cloud detection algorithm comparison and validation for operational Landsat data products
Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady
2017-01-01
Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. 
The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate nonthermal-based algorithm. We give preference to CFMask for operational cloud and cloud shadow detection, as it is derived from a priori knowledge of physical phenomena and is operable without geographic restriction, making it useful for current and future land imaging missions without having to be retrained in a machine-learning environment.
NASA Astrophysics Data System (ADS)
Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo
2016-10-01
Cloud detection in satellite imagery is very important for quantitative remote sensing research and applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate- to high-spatial-resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), have only four available bands (blue, green, red, and near-infrared), which fall far short of the requirements of most cloud detection methods. To solve this problem, an improved and automated cloud detection method for Chinese satellite sensors, called OCM (Object-oriented Cloud and cloud-shadow Matching), is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to obtain initial cloud maps. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a cloud map with high confidence but a large amount of cloud omission, while a loose threshold produces a cloud map with low confidence and a large amount of commission. Second, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Third, the cloud maps and cloud-shadow map are converted into cloud objects and cloud-shadow objects. Because cloud and cloud-shadow usually occur in pairs, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.
NASA Astrophysics Data System (ADS)
Tokuyama, Sekito; Oka, Tomoharu; Takekawa, Shunya; Yamada, Masaya; Iwata, Yuhei; Tsujimoto, Shiho
2017-01-01
High-velocity compact clouds (HVCCs) are a population of peculiar clouds detected in the Central Molecular Zone (CMZ) of our Galaxy. They have compact appearances (< 5 pc) and large velocity widths (> 50 km s-1). Several explanations for the origin of HVCCs have been proposed, e.g., a series of supernova (SN) explosions (Oka et al. 1999) or a gravitational kick by a point-like gravitational source (Oka et al. 2016). To investigate the statistical properties of HVCCs, a complete list of them is acutely needed. However, the previous list is not complete, since the identification procedure included both automated processes and manual selection (Nagai 2008). Here we develop an automated procedure to identify HVCCs in spectral line data.
Technique for ship/wake detection
Roskovensky, John K. (Albuquerque, NM)
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
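The two masking rules in the description above can be sketched as follows. This is a minimal illustration, not the patented implementation: the flatness tolerance, neighborhood size and clear-neighbor count are assumed values, and "spectral flatness" is reduced to the max-min range across a pixel's bands.

```python
def flat_mask(bands_per_pixel, flatness_tol=0.05):
    """True where a pixel's reflectance spectrum is flat (masked as cloud/land)."""
    return [max(b) - min(b) < flatness_tol for b in bands_per_pixel]

def unmask_isolated(mask, width, min_clear_neighbors=3):
    """Unmask a masked pixel when enough of its 8-neighbors are unmasked."""
    height = len(mask) // width
    out = list(mask)
    for i, masked in enumerate(mask):
        if not masked:
            continue
        r, c = divmod(i, width)
        clear = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < height and 0 <= cc < width:
                    clear += not mask[rr * width + cc]
        if clear >= min_clear_neighbors:
            out[i] = False   # mostly-clear neighborhood: restore the pixel
    return out
```

The remaining unmasked pixels would then feed the spatial-variability step that highlights ship candidates.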
Automated cloud and shadow detection and filling using two-date Landsat imagery in the United States
Jin, Suming; Homer, Collin G.; Yang, Limin; Xian, George; Fry, Joyce; Danielson, Patrick; Townsend, Philip A.
2013-01-01
A simple, efficient, and practical approach for detecting cloud and shadow areas in satellite imagery and restoring them with clean pixel values has been developed. Cloud and shadow areas are detected using spectral information from the blue, shortwave infrared, and thermal infrared bands of Landsat Thematic Mapper or Enhanced Thematic Mapper Plus imagery from two dates (a target image and a reference image). These detected cloud and shadow areas are further refined using an integration process and a false shadow removal process according to the geometric relationship between cloud and shadow. Cloud and shadow filling is based on the concept of the Spectral Similarity Group (SSG), which uses the reference image to find similar alternative pixels in the target image to serve as replacement values for restored areas. Pixels are considered to belong to one SSG if the pixel values from Landsat bands 3, 4, and 5 in the reference image are within the same spectral ranges. This new approach was applied to five Landsat path/rows across different landscapes and seasons with various types of cloud patterns. Results show that almost all of the clouds were captured with minimal commission errors, and shadows were detected reasonably well. Among five test scenes, the lowest producer's accuracy of cloud detection was 93.9% and the lowest user's accuracy was 89%. The overall cloud and shadow detection accuracy ranged from 83.6% to 99.3%. The pixel-filling approach resulted in a new cloud-free image that appears seamless and spatially continuous despite differences in phenology between the target and reference images. Our methods offer a straightforward and robust approach for preparing images for the new 2011 National Land Cover Database production.
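The Spectral Similarity Group idea above can be sketched as a toy gap-filling routine: pixels whose reference-image band values fall into the same spectral bins form one group, and a cloud-contaminated target pixel is filled with the mean of clear target pixels from its group. The bin width and two-band pixels are assumptions for illustration; the paper bins Landsat bands 3, 4, and 5.

```python
def ssg_fill(target, reference, contaminated, bin_width=0.1):
    """target, reference: lists of per-pixel band tuples; contaminated: set of indices."""
    def group_key(bands):
        # Quantize each band value so pixels in the same spectral range share a key.
        return tuple(int(v / bin_width) for v in bands)

    # Collect clear target values per spectral group of the reference image.
    groups = {}
    for i, bands in enumerate(reference):
        if i not in contaminated:
            groups.setdefault(group_key(bands), []).append(target[i])

    filled = list(target)
    for i in contaminated:
        members = groups.get(group_key(reference[i]))
        if members:  # average the clear pixels belonging to the same SSG
            filled[i] = tuple(sum(band) / len(members) for band in zip(*members))
    return filled
```

A contaminated pixel with no clear group members simply keeps its original value, mirroring the fact that restoration is only possible where spectrally similar clear pixels exist.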
An Automated Road Roughness Detection from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Kumar, P.; Angelats, E.
2017-05-01
Rough roads affect the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along a route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
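The Otsu thresholding step named above can be illustrated with a single-level version (the paper uses multi-level Otsu plus morphology, which this simplifies). Otsu's criterion picks the threshold that maximizes the between-class variance of the two resulting groups of intensity values.

```python
def otsu_threshold(values):
    """Single-level Otsu: split `values` to maximize between-class variance."""
    best_t, best_var = None, -1.0
    for t in sorted(set(values))[:-1]:       # every candidate split point
        g0 = [v for v in values if v <= t]
        g1 = [v for v in values if v > t]
        m0 = sum(g0) / len(g0)
        m1 = sum(g1) / len(g1)
        # Between-class variance, up to a constant factor: w0 * w1 * (m0 - m1)^2.
        var = len(g0) * len(g1) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal set of intensities the chosen threshold falls between the two modes, which is what separates candidate roughness regions from the smooth surface.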
Road traffic sign detection and classification from mobile LiDAR point clouds
NASA Astrophysics Data System (ADS)
Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng
2016-03-01
Traffic signs are important roadway assets that provide valuable information about the road, helping drivers adopt safer and easier driving behaviors. Due to the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on the geometric shape and the pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
Improvements in AVHRR Daytime Cloud Detection Over the ARM NSA Site
NASA Technical Reports Server (NTRS)
Chakrapani, V.; Spangenberg, D. A.; Doelling, D. R.; Minnis, P.; Trepte, Q. Z.; Arduini, R. F.
2001-01-01
Clouds play an important role in the radiation budget over the Arctic and Antarctic. Because of limited surface observing capabilities, it is necessary to detect clouds over large areas using satellite imagery. At low and mid-latitudes, satellite-observed visible (VIS; 0.65 micrometers) and infrared (IR; 11 micrometers) radiance data are used to derive cloud fraction, temperature, and optical depth. However, the extreme variability in the VIS surface albedo makes the detection of clouds from satellite a difficult process in polar regions. The IR data often show that the surface is nearly the same temperature as, or even colder than, clouds, further complicating cloud detection. Also, the boundary layer can have large areas of haze, thin fog, or diamond dust that are not seen in standard satellite imagery. Other spectral radiances measured by satellite imagers provide additional information that can be used to more accurately discriminate clouds from snow and ice. Most techniques currently use a fixed reflectance or temperature threshold to decide between clouds and clear snow. Using a subjective approach, Minnis et al. (2001) found that the clear snow radiance signatures vary as a function of viewing and illumination conditions as well as snow condition. To routinely process satellite imagery over polar regions with an automated algorithm, it is necessary to account for this angular variability and the change in the background reflectance as snow melts, vegetation grows over land, and melt ponds form on pack ice. This paper documents the initial satellite-based cloud product over the Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) site at Barrow for use by the modeling community. Cloud amount and height are determined subjectively using an adaptation of the methodology of Minnis et al. (2001), and the radiation fields are determined following the methods of Doelling et al.
(2001) as applied to data taken during the Surface Heat and Energy Budget of the Arctic (SHEBA). The procedures and data produced in this empirically based analysis will also facilitate the development of the automated algorithm for future processing of satellite data over the ARM NSA domain. Results are presented for May, June, and July 1998. ARM surface data are used to partially validate the results taken directly over the ARM site.
Evaluation of Decision Trees for Cloud Detection from AVHRR Data
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Nemani, Ramakrishna
2005-01-01
Automated cloud detection and tracking is an important step in assessing changes in radiation budgets associated with global climate change via remote sensing. Data products based on satellite imagery are available to the scientific community for studying trends in the Earth's atmosphere. The data products include pixel-based cloud masks that assign cloud-cover classifications to pixels. Many cloud-mask algorithms have the form of decision trees. The decision trees employ sequential tests that scientists designed based on empirical astrophysics studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In a previous study we compared automatically learned decision trees to cloud masks included in Advanced Very High Resolution Radiometer (AVHRR) data products from the year 2000. In this paper we report the replication of the study for five years of data, and for a gold standard based on surface observations performed by scientists at weather stations in the British Isles. For our sample data, the accuracy of automatically learned decision trees was greater than the accuracy of the cloud masks (p < 0.001).
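The contrast the abstract draws, between hand-designed sequential tests and automatically learned decision trees, can be illustrated in miniature with a one-node learned tree (a decision stump): instead of a scientist fixing the threshold, the split is chosen from labeled training pixels to minimize misclassification. A full tree learner recurses on the two halves; this sketch stops at one split.

```python
def learn_stump(values, labels):
    """Learn a single-split classifier from one feature and binary labels.

    Returns (threshold, predict) minimizing training error, where predict
    maps a feature value to True (cloudy) or False (clear).
    """
    best = (None, len(labels) + 1, True)   # (threshold, errors, cloudy_above)
    for t in sorted(set(values)):
        for cloudy_above in (True, False):  # try both orientations of the split
            pred = [(v > t) == cloudy_above for v in values]
            errs = sum(p != l for p, l in zip(pred, labels))
            if errs < best[1]:
                best = (t, errs, cloudy_above)
    t, _, cloudy_above = best
    return t, lambda v: (v > t) == cloudy_above
```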
NASA Astrophysics Data System (ADS)
Bonev, George; Gladkova, Irina; Grossberg, Michael; Romanov, Peter; Helfrich, Sean
2016-09-01
The ultimate objective of this work is to improve characterization of the ice cover distribution in the polar areas, to improve sea ice mapping, and to develop a new automated real-time high-spatial-resolution multi-sensor ice extent and ice edge product for use in operational applications. Despite a large number of currently available automated satellite-based sea ice extent datasets, analysts at the National Ice Center tend to rely on original satellite imagery (provided by optical, passive microwave, and active microwave sensors), mainly because the automated products derived from satellite optical data have gaps in area coverage due to clouds and darkness, passive microwave products have poor spatial resolution, and automated ice identification based on radar data is not fully reliable owing to the considerable difficulty of discriminating between ice cover and ice-free ocean surface roughened by winds. We have developed a multisensor algorithm that first extracts maximum information on the sea ice cover from the imaging instruments VIIRS and MODIS, including regions covered by thin, semitransparent clouds, then supplements the output with microwave measurements, and finally aggregates the results into a cloud-gap-free daily product. This ability to identify ice cover underneath thin clouds, which is usually masked out by traditional cloud detection algorithms, allows for expansion of the effective coverage of the sea ice maps and thus more accurate and detailed delineation of the ice edge. We have also developed a web-based monitoring system that allows comparison of our daily ice extent product with several other independent operational daily products.
NASA Technical Reports Server (NTRS)
Shiffman, Smadar
2004-01-01
Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products based on data acquired on board Earth satellites. Many cloud-mask algorithms take the form of decision trees, which employ sequential tests that scientists designed based on empirical studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8-km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
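The sequential threshold tests such a cloud-mask decision tree performs can be sketched in a few lines. The channel names and threshold values below are invented for illustration; they are not those of the AVHRR product or of the learned trees.

```python
# Illustrative hand-built cloud-mask decision tree: a chain of sequential
# threshold tests, in the spirit of the abstracts above. All field names
# and thresholds are hypothetical placeholders.

def cloud_mask(pixel):
    """Classify one pixel as 'cloud' or 'clear' with sequential tests."""
    # Test 1: bright in the visible channel suggests cloud
    if pixel["vis_reflectance"] > 0.35:
        # Test 2: a cold 11-um brightness temperature confirms cloud
        if pixel["bt_11um"] < 270.0:
            return "cloud"
        # Bright but warm (e.g. desert, snow): check a split-window difference
        if pixel["bt_11um"] - pixel["bt_12um"] > 2.5:
            return "cloud"
        return "clear"
    # Dark pixels: rely on the thermal test alone
    if pixel["bt_11um"] < 250.0:
        return "cloud"
    return "clear"

print(cloud_mask({"vis_reflectance": 0.5, "bt_11um": 260.0, "bt_12um": 259.0}))  # cloud
```

A learned tree has the same structure; the learning procedure simply chooses the split variables and thresholds from labeled data instead of expert judgment.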
Automated Detection and Closing of Holes in Aerial Point Clouds Using a UAS
NASA Astrophysics Data System (ADS)
Fiolka, T.; Rouatbi, F.; Bender, D.
2017-08-01
3D terrain models are an important instrument in areas like geology, agriculture, and reconnaissance. An automated UAS with a line-based LiDAR can create terrain models quickly and easily, even over large areas. However, the resulting point cloud may contain holes and therefore be incomplete. This can happen due to occlusions, a flight route missed because of wind, or simply changes in the ground height that alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes so that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.
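The grid-based ground-plane search can be illustrated with a minimal sketch: bin the returns into an XY grid and report cells inside the surveyed extent that received no points. The cell size and the sample points are invented for illustration.

```python
# Minimal sketch of a grid-based hole search over a point cloud's ground
# plane. Empty cells inside the XY bounding box are candidate holes that
# the UAS could re-fly. Cell size and test points are illustrative.

def find_holes(points, cell=1.0):
    """Return grid cells (ix, iy) inside the cloud's XY extent with no points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    nx = int((max(xs) - min_x) / cell) + 1
    ny = int((max(ys) - min_y) / cell) + 1
    occupied = {(int((x - min_x) / cell), int((y - min_y) / cell))
                for x, y, *_ in points}
    return sorted((ix, iy) for ix in range(nx) for iy in range(ny)
                  if (ix, iy) not in occupied)

# A 3 m x 3 m patch scanned at 1 m spacing, with the centre point missing:
pts = [(x, y, 0.0) for x in range(3) for y in range(3) if (x, y) != (1, 1)]
print(find_holes(pts))  # [(1, 1)]
```

A real implementation would additionally mask cells outside the flight polygon so that the concave boundary of the survey is not reported as one giant hole.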
Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds
NASA Astrophysics Data System (ADS)
Thiele, S.; Grose, L.; Micklethwaite, S.
2016-12-01
UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer-assisted trace detection in dense point clouds, using a lowest-cost path solver to "follow" fracture traces and lithological contacts between user-defined end points. This is achieved by defining a local neighbourhood network in which each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best-fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan to implement this approach in an interactive graphical user environment.
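The least-cost path idea can be sketched as a Dijkstra search over a k-nearest-neighbour graph. Here the edge cost is plain Euclidean distance and the points are a toy 2D example; a real trace detector would fold radiometric and geometric terms into the cost so the path prefers fracture-like points.

```python
import heapq
import math

# Sketch of the least-cost path solver described above: link each point to
# its k nearest neighbours, weight edges by distance, and run Dijkstra
# between two user-picked end points.

def knn_graph(points, k):
    """Directed k-nearest-neighbour graph: node -> [(distance, neighbour)]."""
    graph = {}
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        graph[i] = dists[:k]
    return graph

def least_cost_path(points, start, goal, k=2):
    """Indices of the cheapest start-to-goal path, or None if unreachable."""
    graph = knn_graph(points, k)
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:  # reconstruct the path by walking predecessors
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if cost > best.get(node, math.inf):
            continue
        for d, nbr in graph[node]:
            new_cost = cost + d
            if new_cost < best.get(nbr, math.inf):
                best[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    return None

pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
print(least_cost_path(pts, 0, 2))  # [0, 1, 2]
```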
Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania
2014-01-01
We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorsen, Tyler J.; Fu, Qiang; Newsom, Rob K.
A Feature detection and EXtinction retrieval (FEX) algorithm for the Atmospheric Radiation Measurement (ARM) program's Raman lidar (RL) has been developed. Presented here is part 1 of the FEX algorithm: the detection of features, including both clouds and aerosols. The approach of FEX is to use multiple quantities: scattering ratios derived using elastic and nitrogen channel signals from two fields of view, the scattering ratio derived using only the elastic channel, and the total volume depolarization ratio. These are used to identify features via range-dependent detection thresholds. FEX is designed to be context-sensitive, with thresholds determined for each profile by calculating the expected clear-sky signal and noise. The use of multiple quantities provides complementary depictions of cloud and aerosol locations and allows for consistency checks to improve the accuracy of the feature mask. The depolarization ratio is shown to be particularly effective at detecting optically thin features containing non-spherical particles such as cirrus clouds. Improvements over the existing ARM RL cloud mask are shown. The performance of FEX is validated against a collocated micropulse lidar and observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite over the ARM Darwin, Australia, site. While we focus on a specific lidar system, the FEX framework presented here is suitable for other Raman or high-spectral-resolution lidars.
NASA Astrophysics Data System (ADS)
Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi
2017-04-01
Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, it is necessary to match the point clouds of multiple scans to yield a point cloud with automated processing. Mismatches between datasets will lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, as well as greatly increasing the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed in this study exploits the natural geometric characteristics among a set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and the root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances the efficiency.
Results from Automated Cloud and Dust Devil Detection Onboard the MER
NASA Technical Reports Server (NTRS)
Chien, Steve; Castano, Rebecca; Bornstein, Benjamin; Fukunaga, Alex; Castano, Andres; Biesiadecki, Jeffrey; Greeley, Ron; Whelley, Patrick; Lemmon, Mark
2008-01-01
We describe a new capability to automatically detect dust devils and clouds in imagery onboard rovers, enabling downlink of just the images with the targets or only portions of the images containing the targets. Previously, the MER rovers conducted campaigns to image dust devils and clouds by commanding a set of images be collected at fixed times and downloading the entire image set. By increasing the efficiency of the campaigns, more campaigns can be executed. Software for these new capabilities was developed, tested, integrated, uploaded, and operationally checked out on both rovers as part of the R9.2 software upgrade. In April 2007 on Sol 1147 a dust devil was automatically detected onboard the Spirit rover for the first time. We discuss the operational usage of the capability and present initial dust devil results showing how this preliminary application has demonstrated the feasibility and potential benefits of the approach.
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment, resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
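As a rough, centralized stand-in for the distributed scheme described above (the actual algorithm keeps all data local to each machine), one can score each machine by how far its mean performance metric deviates from the fleet and rank accordingly. The machine names and metric values are invented.

```python
import statistics

# Toy anomaly ranking: score each machine by the deviation of its mean
# metric from the fleet-wide mean, in fleet standard deviations, then
# rank most-anomalous first. This is a centralized simplification of the
# distributed algorithm in the abstract.

def rank_anomalous(machines):
    """machines: {name: [metric samples]}; returns names, most anomalous first."""
    means = {m: statistics.mean(v) for m, v in machines.items()}
    fleet_mean = statistics.mean(means.values())
    fleet_sd = statistics.pstdev(means.values()) or 1.0
    score = {m: abs(mu - fleet_mean) / fleet_sd for m, mu in means.items()}
    return sorted(score, key=score.get, reverse=True)

# Hypothetical CPU-load samples for three nodes; node-c is the outlier:
cpu_load = {"node-a": [0.30, 0.40], "node-b": [0.35, 0.30], "node-c": [0.95, 0.90]}
print(rank_anomalous(cpu_load)[0])  # node-c
```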
Automating NEURON Simulation Deployment in Cloud Resources.
Stockton, David B; Santamaria, Fidel
2017-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
Cloud Type Classification (cldtype) Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flynn, Donna; Shi, Yan; Lim, K-S
The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base, and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal and vertical resolution for cldtype are 1 minute and 30 m, respectively, and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.
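A toy version of this threshold-based classification might look as follows. The type names and thresholds below are invented placeholders, not the seven types or the site-specific values of Tables 1 and 2.

```python
# Illustrative cloud-type classification from layer base and top heights,
# mimicking the predetermined-threshold logic of the cldtype VAP. All
# categories and cut-off values are hypothetical.

def classify_layer(base_km, top_km):
    """Assign a toy cloud type from layer base, top, and thickness (km)."""
    thickness = top_km - base_km
    if base_km >= 7.0:
        return "cirrus"
    if base_km >= 3.0:
        return "deep" if thickness > 4.0 else "altocumulus"
    if thickness > 6.0:
        return "deep convective"
    if thickness > 2.0:
        return "congestus"
    return "low cloud"

print(classify_layer(0.5, 1.2))  # low cloud
print(classify_layer(8.0, 9.5))  # cirrus
```

In the real VAP the thresholds vary by site, and up to 10 layers per profile are classified independently from the ARSCL boundaries.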
Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.
2009-01-01
Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the Earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive, context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
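The core of such a profile scanner, at a single resolution, can be sketched as a robust threshold test: estimate the clear-air background and noise of a profile, then flag range bins that exceed the background by several noise units. The synthetic profile and the factor k below are invented; SIBYL's actual adaptive, multiresolution scheme is considerably more elaborate.

```python
import statistics

# Single-resolution sketch of threshold-based feature detection in a lidar
# profile. The median estimates the clear-air background and the median
# absolute deviation gives a noise scale that is not inflated by the
# feature bins themselves. Profile values and k are illustrative.

def detect_features(profile, k=5.0):
    """Return indices of range bins flagged as cloud/aerosol features."""
    background = statistics.median(profile)
    mad = statistics.median(abs(s - background) for s in profile) or 1.0
    threshold = background + k * mad
    return [i for i, s in enumerate(profile) if s > threshold]

# A noisy molecular return with one strong "cloud" layer in bins 5-6:
profile = [1.0, 1.1, 0.9, 1.0, 1.05, 30.0, 30.0, 0.95, 1.0, 1.1]
print(detect_features(profile))  # [5, 6]
```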
Upgrades to the NOAA/NESDIS automated Cloud-Motion Vector system
NASA Technical Reports Server (NTRS)
Nieman, Steve; Menzel, W. Paul; Hayden, Christopher M.; Wanzong, Steve; Velden, Christopher S.
1993-01-01
The latest version of the automated cloud motion vector software has yielded significant improvements in the quality of the GOES cloud-drift winds produced operationally by NESDIS. Cloud motion vectors resulting from the automated system are now equal or superior in quality to those which had the benefit of manual quality control a few years ago. The single most important factor in this improvement has been the upgraded auto-editor. Improved tracer selection procedures eliminate targets in difficult regions and allow a higher target density and therefore enhanced coverage in areas of interest. The incorporation of the H2O-intercept height assignment method allows an adequate representation of the heights of semi-transparent clouds in the absence of a CO2-absorption channel. Finally, GOES-8 water-vapor motion winds resulting from the automated system are superior to any done previously by NESDIS and should now be considered as an operational product.
Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.
2017-09-01
Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial indication of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
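The door-detection step can be sketched as a search for local minima in the vertical clearance measured above the scanner along the trajectory: passing under a door lintel produces a sharp dip. The clearance values and the drop threshold below are invented.

```python
# Sketch of door detection from a vertical clearance profile along the
# scanner trajectory: flag samples where the clearance dips locally by
# more than `drop` metres, as happens under a doorway. Values are toy data.

def detect_doors(ceiling_heights, drop=0.4):
    """Indices along the trajectory where clearance dips locally by > drop."""
    doors = []
    for i in range(1, len(ceiling_heights) - 1):
        h = ceiling_heights[i]
        if h < ceiling_heights[i - 1] and h <= ceiling_heights[i + 1] \
                and min(ceiling_heights[i - 1], ceiling_heights[i + 1]) - h > drop:
            doors.append(i)
    return doors

# Two rooms with 2.7 m ceilings joined by a 2.0 m doorway at sample 3:
heights = [2.7, 2.7, 2.7, 2.0, 2.7, 2.7]
print(detect_doors(heights))  # [3]
```

The detected indices carry time stamps, which is what lets the method cut the point cloud into per-room subspaces at those moments.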
Automated Visibility & Cloud Cover Measurements with a Solid State Imaging System
1989-03-01
GL-TR-89-0061; SIO Ref. 89-7; MPL-U-26/89. Automated Visibility & Cloud Cover Measurements With a Solid-State Imaging System, by Richard W. Johnson et al. [The remainder of this record is garbled report-form text; it indicates the report discusses ground-based imaging systems, their control algorithms, initial deployment, and preliminary application.]
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted. For that purpose, and for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it. Secondly, pixels of the input ortho-image are labelled as seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels.
The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show that the detection of shadows and clouds in satellite image sequences can be automated.
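The per-pixel shadow test can be sketched on toy data: build the median composite from the time series, then label pixels that fall below it by more than a threshold. The reflectance values and the threshold below are invented.

```python
import statistics

# Toy per-pixel version of the shadow test described above: a synthetic
# "median" image is built from the series, and a pixel of a given image
# is labelled shadow when its NIR reflectance falls below the median by
# more than a threshold. Values and threshold are illustrative.

def shadow_mask(series, image_index, threshold=0.08):
    """series: list of co-registered images, each a list of NIR reflectances."""
    n_pixels = len(series[0])
    median_img = [statistics.median(img[p] for img in series) for p in range(n_pixels)]
    target = series[image_index]
    return [target[p] < median_img[p] - threshold for p in range(n_pixels)]

# Three co-registered 4-pixel images; pixel 2 of image 0 is shadowed:
imgs = [[0.30, 0.28, 0.05, 0.31],
        [0.31, 0.27, 0.29, 0.30],
        [0.29, 0.29, 0.30, 0.32]]
print(shadow_mask(imgs, 0))  # [False, False, True, False]
```

In the full method, cloud-labelled pixels are excluded from the median, and an optional region-growing pass refines the binary mask.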
Feature extraction and classification of clouds in high resolution panchromatic satellite imagery
NASA Astrophysics Data System (ADS)
Sharghi, Elan
The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too large to be analyzed manually by human image analysts. A tool is therefore needed to automate the job of an image analyst by intelligently detecting and classifying objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are examined as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
NASA Astrophysics Data System (ADS)
Meneguz, Elena; Turp, Debi; Wells, Helen
2015-04-01
It is well known that encounters with moderate or severe turbulence can lead to passenger injuries and incur high costs for airlines from compensation and litigation. As one of two World Area Forecast Centres (WAFCs), the Met Office has responsibility for forecasting en-route weather hazards worldwide for aviation above a height of 10,000 ft. Observations from commercial aircraft provide a basis for gaining a better understanding of turbulence and for improving turbulence forecasts through verification. However, there is currently a lack of information regarding the possible cause of observed turbulence, or whether the turbulence occurred within cloud. Such information would be invaluable for the development of forecasting techniques for particular types of turbulence and for forecast verification. Of all the possible sources of turbulence, convective activity is believed to be a major cause. Its relative importance over Europe and the North Atlantic has not yet been quantified in a systematic way: in this study, a new approach is developed to automate the identification of turbulent encounters in the proximity of convective clouds. Observations of convection are provided by two independent sources: a surface-based lightning network and satellite imagery. Lightning observations are taken from the Met Office Arrival Time Detection network (ATDnet). ATDnet has been designed to identify cloud-to-ground flashes over Europe but also detects (a smaller fraction of) strikes over the North Atlantic. Meteosat Second Generation (MSG) satellite products are used to identify convective clouds by applying a brightness temperature filtering technique. The morphological features of cold cloud tops are also investigated. The system is run for all in situ turbulence reports received from airlines for a total of 12 months during summer 2013 and 2014 for the domain of interest.
Results of this preliminary short term climatological study show significant intra-seasonal variability and an average of 15% of all aircraft encounters with turbulence are found in the proximity of convective clouds.
Lidar Cloud Detection with Fully Convolutional Networks
NASA Astrophysics Data System (ADS)
Cromwell, E.; Flynn, D.
2017-12-01
The vertical distribution of clouds from active remote sensing instrumentation is a widely used data product from global atmospheric measuring sites. The presence of clouds can be expressed as a binary cloud mask and is a primary input for climate modeling efforts and cloud formation studies. Current cloud detection algorithms producing these masks do not accurately identify the cloud boundaries and tend to oversample or over-represent the cloud. This translates into uncertainty in assessing the radiative impact of clouds and in tracking changes in cloud climatologies. The Atmospheric Radiation Measurement (ARM) program has over 20 years of micro-pulse lidar (MPL) and High Spectral Resolution Lidar (HSRL) instrument data, with a companion automated cloud mask product, at the mid-latitude Southern Great Plains (SGP) and the polar North Slope of Alaska (NSA) atmospheric observatories. Using these data, we train a fully convolutional network (FCN) with semi-supervised learning to segment lidar imagery into geometric time-height cloud locations for the SGP site and MPL instrument. We then use transfer learning to train a FCN for (1) the MPL instrument at the NSA site and (2) the HSRL. In our semi-supervised approach, we pre-train the classification layers of the FCN with weakly labeled lidar data. Then, we facilitate end-to-end unsupervised pre-training and transition to fully supervised learning with ground-truth labeled data. Our goal is to improve the cloud mask accuracy and precision for the MPL instrument to 95% and 80%, respectively, compared to 89% and 50% for the current cloud mask algorithms. For the transfer-learning-based FCN for the HSRL instrument, our goal is a cloud mask accuracy of 90% and a precision of 80%.
NASA Astrophysics Data System (ADS)
Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.
2016-06-01
High-definition, highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, candidate regions, and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show very high success in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.
Martins, Cristina; Moreira da Silva, Nadia; Silva, Guilherme; Rozanski, Verena E; Silva Cunha, Joao Paulo
2016-08-01
Hippocampal sclerosis (HS) is the most common cause of temporal lobe epilepsy (TLE) and can be identified in magnetic resonance imaging as hippocampal atrophy and subsequent volume loss. Detecting this kind of abnormality through simple radiological assessment can be difficult, even for experienced radiologists. For that reason, hippocampal volumetry is generally used to support this kind of diagnosis. Manual volumetry is the traditional approach, but it is time consuming and requires the physician to be familiar with neuroimaging software tools. In this paper, we propose an automated method, written as a script that uses FSL-FIRST, to perform hippocampal segmentation and compute an index to quantify hippocampal asymmetry (HAI). We compared the automated detection of HS (left or right) based on the HAI with the agreement of two experts in a group of 19 patients and 15 controls, achieving 84.2% sensitivity, 86.7% specificity and a Cohen's kappa coefficient of 0.704. The proposed method is integrated in the "Advanced Brain Imaging Lab" (ABrIL) cloud neurocomputing platform. The automated procedure is, on average, 77% faster than manual volumetry segmentation performed by an experienced physician.
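The abstract does not give the HAI formula; a common choice for a volumetric asymmetry index, assumed here purely for illustration, is |VL - VR| divided by the mean volume, with laterality taken from the smaller (atrophied) side:

```python
def hippocampal_asymmetry(vol_left_mm3, vol_right_mm3, threshold=0.05):
    """Return (HAI, suspected side); side is None below threshold.
    Formula and the 0.05 threshold are illustrative assumptions,
    not the paper's definition."""
    mean = (vol_left_mm3 + vol_right_mm3) / 2.0
    hai = abs(vol_left_mm3 - vol_right_mm3) / mean
    if hai < threshold:
        return hai, None
    return hai, "left" if vol_left_mm3 < vol_right_mm3 else "right"

hai, side = hippocampal_asymmetry(2800.0, 3500.0)   # made-up volumes
```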
A multiscale curvature algorithm for classifying discrete return LiDAR in forested environments
Jeffrey S. Evans; Andrew T. Hudak
2007-01-01
One prerequisite to the use of light detection and ranging (LiDAR) across disciplines is differentiating ground from nonground returns. The objective was to automatically and objectively classify points within unclassified LiDAR point clouds, with few model parameters and minimal postprocessing. Presented is an automated method for classifying LiDAR returns as ground...
Bhavani, Selvaraj Rani; Senthilkumar, Jagatheesan; Chilambuchelvan, Arul Gnanaprakasam; Manjula, Dhanabalachandran; Krishnamoorthy, Ramasamy; Kannan, Arputharaj
2015-03-27
The Internet has greatly enhanced health care, helping patients stay up-to-date on medical issues and general knowledge. Many cancer patients use the Internet for cancer diagnosis and related information. Recently, cloud computing has emerged as a new way of delivering health services but currently, there is no generic and fully automated cloud-based self-management intervention for breast cancer patients, as practical guidelines are lacking. We investigated the prevalence and predictors of cloud use for medical diagnosis among women with breast cancer to gain insight into meaningful usage parameters to evaluate the use of generic, fully automated cloud-based self-intervention, by assessing how breast cancer survivors use a generic self-management model. The goal of this study was implemented and evaluated with a new prototype called "CIMIDx", based on representative association rules that support the diagnosis of medical images (mammograms). The proposed Cloud-Based System Support Intelligent Medical Image Diagnosis (CIMIDx) prototype includes two modules. The first is the design and development of the CIMIDx training and test cloud services. Deployed in the cloud, the prototype can be used for diagnosis and screening mammography by assessing the cancers detected, tumor sizes, histology, and stage of classification accuracy. To analyze the prototype's classification accuracy, we conducted an experiment with data provided by clients. Second, by monitoring cloud server requests, the CIMIDx usage statistics were recorded for the cloud-based self-intervention groups. We conducted an evaluation of the CIMIDx cloud service usage, in which browsing functionalities were evaluated from the end-user's perspective. We performed several experiments to validate the CIMIDx prototype for breast health issues. The first set of experiments evaluated the diagnostic performance of the CIMIDx framework. 
We collected medical information from 150 breast cancer survivors from hospitals and health centers. The CIMIDx prototype achieved high sensitivity of up to 99.29%, and accuracy of up to 98%. The second set of experiments evaluated CIMIDx use for breast health issues, using t tests and Pearson chi-square tests to assess differences, and binary logistic regression to estimate the odds ratio (OR) for the predictors' use of CIMIDx. For the prototype usage statistics for the same 150 breast cancer survivors, we interviewed 114 (76.0%), through self-report questionnaires from CIMIDx blogs. The frequency of log-ins/person ranged from 0 to 30, total duration/person from 0 to 1500 minutes (25 hours). The 114 participants continued logging in to all phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). The overall performance of the prototype for the good category, reported usefulness of the prototype (P=.77), overall satisfaction of the prototype (P=.31), ease of navigation (P=.89), user friendliness evaluation (P=.31), and overall satisfaction (P=.31). Positive evaluations given by 100 participants via a Web-based questionnaire supported our hypothesis. The present study shows that women felt favorably about the use of a generic fully automated cloud-based self- management prototype. The study also demonstrated that the CIMIDx prototype resulted in the detection of more cancers in screening and diagnosing patients, with an increased accuracy rate.
A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing
Ozdogan, Mutlu
2014-01-01
In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. 
Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
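Step (ii) above, thresholding training pixels at a multiple of the standard deviation from the histogram mean, can be sketched roughly as below; the single global window and k = 2 are simplifying assumptions (the paper uses local windows):

```python
import statistics

def training_pixels(swir_diff, k=2.0):
    """Indices of pixels more than k standard deviations from the mean."""
    mu = statistics.mean(swir_diff)
    sigma = statistics.pstdev(swir_diff)
    return [i for i, v in enumerate(swir_diff) if abs(v - mu) > k * sigma]

# One toy 'window' of SWIR difference values; index 4 is a disturbed pixel.
diff = [0.02, 0.01, -0.01, 0.03, 0.90, 0.00, -0.02, 0.01]
candidates = training_pixels(diff)
```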
Cloud properties inferred from 8-12 micron data
NASA Technical Reports Server (NTRS)
Strabala, Kathleen I.; Ackerman, Steven A.; Menzel, W. Paul
1994-01-01
A trispectral combination of observations at the 8-, 11-, and 12-micron bands is suggested for detecting cloud and cloud properties in the infrared. Atmospheric ice and water vapor absorption peak in opposite halves of the window region, so that positive 8-minus-11-micron brightness temperature differences indicate cloud, while near-zero or negative differences indicate clear regions. The absorption coefficient for water increases more between 11 and 12 microns than between 8 and 11 microns, while for ice the reverse is true. Cloud phase is determined from a scatter diagram of 8-minus-11-micron versus 11-minus-12-micron brightness temperature differences; ice cloud shows a slope greater than 1 and water cloud less than 1. The trispectral brightness temperature method was tested on high-resolution interferometer data, resulting in clear/cloud and cloud-phase delineation. Simulations using differing 8-micron bandwidths revealed no significant degradation of cloud property detection. Thus, the 8-micron bandwidth for future satellites can be selected based on the requirements of other applications, such as surface characterization studies. Application of the technique to current polar-orbiting High-Resolution Infrared Sounder (HIRS)-Advanced Very High Resolution Radiometer (AVHRR) datasets is constrained by the nonuniformity of the cloud scenes sensed within the large HIRS field of view. Analysis of MAS (MODIS Airborne Simulator) high-spatial-resolution (500 m) data with all three 8-, 11-, and 12-micron bands revealed sharp delineation of differing cloud and background scenes, from which a simple automated threshold technique was developed. Cloud phase, clear sky, and qualitative differences in cloud emissivity and cloud height were identified on a case study segment from 24 November 1991, consistent with the scene. More rigorous techniques would allow further cloud parameter clarification.
The opportunities for global cloud delineation with the Moderate-Resolution Imaging Spectrometer (MODIS) appear excellent. The spectral selection, the spatial resolution, and the global coverage are all well suited for significant advances.
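The phase test described above, a slope of 8-minus-11 versus 11-minus-12 brightness temperature differences, can be illustrated with a least-squares slope on synthetic values (the slope thresholds follow the abstract; the data are made up):

```python
def phase_from_bt(bt8, bt11, bt12):
    """Classify cloud phase from the slope of BT8-BT11 vs BT11-BT12
    brightness temperature differences across a set of pixels."""
    x = [b11 - b12 for b11, b12 in zip(bt11, bt12)]
    y = [b8 - b11 for b8, b11 in zip(bt8, bt11)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return "ice" if slope > 1.0 else "water"

# Synthetic pixels: the first scene's differences rise twice as fast
# (ice-like slope of 2), the second's half as fast (water-like 0.5).
ice = phase_from_bt([224.0, 231.0, 238.0], [220.0, 225.0, 230.0], [218.0, 222.0, 226.0])
water = phase_from_bt([221.0, 226.5, 232.0], [220.0, 225.0, 230.0], [218.0, 222.0, 226.0])
```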
Geometric identification and damage detection of structural elements by terrestrial laser scanner
NASA Astrophysics Data System (ADS)
Hou, Tsung-Chin; Liu, Yu-Wei; Su, Yu-Min
2016-04-01
In recent years, three-dimensional (3D) terrestrial laser scanning technologies with higher precision and higher capability have been developing rapidly. The growing maturity of laser scanning has gradually approached the precision provided by traditional structural monitoring technologies. Together with widely available fast computation for massive point cloud data processing, 3D laser scanning can serve as an efficient structural monitoring alternative for civil engineering communities. Currently, most research efforts have focused on integrating and calculating the measured multi-station point cloud data, as well as modeling and establishing 3D meshes of the scanned objects. Very little attention has been paid to extracting information related to the health conditions and mechanical states of structures. In this study, an automated numerical approach that integrates various existing algorithms for geometric identification and damage detection of structural elements was established. Specifically, adaptive meshes were employed for classifying the point cloud data of the structural elements and detecting the associated damage from the eigenvalues calculated in each area of the structural element. Furthermore, a kd-tree was used to enhance the search efficiency of plane fitting, which was later used for identifying the boundaries of structural elements. The results of geometric identification were compared with the M3C2 algorithm provided by CloudCompare, as well as validated by LVDT measurements of full-scale reinforced concrete beams tested in the laboratory. This shows that 3D laser scanning, through the established processing approaches for the point cloud data, can offer a rapid, nondestructive, remote, and accurate solution for geometric identification and damage detection of structural elements.
Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery
NASA Astrophysics Data System (ADS)
Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.
2009-05-01
In this paper, we propose spatio-spectral processing techniques for detecting dust storms and automatically finding their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band-math analysis have produced promising results but fail to produce consistent results when low signal-to-noise-ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm makes it challenging to distinguish these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery, but also attempts to find the transport direction and the location of the sources of the dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques, including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
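One simplified way to automate a transport-direction estimate, assumed here purely for illustration (the paper uses directional analysis and filtering), is to track the centroid shift of the dust detection mask between successive images:

```python
import math

def centroid(pixels):
    n = len(pixels)
    return (sum(r for r, c in pixels) / n, sum(c for r, c in pixels) / n)

def transport_direction(mask_t0, mask_t1):
    """Masks are lists of (row, col) dust pixels; returns the centroid
    displacement angle in degrees, measured from the +col axis."""
    r0, c0 = centroid(mask_t0)
    r1, c1 = centroid(mask_t1)
    return math.degrees(math.atan2(r1 - r0, c1 - c0))

# A toy plume drifting 4 columns between acquisitions.
t0 = [(10, 10), (10, 11), (11, 10), (11, 11)]
t1 = [(10, 14), (10, 15), (11, 14), (11, 15)]
angle = transport_direction(t0, t1)
```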
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis on these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of performing three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision, and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matches, it completes an automated process that has been used to create 168 3D point clouds averaging 31.6 points per reconstruction, with each point having an accuracy of 0.62 meters (0.35, 0.24, and 0.34 meters in the x-, y-, and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models, in some cases improving on the level of uncertainty in the yield calculation.
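The hotspot-matching idea, expressing hotspots relative to the explosion center, rotating into the other viewpoint, and matching nearest neighbours, might be sketched as below; the single-axis rotation and exact-match toy data are simplifying assumptions:

```python
import math

def rotate_y(p, deg):
    """Rotate a 3D point about the vertical (y) axis."""
    x, y, z = p
    a = math.radians(deg)
    return (x * math.cos(a) + z * math.sin(a), y,
            -x * math.sin(a) + z * math.cos(a))

def match_hotspots(view_a, view_b, angle_deg):
    """Nearest-neighbour match after rotating view_a into view_b's frame.
    Points are already expressed relative to the explosion centre."""
    matches = []
    for i, p in enumerate(view_a):
        q = rotate_y(p, angle_deg)
        j = min(range(len(view_b)),
                key=lambda k: sum((q[d] - view_b[k][d]) ** 2 for d in range(3)))
        matches.append((i, j))
    return matches

a = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
b = [rotate_y(p, 90.0) for p in a]   # same hotspots seen 90 degrees away
matches = match_hotspots(a, b, 90.0)
```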
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73%, which is encouraging for further refinement of our approach.
Helmet-Mounted Display Of Clouds Of Harmful Gases
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Barengoltz, Jack B.; Schober, Wayne R.
1995-01-01
Proposed helmet-mounted opto-electronic instrument provides real-time stereoscopic views of clouds of otherwise invisible toxic, explosive, and/or corrosive gas. Display semitransparent: images of clouds superimposed on scene ordinarily visible to wearer. Images give indications on sizes and concentrations of gas clouds and their locations in relation to other objects in scene. Instruments serve as safety devices for astronauts, emergency response crews, fire fighters, people cleaning up chemical spills, or anyone working near invisible hazardous gases. Similar instruments used as sensors in automated emergency response systems that activate safety equipment and emergency procedures. Both helmet-mounted and automated-sensor versions used at industrial sites, chemical plants, or anywhere dangerous and invisible or difficult-to-see gases present. In addition to helmet-mounted and automated-sensor versions, there could be hand-held version. In some industrial applications, desirable to mount instruments and use them similarly to parking-lot surveillance cameras.
Volunteered Cloud Computing for Disaster Management
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster management relies increasingly on interpreting earth observations and running numerical models, which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however, some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context.
Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.
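The "embarrassingly parallel" pattern the platform targets can be illustrated with a toy tile-processing job; a thread pool stands in for distributed cloud workers, and the brightness-threshold fire task is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_fire(tile):
    """Stand-in per-tile task: count pixels over a brightness threshold."""
    return sum(1 for px in tile if px > 300)

# Independent image tiles can be farmed out and gathered in any order.
tiles = [[290, 310, 305], [280, 299], [301, 302, 303, 150]]
with ThreadPoolExecutor(max_workers=3) as pool:
    counts = list(pool.map(detect_fire, tiles))
```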
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Practical Implementation of Semi-Automated As-Built Bim Creation for Complex Indoor Environments
NASA Astrophysics Data System (ADS)
Yoon, S.; Jung, J.; Heo, J.
2015-05-01
In recent years, for efficient management and operation of existing buildings, the importance of as-built BIM has been emphasized in the AEC/FM domain. However, fully automated as-built BIM creation is a tough problem, since newly constructed buildings are becoming more complex. To manage this problem, our research group has developed a semi-automated approach focusing on productive 3D as-built BIM creation for complex indoor environments. In order to test its feasibility for a variety of complex indoor environments, we applied the developed approach to model the 'Charlotte stairs' in Lotte World Mall, Korea. The approach includes 4 main phases: data acquisition, data pre-processing, geometric drawing, and as-built BIM creation. In the data acquisition phase, due to the site's complex structure, we moved the scanner location several times to obtain the entire point cloud of the test site. Next, a data pre-processing phase entailing point-cloud registration, noise removal, and coordinate transformation followed. The 3D geometric drawing was created using RANSAC-based plane detection and boundary tracing methods. Finally, in order to create a semantically rich BIM, the geometric drawing was imported into commercial BIM software. The final as-built BIM confirmed the feasibility of the proposed approach in a complex indoor environment.
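RANSAC-based plane detection, as used in the geometric-drawing phase, can be sketched as follows; the iteration count, tolerance, and toy data are illustrative choices, not the authors' parameters:

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit-normal plane (n, d) through three points, or None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Repeatedly fit a plane to 3 random points; keep the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# Mostly a z = 0 floor, plus two off-plane points.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
pts += [(0.2, 0.2, 1.0), (0.4, 0.1, -0.8)]
model, inliers = ransac_plane(pts)
```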
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martini, Matus N.; Gustafson, William I.; Yang, Qing
2014-11-18
Organized mesoscale cellular convection (MCC) is a common feature of marine stratocumulus that forms in response to a balance between mesoscale dynamics and smaller-scale processes such as cloud radiative cooling and microphysics. We use the Weather Research and Forecasting model with chemistry (WRF-Chem) and fully coupled cloud-aerosol interactions to simulate marine low clouds during the VOCALS-REx campaign over the southeast Pacific. A suite of experiments with 3- and 9-km grid spacing indicates resolution-dependent behavior. The simulations with finer grid spacing have smaller liquid water paths and cloud fractions, while cloud tops are higher. The observed diurnal cycle is reasonably well simulated. To isolate organized MCC characteristics we develop a new automated method, which uses a variation of the watershed segmentation technique that combines the detection of cloud boundaries with a test for coincident vertical velocity characteristics. This ensures that the detected cloud fields are dynamically consistent for closed MCC, the most common MCC type over the VOCALS-REx region. We demonstrate that the 3-km simulation is able to reproduce the scaling between horizontal cell size and boundary layer height seen in satellite observations. However, the 9-km simulation is unable to resolve smaller circulations corresponding to shallower boundary layers, instead producing an invariant MCC horizontal scale for all simulated boundary layer depths. The results imply that climate models with grid spacing of roughly 3 km or smaller may be needed to properly simulate MCC structure in marine stratocumulus regions.
A Validation of Remotely Sensed Fires Using Ground Reports
NASA Astrophysics Data System (ADS)
Ruminski, M. G.; Hanna, J.
2007-12-01
A satellite based analysis of fire detections and smoke emissions for North America is produced daily by NOAA/NESDIS. The analysis incorporates data from the MODIS (Terra and Aqua) and AVHRR (NOAA-15/16/17) polar orbiting instruments and GOES East and West geostationary spacecraft with nominal resolutions of 1km and 4 km for the polar and geostationary platforms respectively. Automated fire detection algorithms are utilized for each of the sensors. Analysts perform a quality control procedure on the automated detects by deleting points that are deemed to be false detects and adding points that the algorithms did not detect. A limited validation of the final quality controlled product was performed using high resolution (30 m) ASTER data in the summer of 2006. Some limitations in using ASTER data are that each scene is only approximately 3600 square km, the data acquisition time is relatively constant at around 1030 local solar time and ASTER is another remotely sensed data source. This study expands on the ASTER validation by using ground reports of prescribed burns in Montana and Idaho for 2003 and 2004. It provides a non-remote sensing data source for comparison. While the ground data do not have the limitations noted above for ASTER there are still limitations. For example, even though the data set covers a much larger area (nearly 600,000 square km) than even several ASTER scenes, it still represents a single region of North America. And while the ground data are not restricted to a narrow time window, only a date is provided with each report, limiting the ability to make detailed conclusions about the detection capabilities for specific instruments, especially for the less temporally frequent polar orbiting MODIS and AVHRR sensors. Comparison of the ground data reports to the quality controlled fire analysis revealed a low rate of overall detection of 23.00% over the entire study period. 
Examination of the daily detection rates revealed wide variation, with some days yielding as few as 5 detections out of 107 reported fires while other days had as many as 84 detections out of 160 reports. Inspection of the satellite imagery from the days with very low detection rates revealed that extensive cloud cover prevented satellite fire detection. On days when cloud cover was at a minimum, detection rates were substantially higher. An estimate of fire size was also provided with the ground data set. Statistics will be presented for days with minimal cloud cover, indicating the probability of detection for fires of various sizes.
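The detection rates above are simple ratios of satellite detections to ground-reported fires. A minimal sketch of the computation (the function name is ours for illustration, not from the NESDIS analysis):

```python
def probability_of_detection(detected: int, reported: int) -> float:
    """Fraction of ground-reported fires matched by a satellite detection."""
    if reported == 0:
        raise ValueError("no ground reports for this day")
    return detected / reported

# Daily extremes quoted in the text
print(f"{probability_of_detection(5, 107):.1%}")   # low-detection day: 4.7%
print(f"{probability_of_detection(84, 160):.1%}")  # high-detection day: 52.5%
```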
Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services
NASA Astrophysics Data System (ADS)
Collins, Patrick; Bahr, Thomas
2016-04-01
The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study, the latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered on a 12 m x 12 m raster and based on the EGM2008 geoid (called pre-DEM). For the post-event situation, a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets:
• A dense image-matching algorithm is used to identify corresponding points in the two images.
• A block adjustment is applied to refine the 3D coordinates that describe the scene geometry.
• Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line.
The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model (called post-DEM) from the photogrammetric point clouds. Post-processing consisted of the following steps:
• Adding the geoid component (EGM2008) to the post-DEM.
• Reprojecting the pre-DEM to the UTM Zone 43N (WGS-84) coordinate system and resizing it.
• Subtracting the pre-DEM from the post-DEM.
• Filtering and threshold-based classification of the DEM difference to analyze the surface changes in 3D.
The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study:
• Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code, which interfaces between the Python script and the relevant ENVITasks.
• Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution for publishing and deploying advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask.
• Integration into an existing geospatial workflow using the Python-to-IDL Bridge, a mechanism that allows IDL code to be called from Python on a user-defined platform.
The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density can be obtained in a few minutes to support the operational monitoring of landslide processes.
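The differencing and threshold-based classification step at the end of the workflow can be sketched outside ENVI/IDL as well; here in Python with NumPy, with a purely illustrative 2 m change threshold (the study's actual filter settings are not given):

```python
import numpy as np

def classify_dem_difference(pre_dem: np.ndarray, post_dem: np.ndarray,
                            threshold: float = 2.0) -> np.ndarray:
    """Classify surface change from two co-registered DEMs (heights in m).

    Returns +1 where material accumulated (deposit), -1 where it was
    removed (e.g. the landslide scarp), and 0 where the height change
    is below the detection threshold.
    """
    diff = post_dem - pre_dem              # post-DEM minus pre-DEM
    change = np.zeros(diff.shape, dtype=np.int8)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return change
```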
An automated fog monitoring system for the Indo-Gangetic Plains based on satellite measurements
NASA Astrophysics Data System (ADS)
Patil, Dinesh; Chourey, Reema; Rizvi, Sarwar; Singh, Manoj; Gautam, Ritesh
2016-05-01
Fog is a meteorological phenomenon that causes reduction in regional visibility and affects air quality, thus leading to various societal and economic implications, especially disrupting air and rail transportation. The persistent and widespread winter fog impacts the entire Indo-Gangetic Plains (IGP), as frequently observed in satellite imagery. The IGP is a densely populated region in south Asia, home to about one-sixth of the world's population, with a strong upward pollution trend. In this study, we have used multi-spectral radiances and aerosol/cloud retrievals from Terra/Aqua MODIS data to develop an automated web-based fog monitoring system over the IGP. Using our previous and existing methodologies, and ongoing algorithm development for the detection of fog and retrieval of associated microphysical properties (e.g. fog droplet effective radius), we characterize widespread fog during both daytime and nighttime. Specifically, for nighttime fog detection, the algorithm employs a satellite-based bi-spectral brightness temperature difference technique between two spectral channels: MODIS band 22 (3.9 μm) and band 31 (10.75 μm). Further, we are extending our algorithm development to geostationary satellites to provide continuous monitoring of the spatio-temporal variation of fog. We anticipate that the ongoing and future development of a fog monitoring system will assist air, rail and vehicular transportation management, as well as the dissemination of fog information to government agencies and the general public. The outputs of the fog detection algorithm and related aerosol/cloud parameters are operationally disseminated via http://fogsouthasia.com/.
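The bi-spectral nighttime test reduces to a per-pixel brightness temperature difference between the two MODIS bands. A minimal sketch (the -2 K threshold is an assumption for illustration; operational thresholds are tuned regionally):

```python
import numpy as np

def nighttime_fog_mask(bt_3_9um: np.ndarray, bt_10_75um: np.ndarray,
                       btd_threshold: float = -2.0) -> np.ndarray:
    """Bi-spectral brightness temperature difference (BTD) fog test.

    At night, fog and low stratus emit less at 3.9 um (MODIS band 22)
    than at 10.75 um (band 31), so their BTD is distinctly negative.
    Returns True where the pixel is flagged as fog.
    """
    btd = bt_3_9um - bt_10_75um
    return btd < btd_threshold
```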
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Qiang; Comstock, Jennifer
The overall objective of this ASR-funded project is to investigate the role of cloud radiative effects, especially those associated with tropical thin cirrus clouds in the tropical tropopause layer, by analyzing ARM observations combined with numerical models. In particular, we have processed and analyzed the observations from the Raman lidar at the ARM SGP and TWP sites. In the tenure of the project (8/15/2013 – 8/14/2016, with a no-cost extension to 8/14/2017), we have been concentrating on (i) developing an automated feature detection scheme of clouds and aerosols for the ARM Raman lidar; (ii) developing an automated retrieval of cloud and aerosol extinctions for the ARM Raman lidar; (iii) investigating, based on the observations, cloud radiative effects on the simulated temperatures in the tropical tropopause layer using a radiative-convective model; and (iv) examining the effect of changes of atmospheric composition on the tropical lower-stratospheric temperatures. In addition, we have examined the biases in the CALIPSO-inferred aerosol direct radiative effects using ground-based Raman lidars at the ARM SGP and TWP sites, and estimated the impact of lidar detection sensitivity on assessing global aerosol direct radiative effects. We have also investigated the diurnal cycle of clouds and precipitation at the ARM site using the cloud radar observations along with simulations from the multiscale modeling framework. The main results of our research efforts are reported in the six refereed journal publications that acknowledge DOE Grant DE-SC0010557.
NASA Technical Reports Server (NTRS)
Arnott, William P.; Hallett, John; Hudson, James G.
1995-01-01
Specific measurements of cirrus crystals by aircraft and of temperature-modified CN are used to specify the measurements necessary to provide a basis for a conceptual model of cirrus particle formation. Key to this is the ability to measure the complete spectrum of particles at cirrus levels. The most difficult region for such measurement is from a few to 100 microns, which requires a replicator. The details of a system to automate replicator data analysis are given, together with an example case study from a cirrus cloud in FIRE 2, with particles detectable by the replicator and FSSP, but not the 2DC.
Evolution and Advances in Satellite Analysis of Volcanoes
NASA Astrophysics Data System (ADS)
Dean, K. G.; Dehn, J.; Webley, P.; Bailey, J.
2008-12-01
Over the past 20 years, satellite data used for monitoring and analysis of volcanic eruptions have evolved in terms of timeliness, access, distribution, resolution, and understanding of volcanic processes. Initially, satellite data were used for retrospective analysis; this has since evolved into proactive monitoring systems. Timely acquisition of data and the capability to distribute large data files paralleled advances in computer technology and was a critical component for near real-time monitoring. The sharing of these data and the resulting discussions have improved our understanding of eruption processes and, even more importantly, their impact on society. To illustrate this evolution, critical scientific discoveries will be highlighted, including detection of airborne ash and sulfur dioxide, cloud-height estimates, prediction of ash cloud movement, and detection of thermal anomalies as precursor signals to eruptions. The Alaska Volcano Observatory (AVO) has been a leader in implementing many of these advances in an operational setting, such as automated eruption detection, database analysis systems, and remotely accessible web-based analysis systems. Finally, limitations resulting from resolution trade-offs, and how they weaken some detection techniques and hazard assessments, will be presented.
Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing
Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio
2017-01-01
In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305
Bao, Riyue; Hernandez, Kyle; Huang, Lei; Kang, Wenjun; Bartom, Elizabeth; Onel, Kenan; Volchenboum, Samuel; Andrade, Jorge
2015-01-01
Whole exome sequencing has facilitated the discovery of causal genetic variants associated with human diseases at deep coverage and low cost. In particular, the detection of somatic mutations from tumor/normal pairs has provided insights into the cancer genome. Although there is an abundance of publicly available software for the detection of germline and somatic variants, concordance is generally limited among variant callers and alignment algorithms. Successful integration of variants detected by multiple methods requires in-depth knowledge of the software, access to high-performance computing resources, and advanced programming techniques. We present ExScalibur, a set of fully automated, highly scalable and modular pipelines for whole exome data analysis. The suite integrates multiple alignment and variant calling algorithms for the accurate detection of germline and somatic mutations with close to 99% sensitivity and specificity. ExScalibur implements streamlined execution of analytical modules, real-time monitoring of pipeline progress, robust handling of errors and intuitive documentation that allows for increased reproducibility and sharing of results and workflows. It runs on local computers, high-performance computing clusters and cloud environments. In addition, we provide a data analysis report utility to facilitate visualization of the results that offers interactive exploration of quality control files, read alignment and variant calls, assisting downstream customization of potential disease-causing mutations. ExScalibur is open-source and is also available as a public image on Amazon cloud.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Szoeke, Simon P.
The investigator and DOE-supported student (1) retrieved vertical air velocity and microphysical fall velocity for homogeneous clouds in VOCALS and CAP-MBL; (2) calculated in-cloud and cloud-top dissipation and computed its diurnal cycle for VOCALS; and (3) compared CAP-MBL Doppler cloud radar scenes with the automated classification of Remillard et al. (2012).
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines distributed pre-packaged with pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105
3D modeling of building indoor spaces and closed doors from imagery and point clouds.
Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro
2015-02-03
3D models of indoor environments are increasingly gaining importance due to the wide range of applications in which they can be used: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.
Erosion and Channel Incision Analysis with High-Resolution Lidar
NASA Astrophysics Data System (ADS)
Potapenko, J.; Bookhagen, B.
2013-12-01
High-resolution LiDAR (Light Detection And Ranging) provides a new generation of sub-meter topographic data that has yet to be fully exploited by the Earth science communities. We make use of multi-temporal airborne and terrestrial lidar scans in the south-central California and Santa Barbara area. Specifically, we have investigated the Mission Canyon and Channel Islands regions from 2009-2011 to study changes in erosion and channel incision on the landscape. In addition to gridding the lidar data into digital elevation models (DEMs), we also make use of raw lidar point clouds and triangulated irregular networks (TINs) for detailed analysis of heterogeneously spaced topographic data. Using recent advancements in lidar point cloud processing from information technology disciplines, we have employed novel point cloud processing and feature detection algorithms to automate the detection of deeply incised channels and gullies, vegetation, and other derived metrics (e.g. estimates of eroded volume). Our analysis compares topographically derived erosion volumes to field-derived cosmogenic radionuclide ages and in-situ sediment-flux measurements. First results indicate that gully erosion accounts for up to 60% of the sediment volume removed from the Mission Canyon region. Furthermore, we observe that gully erosion and upstream arroyo propagation accelerated after fires, especially in regions where vegetation was heavily burned. The use of high-resolution lidar point cloud data for topographic analysis is still a novel method with little precedent, and we hope to provide a cogent example of this approach with our research.
NASA Astrophysics Data System (ADS)
Niggemann, F.; Appel, F.; Bach, H.; de la Mar, J.; Schirpke, B.; Dutting, K.; Rucker, G.; Leimbach, D.
2015-04-01
To address the challenges of effective data handling faced by Small and Medium-sized Enterprises (SMEs), a cloud-based infrastructure for accessing and processing Earth Observation (EO) data has been developed within the project APPS4GMES (www.apps4gmes.de). To gain homogeneous multi-mission data access, an Input Data Portal (IDP) has been implemented on this infrastructure. The IDP consists of an Open Geospatial Consortium (OGC) conformant catalogue, a consolidation module for format conversion, and an OGC-conformant ordering framework. A Metadata Harvester gathers metadata from various EO sources with different standards, transfers it to the OGC-conformant Earth Observation Product standard, and inserts it into the catalogue. The IDP can be accessed for search and ordering of the harvested datasets by the services implemented on the cloud infrastructure. Different land-surface services have been realised by the project partners using the implemented IDP and cloud infrastructure. Results of these are customer-ready products as well as pre-products (e.g. atmospherically corrected EO data) serving as a basis for other services. Within the IDP, automated access to ESA's Sentinel-1 Scientific Data Hub has been implemented: searching and downloading of the SAR data can be performed in an automated way. With the implementation of the Sentinel-1 Toolbox and in-house software, processing of the datasets for further use, for example for Vista's snow monitoring which delivers input for the flood forecast services, can also be performed automatically. For performance tests of the cloud environment, a sophisticated model-based atmospheric correction and pre-classification service has been implemented. The tests comprised automated, synchronised processing of one entire Landsat 8 (LS-8) coverage of Germany and performance comparisons to standard desktop systems.
Results of these tests, showing a performance improvement by a factor of six, proved the high flexibility and computing power of the cloud environment. To make full use of the cloud capabilities, a mechanism for automated upscaling of the hardware resources has been implemented. Together with the IDP infrastructure, fast and automated processing of various satellite sources into market-ready products can be realised, so growing customer needs and numbers can be satisfied without loss of accuracy and quality.
Automated Cloud Observation for Ground Telescope Optimization
NASA Astrophysics Data System (ADS)
Lane, B.; Jeffries, M. W., Jr.; Therien, W.; Nguyen, H.
As the number of man-made objects placed in space each year increases with advancements across the commercial, academic, and industrial sectors, the number of objects that must be detected, tracked, and characterized continues to grow at an exponential rate. Commercial companies, such as ExoAnalytic Solutions, have deployed ground-based sensors to maintain track custody of these objects. For the ExoAnalytic Global Telescope Network (EGTN), observations of such objects are collected at a rate of over 10 million unique observations per month (as of September 2017). Currently, the EGTN does not optimally collect data on nights with significant cloud levels. However, a majority of these nights prove to be only partially cloudy, leaving clear portions of the sky for EGTN sensors to observe. It is therefore useful for a telescope to exploit these clear areas to continue resident space object (RSO) observation. By dynamically updating the tasking with the varying cloud positions, the number of observations could increase dramatically due to improved persistence, cadence, and revisit. This paper discusses the algorithms recently implemented within the EGTN, including their motivation, need, and general design. Section 2 discusses how automated image processing and various edge detection methods, including Canny, Sobel, and marching squares, applied to real-time large-FOV images of the sky enhance the tasking and scheduling of a ground-based telescope. Implementations of these algorithms on single telescopes, expanding to multiple telescopes, are explored. Results of applying these algorithms to the EGTN in real time, and a comparison to non-optimized EGTN tasking, are presented in Section 3. Finally, in Section 4 we explore future work in applying these methods throughout the EGTN as well as to other optical telescopes.
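Of the edge detectors named above, Sobel is the simplest to illustrate: cloud boundaries in a wide-FOV sky image show up as strong local intensity gradients. A minimal NumPy version (the threshold value and the sliding-window implementation are ours, not the EGTN production code):

```python
import numpy as np

def sobel_edges(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Boolean edge mask from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy) > threshold
```

Pixels flagged in the mask trace cloud edges; a tasking scheduler could then steer observations toward regions with no edges, i.e. uniformly clear sky.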
CloudSat Reflectivity Data Visualization Inside Hurricanes
NASA Technical Reports Server (NTRS)
Suzuki, Shigeru; Wright, John R.; Falcon, Pedro C.
2011-01-01
Animations and other outreach products have been created and released to the public quickly after the CloudSat spacecraft flew over hurricanes. The automated script scans through the CloudSat quicklook data to find significant atmospheric moisture content. Once such a region is found, data from multiple sources is combined to produce the data products and the animations. KMZ products are quickly generated from the quicklook data for viewing in Google Earth and other tools. Animations are also generated to show the atmospheric moisture data in context with the storm cloud imagery. Global images from GOES satellites are shown to give context. The visualization provides better understanding of the interior of the hurricane storm clouds, which is difficult to observe directly. The automated process creates the finished animation in the High Definition (HD) video format for quick release to the media and public.
Automated object detection and tracking with a flash LiDAR system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2016-10-01
The detection of objects or persons is a common task in the fields of environmental surveillance, object observation, and threat defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and therefore the data distortion, of most LiDAR systems. The paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, the attention is always focused on the object, independent of its behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view. The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part with any other object detection algorithm, and thus it is easy to track nearly any object, for example a car, a boat, or a UAV, at various distances.
An automated method for tracking clouds in planetary atmospheres
NASA Astrophysics Data System (ADS)
Luz, D.; Berry, D. L.; Roos-Serote, M.
2008-05-01
We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and of better quality. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
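The baseline correlator described above, using the sum of squared differences (SSD) of image radiances over a search window, can be sketched as follows (the block size and search radius are illustrative parameters, not values from the paper):

```python
import numpy as np

def track_block(img1: np.ndarray, img2: np.ndarray,
                top: int, left: int, size: int, search: int) -> tuple:
    """Displacement (dy, dx) of an image block between two frames,
    found by minimising the SSD between the reference block in img1
    and candidate blocks in img2."""
    ref = img1[top:top + size, left:left + size]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > img2.shape[0] or x + size > img2.shape[1]:
                continue  # candidate block falls outside the image
            ssd = float(np.sum((img2[y:y + size, x:x + size] - ref) ** 2))
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best
```

The order-rank variant mentioned in the abstract would replace the raw radiances with their ranks before computing the distance, making the match more robust to outliers and noise.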
Organizational principles of cloud storage to support collaborative biomedical research.
Kanbar, Lara J; Shalish, Wissam; Robles-Rubio, Carlos A; Precup, Doina; Brown, Karen; Sant'Anna, Guilherme M; Kearney, Robert E
2015-08-01
This paper describes organizational guidelines and an anonymization protocol for the management of sensitive information in interdisciplinary, multi-institutional studies with multiple collaborators. This protocol is flexible, automated, and suitable for use in cloud-based projects as well as for publication of supplementary information in journal papers. A sample implementation of the anonymization protocol is illustrated for an ongoing study dealing with Automated Prediction of EXtubation readiness (APEX).
Day/night whole sky imagers for 24-h cloud and sky assessment: history and overview.
Shields, Janet E; Karr, Monette E; Johnson, Richard W; Burden, Art R
2013-03-10
A family of fully automated digital whole sky imagers (WSIs) has been developed at the Marine Physical Laboratory over many years, for a variety of research and military applications. The most advanced of these, the day/night whole sky imagers (D/N WSIs), acquire digital imagery of the full sky down to the horizon under all conditions from full sunlight to starlight. Cloud algorithms process the imagery to automatically detect the locations of cloud for both day and night. The instruments can provide absolute radiance distribution over the full radiance range from starlight through daylight. The WSIs were fielded in 1984, followed by the D/N WSIs in 1992. These many years of experience and development have resulted in very capable instruments and algorithms that remain unique. This article discusses the history of the development of the D/N WSIs, system design, algorithms, and data products. The paper cites many reports with more detailed technical documentation. Further details of calibration, day and night algorithms, and cloud free line-of-sight results will be discussed in future articles.
Automated cloud classification using a ground based infra-red camera and texture analysis techniques
NASA Astrophysics Data System (ADS)
Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.
2013-10-01
Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
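A distance-weighted KNN of the kind described can be sketched in a few lines; the feature vectors here stand in for the 45 textural features, and the inverse-distance weighting is one common choice (the paper's exact weighting scheme is not specified):

```python
import numpy as np

def weighted_knn(train_feats: np.ndarray, train_labels: list,
                 query: np.ndarray, k: int = 5) -> str:
    """Classify a feature vector by a distance-weighted vote of its
    k nearest neighbours in feature space."""
    d = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-9)   # closer neighbours vote more
    scores: dict = {}
    for idx, w in zip(nearest, weights):
        label = train_labels[idx]
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```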
Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min
2015-06-01
The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.
Virtual Sensors: Using Data Mining to Efficiently Estimate Spectra
NASA Technical Reports Server (NTRS)
Srivastava, Ashok; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Detecting clouds within a satellite image is essential for retrieving surface geophysical parameters, such as albedo and temperature, from optical and thermal imagery because the retrieval methods tend to be valid for clear skies only. Thus, routine satellite data processing requires reliable automated cloud detection algorithms that are applicable to many surface types. Unfortunately, cloud detection over snow and ice is difficult due to the lack of spectral contrast between clouds and snow. Snow and clouds are both highly reflective in the visible wavelengths and often show little contrast in the thermal infrared. However, at 1.6 microns, the spectral signatures of snow and clouds differ enough to allow improved snow/ice/cloud discrimination. The recent Terra and Aqua Moderate Resolution Imaging Spectro-Radiometer (MODIS) sensors have a channel (channel 6) at 1.6 microns. Presently the most comprehensive, long-term information on surface albedo and temperature over snow- and ice-covered surfaces comes from the Advanced Very High Resolution Radiometer (AVHRR) sensor, which has been providing imagery since July 1981. The earlier AVHRR sensors (e.g. AVHRR/2) did not, however, have a channel designed for discriminating clouds from snow, such as the 1.6 micron channel available on the more recent AVHRR/3 or the MODIS sensors. In the absence of the 1.6 micron channel, the AVHRR Polar Pathfinder (APP) product performs cloud detection using a combination of time-series analysis and multispectral threshold tests based on the satellite's measuring channels to produce a cloud mask. The method has been found to work reasonably well over sea ice, but not so well over the ice sheets. Thus, improving the cloud mask in the APP dataset would be extremely helpful toward increasing the accuracy of the albedo and temperature retrievals, as well as extending the time-series of albedo and temperature retrievals from the more recent sensors back to the historical ones.
In this work, we use data mining methods to construct a model of MODIS channel 6 as a function of other channels that are common to both MODIS and AVHRR. The idea is to use the model to generate the equivalent of MODIS channel 6 for AVHRR as a function of the AVHRR equivalents to MODIS channels. We call this a Virtual Sensor because it predicts unmeasured spectra. The goal is to use this virtual channel 6 to yield a cloud mask superior to what is currently used in APP. Our results show that several data mining methods, such as multilayer perceptrons (MLPs), ensemble methods (e.g., bagging), and kernel methods (e.g., support vector machines), generate channel 6 for unseen MODIS images with high accuracy. Because the true channel 6 is not available for AVHRR images, we qualitatively assess the virtual channel 6 for several AVHRR images.
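The paper's models are MLPs, bagged ensembles, and SVMs; as a deliberately simpler stand-in to show the virtual-sensor idea, one can fit a least-squares regression from the shared channels to the missing one (the linear form and all names here are assumptions for illustration, not the paper's method):

```python
import numpy as np

def fit_virtual_channel(X, y):
    """Least-squares fit predicting an unmeasured channel y from measured channels X."""
    A = np.column_stack([X, np.ones(len(X))])   # design matrix with intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_virtual_channel(coef, X):
    """Apply the fitted coefficients to new measurements (e.g. AVHRR equivalents)."""
    A = np.column_stack([X, np.ones(len(X))])
    return A @ coef
```

A nonlinear learner (MLP, bagging, SVM regression) would be trained and applied in exactly the same fit/predict pattern, using MODIS images where channel 6 is known as training data.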
An AVHRR Cloud Classification Database Typed by Experts
1993-10-01
analysis. Naval Research Laboratory, Monterey, CA. 110 pp. Gallaudet, Timothy C. and James J. Simpson, 1991: Automated cloud screening of AVHRR imagery...1987) and Saunders and Kriebel (1988a,b) have used threshold techniques to classify clouds. Gallaudet and Simpson (1991) have used split-and-merge
Huang, Lei; Kang, Wenjun; Bartom, Elizabeth; Onel, Kenan; Volchenboum, Samuel; Andrade, Jorge
2015-01-01
Whole exome sequencing has facilitated the discovery of causal genetic variants associated with human diseases at deep coverage and low cost. In particular, the detection of somatic mutations from tumor/normal pairs has provided insights into the cancer genome. Although there is an abundance of publicly available software for the detection of germline and somatic variants, concordance is generally limited among variant callers and alignment algorithms. Successful integration of variants detected by multiple methods requires in-depth knowledge of the software, access to high-performance computing resources, and advanced programming techniques. We present ExScalibur, a set of fully automated, highly scalable and modular pipelines for whole exome data analysis. The suite integrates multiple alignment and variant calling algorithms for the accurate detection of germline and somatic mutations with close to 99% sensitivity and specificity. ExScalibur implements streamlined execution of analytical modules, real-time monitoring of pipeline progress, robust handling of errors and intuitive documentation that allows for increased reproducibility and sharing of results and workflows. It runs on local computers, high-performance computing clusters and cloud environments. In addition, we provide a data analysis report utility to facilitate visualization of the results that offers interactive exploration of quality control files, read alignment and variant calls, assisting downstream customization of potential disease-causing mutations. ExScalibur is open-source and is also available as a public image on Amazon cloud. PMID:26271043
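The abstract does not detail how variants from multiple callers are integrated; one common approach, shown here as a hypothetical sketch (the variant-key format and function names are assumptions, not ExScalibur's implementation), is consensus filtering by minimum caller support:

```python
from collections import Counter

def consensus_variants(callsets, min_support=2):
    """Keep variants reported by at least `min_support` independent callers.

    callsets: list of per-caller variant collections, each variant encoded as a
    hashable key, e.g. "chrom:pos:ref>alt".
    """
    counts = Counter(v for calls in callsets for v in set(calls))  # set() dedups within a caller
    return {v for v, n in counts.items() if n >= min_support}
```

Requiring agreement among callers trades some sensitivity for precision; pipelines typically expose `min_support` as a tunable parameter.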
A Cloud-Based Architecture for Smart Video Surveillance
NASA Astrophysics Data System (ADS)
Valentín, L.; Serrano, S. A.; Oves García, R.; Andrade, A.; Palacios-Alonso, M. A.; Sucar, L. Enrique
2017-09-01
Turning a city into a smart city has attracted considerable attention. A smart city can be seen as a city that uses digital technology not only to improve the quality of people's lives, but also to have a positive impact on the environment while offering efficient and easy-to-use services. A fundamental aspect of a smart city is people's safety and welfare; a good security system therefore becomes a necessity, because it allows potential risk situations to be detected and identified so that appropriate decisions can be taken to help people or even prevent criminal acts. In this paper we present an architecture for automated video surveillance based on the cloud computing schema. It acquires a video stream from a set of cameras connected to the network, processes that information, automatically detects, labels and highlights security-relevant events, stores the information, and provides situational awareness in order to minimize the response time needed to take the appropriate action.
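The abstract does not describe its event-detection algorithms; as an illustrative placeholder for the simplest building block of such a pipeline (all names and the threshold are assumptions), a frame-differencing motion mask might look like:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose grayscale intensity changed by more than `threshold` between frames."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))  # signed arithmetic avoids uint8 wrap-around
    return diff > threshold
```

In a cloud deployment, per-camera workers would compute masks like this on incoming streams and forward flagged regions to heavier classification stages.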
BioBlend: automating pipeline analyses within Galaxy and CloudMan.
Sloggett, Clare; Goonasekera, Nuwan; Afgan, Enis
2013-07-01
We present BioBlend, a unified API in a high-level language (Python) that wraps the functionality of Galaxy and CloudMan APIs. BioBlend makes it easy for bioinformaticians to automate end-to-end large data analysis, from scratch, in a way that is highly accessible to collaborators, by allowing them to both provide the required infrastructure and automate complex analyses over large datasets within the familiar Galaxy environment. http://bioblend.readthedocs.org/. Automated installation of BioBlend is available via PyPI (e.g. pip install bioblend). Alternatively, the source code is available from the GitHub repository (https://github.com/afgane/bioblend) under the MIT open source license. The library has been tested and is working on Linux, Macintosh and Windows-based systems.
NASA Astrophysics Data System (ADS)
Bitsakis, Theodoros; González-Lópezlira, R. A.; Bonfini, P.; Bruzual, G.; Maravelias, G.; Zaritsky, D.; Charlot, S.; Ramírez-Siordia, V. H.
2018-02-01
We present a new study of the spatial distribution and ages of the star clusters in the Small Magellanic Cloud (SMC). To detect and estimate the ages of the star clusters we rely on the new fully automated method developed by Bitsakis et al. Our code detects 1319 star clusters in the central 18 deg² of the SMC we surveyed (1108 of which have never been reported before). The age distribution of those clusters suggests enhanced cluster formation around 240 Myr ago. It also implies significant differences in the cluster distribution of the bar with respect to the rest of the galaxy, with the younger clusters being predominantly located in the bar. Having used the same setup, and data from the same surveys as for our previous study of the LMC, we are able to robustly compare the cluster properties between the two galaxies. Our results suggest that the bulk of the clusters in both galaxies were formed approximately 300 Myr ago, probably during a direct collision between the two galaxies. On the other hand, the locations of the young (≤50 Myr) clusters in both Magellanic Clouds, found where their bars join the H I arms, suggest that cluster formation in those regions is a result of internal dynamical processes. Finally, we discuss the potential causes of the apparent outside-in quenching of cluster formation that we observe in the SMC. Our findings are consistent with an evolutionary scheme where the interactions between the Magellanic Clouds constitute the major mechanism driving their overall evolution.
Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.
Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin
2015-08-08
Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data-driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised clustering methods is crucial in cases where the goal is to get fast insights into the data, or where labeled data is unavailable or costly to obtain. For this we propose and compare data-driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square-root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived, even from non-labelled data. This approach is applicable to different plant species with high accuracy.
The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
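The two simplex mappings mentioned in the abstract can be sketched directly; this is a minimal version, and the `eps` smoothing is an assumption added to handle empty histogram bins:

```python
import numpy as np

def clr_transform(p, eps=1e-12):
    """Aitchison centred log-ratio: maps a normalized histogram from the simplex into Euclidean space."""
    logp = np.log(p + eps)
    return logp - logp.mean(axis=-1, keepdims=True)  # subtract the log geometric mean

def sqrt_transform(p):
    """Square-root map: sends the simplex onto the positive quadrant of the unit sphere."""
    return np.sqrt(p)
```

After either transform, standard Euclidean-distance tools (k-means, hierarchical clustering) can be applied to the histogram features, which is the point the abstract makes.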
Near-Real-Time Cloud Auditing for Rapid Response
2013-10-01
cloud auditing, which provides timely evaluation results and rapid response, is the key to assuring the cloud. In this paper, we discuss security and...providers with possible automation of the audit, assertion, assessment, and assurance of their services. The Cloud Security Alliance (CSA [15]) was formed...monitoring tools, research literature, standards, and other resources related to IA (Information Assurance) metrics and IT auditing. In the following
Normalized-Difference Snow Index (NDSI)
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Riggs, George A.
2010-01-01
The Normalized-Difference Snow Index (NDSI) has a long history. The use of ratioing visible (VIS) and near-infrared (NIR) or short-wave infrared (SWIR) channels to separate snow and clouds was documented in the literature beginning in the mid-1970s. A considerable amount of work on this subject was conducted at, and published by, the Air Force Geophysics Laboratory (AFGL). The objective of the AFGL work was to discriminate snow cover from cloud cover using an automated algorithm to improve global cloud analyses. Later, automated methods that relied on the VIS/NIR ratio were refined substantially using satellite data. In this section we provide a brief history of the use of the NDSI for mapping snow cover.
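The NDSI itself is a simple band ratio. A minimal sketch follows; the band roles and the 0.4 snow threshold are the commonly cited conventions for this index, not details stated in the text above:

```python
def ndsi(vis, swir):
    """Normalized-Difference Snow Index from a visible/green reflectance and a
    shortwave-infrared (~1.6 micron) reflectance."""
    return (vis - swir) / (vis + swir)

# Snow is bright in the visible but dark at ~1.6 microns, so it scores high;
# most clouds remain reflective in the SWIR and score much lower.
SNOW_THRESHOLD = 0.4  # commonly cited cutoff for flagging snow
```

For example, a bright-visible, dark-SWIR pixel such as `ndsi(0.8, 0.1)` lands well above the threshold, while a cloud-like pixel such as `ndsi(0.7, 0.65)` falls well below it.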
CloudStackProjectsNContributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Roy
2017-07-12
Collection of applications and cloud templates. The project is currently based on www.packer.io for automation, the www.github.com/boxcutter templates, and the www.github.com/csd-dev-tools/ClockworkVMs application for wrapping both of the above for easy creation of virtual systems. It will in future also contain cloud templates tuned for various services, applications and purposes.
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building up strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial requirements, and the influence of weather conditions and topographical cover, which can be overcome by means of integrated airborne LiDAR and very-high-resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and LiDAR point cloud datasets. The performance of the developed algorithms shows promising results as an automated and cost-effective approach to estimating and delineating 3D information of urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
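The NDVI step of the vegetation-extraction scheme can be sketched as follows; the 0.3 cutoff and function names are illustrative assumptions, and the study's shadow-index step is omitted here:

```python
import numpy as np

def vegetation_mask(red, nir, ndvi_threshold=0.3):
    """Boolean vegetation mask from red and near-infrared reflectance via NDVI."""
    ndvi = (nir - red) / (nir + red + 1e-12)  # epsilon guards against zero division
    return ndvi > ndvi_threshold
```

In the paper's workflow, the resulting mask would be intersected with a shadow-free mask and then combined with the LiDAR point cloud to recover per-tree 3D structure.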
NASA Astrophysics Data System (ADS)
Ma, Hongchao; Cai, Zhan; Zhang, Liang
2018-01-01
This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also makes a tentative exploration of the application of transfer learning theory to point cloud filtering, which to the authors' knowledge has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
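The 19 features are not enumerated in the abstract; as a hypothetical sketch, three representative height-based neighborhood features (the radius, the feature choices, and all names are assumptions) could be computed like this:

```python
import numpy as np

def height_features(points, radius=5.0):
    """Per-point height features within a cylindrical (xy) neighbourhood.

    points: (N, 3) array of x, y, z coordinates.
    Returns an (N, 3) feature matrix.
    """
    feats = []
    for p in points:
        d = np.linalg.norm(points[:, :2] - p[:2], axis=1)  # horizontal distances
        z = points[d <= radius][:, 2]                      # neighbour heights
        feats.append([p[2] - z.min(),   # height above lowest neighbour (large for non-ground)
                      z.max() - z.min(),  # local height range
                      z.std()])           # local roughness
    return np.array(feats)
```

A feature matrix like this could then be fed to an off-the-shelf classifier (e.g. a random forest) labeled with ground/non-ground, which is the filtering setup the paper describes.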
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-01-01
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847
Estimating Water Levels with Google Earth Engine
NASA Astrophysics Data System (ADS)
Lucero, E.; Russo, T. A.; Zentner, M.; May, J.; Nguy-Robertson, A. L.
2016-12-01
Reservoirs serve multiple functions and are vital for storage, electricity generation, and flood control. For many areas, traditional ground-based reservoir measurements may not be available or data dissemination may be problematic. Consistent monitoring of reservoir levels in data-poor areas can be achieved through remote sensing, providing information to researchers and the international community. Estimates of trends and relative reservoir volume can be used to identify water supply vulnerability, anticipate low power generation, and predict flood risk. Image processing with automated cloud computing provides opportunities to study multiple geographic areas in near real-time. We demonstrate the prediction capability of a cloud environment for identifying water trends at reservoirs in the US, and then apply the method to data-poor areas in North Korea, Iran, Azerbaijan, Zambia, and India. The Google Earth Engine cloud platform hosts remote sensing data and can be used to automate reservoir level estimation with multispectral imagery. We combine automated cloud-based analysis from Landsat image classification to identify reservoir surface area trends and radar altimetry to identify reservoir level trends. The study estimates water level trends using three years of data from four domestic reservoirs to validate the remote sensing method, and five foreign reservoirs to demonstrate the method application. We report correlations between ground-based reservoir level measurements in the US and our remote sensing methods, and correlations between the cloud analysis and altimetry data for reservoirs in data-poor areas. The availability of regular satellite imagery and an automated, near real-time application method provides the necessary datasets for further temporal analysis, reservoir modeling, and flood forecasting. 
All statements of fact, analysis, or opinion are those of the author and do not reflect the official policy or position of the Department of Defense or any of its components or the U.S. Government.
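The abstract above does not state the exact classification rule used on the Landsat imagery; a common stand-in for water mapping, shown here purely as an assumption, is NDWI thresholding followed by pixel counting to estimate surface area:

```python
import numpy as np

def water_surface_area(green, nir, pixel_area=900.0, threshold=0.0):
    """Estimate water surface area (m^2) from green and NIR reflectance rasters.

    pixel_area=900.0 corresponds to one 30 m x 30 m Landsat pixel.
    """
    ndwi = (green - nir) / (green + nir + 1e-12)  # water is bright in green, dark in NIR
    return float((ndwi > threshold).sum() * pixel_area)
```

Tracking this area estimate over a time series of scenes, alongside radar-altimetry levels, mirrors the trend analysis the study performs for each reservoir.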
Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.
Cloud and aerosol studies using combined CPL and MAS data
NASA Astrophysics Data System (ADS)
Vaughan, Mark A.; Rodier, Sharon; Hu, Yongxiang; McGill, Matthew J.; Holz, Robert E.
2004-11-01
Current uncertainties in the role of aerosols and clouds in the Earth's climate system limit our ability to model the climate system and predict climate change. These limitations are due primarily to the difficulty of adequately measuring aerosols and clouds on a global scale. The A-train satellites (Aqua, CALIPSO, CloudSat, PARASOL, and Aura) will provide an unprecedented opportunity to address these uncertainties. The various active and passive sensors of the A-train will use a variety of measurement techniques to provide comprehensive observations of the multi-dimensional properties of clouds and aerosols. However, fully achieving the potential of this ensemble requires a robust data analysis framework to optimally and efficiently map these individual measurements into a comprehensive set of cloud and aerosol physical properties. In this work we introduce the Multi-Instrument Data Analysis and Synthesis (MIDAS) project, whose goal is to develop a suite of physically sound and computationally efficient algorithms that will combine active and passive remote sensing data in order to produce improved assessments of aerosol and cloud radiative and microphysical properties. These algorithms include (a) the development of an intelligent feature detection algorithm that combines inputs from both active and passive sensors, and (b) identifying recognizable multi-instrument signatures related to aerosol and cloud type derived from clusters of image pixels and the associated vertical profile information. Classification of these signatures will lead to the automated identification of aerosol and cloud types. Testing of these new algorithms is done using currently existing and readily available active and passive measurements from the Cloud Physics Lidar and the MODIS Airborne Simulator, which simulate, respectively, the CALIPSO and MODIS A-train instruments.
Progress in Near Real-Time Volcanic Cloud Observations Using Satellite UV Instruments
NASA Astrophysics Data System (ADS)
Krotkov, N. A.; Yang, K.; Vicente, G.; Hughes, E. J.; Carn, S. A.; Krueger, A. J.
2011-12-01
Volcanic clouds from explosive eruptions can wreak havoc in many parts of the world, as exemplified by the 2010 eruption at the Eyjafjöll volcano in Iceland, which caused widespread disruption to air traffic and resulted in economic impacts across the globe. A suite of satellite-based systems offers the most effective means to monitor active volcanoes and to track the movement of volcanic clouds globally, providing critical information for aviation hazard mitigation. Satellite UV sensors, as part of this suite, have a long history of making unique near-real-time (NRT) measurements of sulfur dioxide (SO2) and ash (aerosol index) in volcanic clouds to supplement operational volcanic ash monitoring. Recently, a NASA application project has shown that the use of NRT (i.e., not older than 3 h) Aura/OMI satellite data produces a marked improvement in volcanic cloud detection using SO2 combined with the Aerosol Index (AI) as a marker for ash. An operational online NRT OMI AI and SO2 image and data product distribution system was developed in collaboration with the NOAA Office of Satellite Data Processing and Distribution. Automated volcanic eruption alarms and the production of volcanic cloud subsets for multiple regions are provided through the NOAA website. The data provide valuable information in support of the U.S. Federal Aviation Administration goal of a safe and efficient National Air Space. In this presentation, we will highlight the advantages of UV techniques and describe the advances in volcanic SO2 plume height estimation and enhanced volcanic ash detection using hyper-spectral UV measurements, illustrated with Aura/OMI observations of recent eruptions. We will share our plan to provide a near-real-time volcanic cloud monitoring service using the Ozone Mapping and Profiler Suite (OMPS) on the Joint Polar Satellite System (JPSS).
Feature detection in satellite images using neural network technology
NASA Technical Reports Server (NTRS)
Augusteijn, Marijke F.; Dimalanta, Arturo S.
1992-01-01
A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
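The second-order gray-level statistics referred to above are co-occurrence (Haralick-style) texture features; a minimal sketch follows (the offset, normalization, and the contrast feature follow the usual conventions for this method rather than details given in the text):

```python
import numpy as np

def cooccurrence_matrix(img, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for pixel pairs at offset (dx, dy).

    img: 2D integer array with values in [0, levels).
    """
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1  # count the grey-level pair
    return M / M.sum()                               # normalize counts to probabilities

def glcm_contrast(M):
    """Contrast feature: weights co-occurrences by squared grey-level difference."""
    i, j = np.indices(M.shape)
    return float(((i - j) ** 2 * M).sum())
```

Several such features (contrast, energy, correlation, ...) computed at a few offsets per image segment would form the feature vectors that the paper's cascade-correlation network classifies.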
The future point-of-care detection of disease and its data capture and handling.
Lopez-Barbosa, Natalia; Gamarra, Jorge D; Osma, Johann F
2016-04-01
Point-of-care detection is a widely studied area that attracts effort and interest from a large number of fields and companies. However, there is also increased interest from the general public in this type of device, which has driven enormous changes in the design and conception of these developments and the way data is handled. Therefore, future point-of-care detection has to include communication with front-end technology, such as smartphones and networks, automation of manufacture, and the incorporation of concepts like the Internet of Things (IoT) and cloud computing. Three key examples, based on different sensing technology, are analyzed in detail on the basis of these items to highlight a route for the future design and development of point-of-care detection devices and their data capture and handling.
Comparisons of GLM and LMA Observations
NASA Astrophysics Data System (ADS)
Thomas, R. J.; Krehbiel, P. R.; Rison, W.; Stanley, M. A.; Attanasio, A.
2017-12-01
Observations from 3-dimensional VHF lightning mapping arrays (LMAs) provide a valuable basis for evaluating the spatial accuracy and detection efficiencies of observations from the recently launched, optical-based Geosynchronous Lightning Mapper (GLM). In this presentation, we describe the results of comparing the LMA and GLM observations. First, the observations are compared spatially and temporally at the individual event (pixel) level for sets of individual discharges. For LMA networks in Florida, Colorado, and Oklahoma, the GLM observations are well correlated time-wise with LMA observations but are systematically offset by one to two pixels (10 to 15 or 20 km) in a southwesterly direction from the actual lightning activity. The graphical comparisons show a similar location uncertainty depending on the altitude at which the scattered light is emitted from the parent cloud, due to being observed at slant ranges. Detection efficiencies (DEs) can be accurately determined graphically for intervals where individual flashes in a storm are resolved time-wise, and DEs and false alarm rates can be automated using flash-sorting algorithms for overall and/or larger storms. This can be done as a function of flash size and duration, and generally shows high detection rates for larger flashes. Preliminary results during the May 1, 2017 ER-2 overflight of Colorado storms indicate decreased detection efficiency when the storm is obscured by an overlying cloud layer.
A hierarchical methodology for urban facade parsing from TLS point clouds
NASA Astrophysics Data System (ADS)
Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao
2017-01-01
The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares-fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.
Space station automation study-satellite servicing, volume 2
NASA Technical Reports Server (NTRS)
Meissinger, H. F.
1984-01-01
Technology requirements for automated satellite servicing operations aboard the NASA space station were studied. Three major tasks were addressed: (1) servicing requirements (satellite and space station elements) and the role of automation; (2) assessment of automation technology; and (3) conceptual design of servicing facilities on the space station. It was found that many servicing functions could benefit from automation support, and that certain research and development activities on automation technologies for servicing should start as soon as possible. Also, some advanced automation developments for orbital servicing could be effectively applied to U.S. industrial ground-based operations.
NASA Astrophysics Data System (ADS)
Heus, Thijs; Jonker, Harm J. J.; van den Akker, Harry E. A.; Griffith, Eric J.; Koutek, Michal; Post, Frits H.
2009-03-01
In this study, a new method is developed to investigate the entire life cycle of shallow cumuli in large eddy simulations. Although trained observers have no problem in distinguishing the different life stages of a cloud, this process proves difficult to automate, because cloud-splitting and cloud-merging events complicate the distinction between a single system divided in several cloudy parts and two independent systems that collided. Because human perception is well equipped to capture and make sense of these time-dependent three-dimensional features, a combination of automated constraints and human inspection in a three-dimensional virtual reality environment is used to select clouds that are exemplary in their behavior throughout their entire life span. Three specific cases (ARM, BOMEX, and BOMEX without large-scale forcings) are analyzed in this way, and the considerable number of selected clouds warrants reliable statistics of cloud properties conditioned on the phase in their life cycle. The most dominant feature in this statistical life cycle analysis is the pulsating growth that is present throughout the entire lifetime of the cloud, independent of the case and of the large-scale forcings. The pulses are a self-sustained phenomenon, driven by a balance between buoyancy and horizontal convergence of dry air. The convective inhibition just above the cloud base plays a crucial role as a barrier for the cloud to overcome in its infancy stage, and as a buffer region later on, ensuring a steady supply of buoyancy into the cloud.
Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis
NASA Astrophysics Data System (ADS)
Awrangjeb, M.; Fraser, C. S.; Lu, G.
2015-08-01
Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
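The connected component analysis at the core of the change detection step can be illustrated with a small grid-based sketch. The BFS labelling, the `min_area` threshold, and the function name are illustrative assumptions; the authors' method additionally filters components on width and height.

```python
from collections import deque

def building_parts(mask, min_area=4):
    """Label 4-connected components in a binary change mask and keep
    those whose area passes a simple threshold (a toy stand-in for the
    paper's connected component analysis)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    parts = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:  # keep plausible building parts
                    parts.append(comp)
    return parts
```

Small components are discarded as noise rather than reported as demolished or new building parts.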
Imaging Systems for Size Measurements of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.
2017-01-01
The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments where their heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations and equipment costs presented significant challenges to the project and a decision was made to develop our own size characterization systems. The size characterization systems consist of two automated image systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for object image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed via an alpha-shape algorithm applied to the point clouds. The average cross-sectional area is also computed for 3D objects.
Both imaging systems have automated size measurements (image acquisition and image processing), driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, automated size measurement reduces potential fragment damage/mishandling and improves accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised. For example, an additional view was added to the 2D imaging system to capture the height of the 2D object. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation. The experiences and challenges are also shared.
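The longest-dimension measurement described above can be sketched without a full convex hull: the maximum pairwise distance in a point cloud gives the same answer (its endpoints necessarily lie on the hull), just more slowly. The brute-force approach and the function name are assumptions of this sketch, not the project's implementation.

```python
import math
from itertools import combinations

def characteristic_length(points):
    """Longest dimension of a fragment point cloud: the maximum
    pairwise distance (brute force O(n^2); a convex hull makes this
    faster for large clouds, since the extremes lie on the hull)."""
    return max(math.dist(p, q) for p, q in combinations(points, 2))
```

For real clouds of thousands of points, the hull-based route used by the project is the practical choice; this sketch only pins down what is being computed.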
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.
1978-01-01
The cloud-rise preprocessor and multilayer diffusion computer programs were used by NASA in predicting concentrations and dosages downwind from normal and abnormal launches of rocket vehicles. These programs incorporated: (1) the latest data for the heat content and chemistry of rocket exhaust clouds; (2) provision for the automated calculation of surface water pH due to deposition of HCl from precipitation scavenging; (3) provision for automated calculation of concentration and dosage parameters at any level within the vertical grid for which meteorological inputs have been specified; and (4) provision for execution of multiple cases of meteorological data. Procedures used to automatically calculate wind direction shear in a layer were updated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center propose a joint project. The goals are to enable scientific workflows of stakeholders to run on multiple cloud resources by use of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) High-Throughput Fabric Virtualization. This is a matching fund project in which Fermilab and KISTI will contribute equal resources.
Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan
2016-11-01
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of the otherwise manual assessment process. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the virtual machines are then run privately by the benchmark administrators to objectively compare performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. 
Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
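The object-based random forests classification described above can be caricatured with an ensemble of decision stumps over per-object features such as height and multispectral intensity. Everything here, including the feature layout, the stump ensemble as a stand-in for a full random forest, and the function names, is an assumption of this toy sketch.

```python
import random

def train_forest(X, y, n_trees=25, seed=0):
    """Toy stand-in for a random forests classifier: each 'tree' is a
    decision stump on a randomly chosen feature with a randomly chosen
    threshold from the training data; prediction is a majority vote.
    A real study would use a full random forest implementation."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        f = rng.randrange(len(X[0]))                 # random feature
        t = rng.choice([row[f] for row in X])        # random split value
        left = [lab for row, lab in zip(X, y) if row[f] <= t]
        right = [lab for row, lab in zip(X, y) if row[f] > t]
        lmaj = max(set(left or y), key=(left or y).count)
        rmaj = max(set(right or y), key=(right or y).count)
        stumps.append((f, t, lmaj, rmaj))
    return stumps

def predict(stumps, row):
    """Majority vote over the stump ensemble."""
    votes = [l if row[f] <= t else r for f, t, l, r in stumps]
    return max(set(votes), key=votes.count)
```

With features like (object height, channel intensity), such a vote separates elevated classes (building, tree) from ground-level classes, which is where the multispectral intensities help most in the study.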
Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data
NASA Astrophysics Data System (ADS)
Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.
2017-09-01
The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting, which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85%, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94% and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm². Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
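The voxel-based segmentation route can be sketched as quantizing points to a grid and growing segments over neighbouring occupied voxels. The voxel size, 26-neighbourhood, and function name are assumptions of this sketch; the ear/non-ear classification step is not shown.

```python
from collections import deque

def voxel_segments(points, voxel=0.02):
    """Voxel-based segmentation sketch: quantize 3D points to a voxel
    grid, then grow segments over the 26-neighbourhood of occupied
    voxels with a BFS."""
    occ = {}
    for p in points:
        occ.setdefault(tuple(int(c // voxel) for c in p), []).append(p)
    seen, segments = set(), []
    for v in occ:
        if v in seen:
            continue
        seg, q = [], deque([v])
        seen.add(v)
        while q:
            x, y, z = q.popleft()
            seg.extend(occ[(x, y, z)])
            for dx in (-1, 0, 1):            # scan all 26 neighbours
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (x + dx, y + dy, z + dz)
                        if n in occ and n not in seen:
                            seen.add(n)
                            q.append(n)
        segments.append(seg)
    return segments
```

Each returned segment is a candidate ear (or leaf/stem) to be classified in the next step.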
Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Popescu, S. C.; Putman, E.
2017-12-01
Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 - 753 days. 
Pine and oak structural loss rates were characterized by estimating the amount of volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that large pine snags exhibited more rapid structural loss in comparison to medium-sized oak snags in this study.
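The voxel-based volume idea underlying TreeVolX can be reduced to its simplest form: count occupied voxels and multiply by the voxel volume. This sketch omits the paper's contour interpolation and interior filling, so hollow point clouds would be underestimated; the function name and voxel size are assumptions.

```python
def voxel_volume(points, voxel=0.05):
    """Voxel-counting volume estimate: number of occupied voxels times
    the volume of one voxel. TreeVolX additionally interpolates
    contours and fills interiors to build solid tree models."""
    occupied = {tuple(int(c // voxel) for c in p) for p in points}
    return len(occupied) * voxel ** 3
```

Differencing two such voxel sets from successive scans is, in spirit, the change detection step used to quantify structural loss.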
MPL-Net data products available at co-located AERONET sites and field experiment locations
NASA Astrophysics Data System (ADS)
Welton, E. J.; Campbell, J. R.; Berkoff, T. A.
2002-05-01
Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project was started at NASA Goddard Space Flight Center in 2000. The new project, MPL-Net, is a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined together to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair of all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at sites and that the data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational. Level 1.0 algorithms are used to process the raw MPL data into the form of range corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms have also been developed and are now operational. Level 1.5 algorithms are used to calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net web-site.
An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.
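The level 1.0 step described above, turning raw profiles into range-corrected uncalibrated signals, can be sketched in one expression. The bin size and function name are assumptions, and real MPL processing also applies afterpulse, overlap, and deadtime corrections, which this sketch omits.

```python
def range_corrected(raw, background, bin_size_m=75.0):
    """Level 1.0-style sketch: subtract the background from each raw
    profile bin and multiply by range squared to form the
    (uncalibrated) range-corrected lidar signal."""
    return [
        (p - background) * ((i + 0.5) * bin_size_m) ** 2  # bin-centre range
        for i, p in enumerate(raw)
    ]
```

Calibration against the AERONET optical depth constraint then happens downstream, in the level 1.5 algorithms.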
Door recognition in cluttered building interiors using imagery and lidar data
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Martínez-Sánchez, J.; Lagüela, S.; Armesto, J.; Khoshelham, K.
2014-06-01
Indoor building reconstruction is an active research topic due to the wide range of applications to which it can be applied, from architecture and furniture design, to movie and video game editing, or even crime scene investigation. Among the constructive elements defining the inside of a building, doors are important entities in applications like routing and navigation, and their automated recognition is advantageous e.g. in case of large multi-storey buildings with many office rooms. The inherent complexity of automating the recognition process is increased by the presence of clutter and occlusions, which are difficult to avoid in indoor scenes. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors using information acquired in the form of point clouds and images. The methodology goes in depth with door detection and labelling as either opened, closed or furniture (false positive).
Cloud identification using genetic algorithms and massively parallel computation
NASA Technical Reports Server (NTRS)
Buckles, Bill P.; Petry, Frederick E.
1996-01-01
As a Guest Computational Investigator under the NASA administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain.
An extensive user's manual was written and distributed nationwide to scientists whose work might benefit from its availability. Several papers, including two journal articles, were produced.
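The subpopulation-and-migration scheme studied in this grant can be illustrated with a small island-model GA on the one-max problem. All parameters (island count, tournament size, migration interval) and the test function are assumptions of this sketch, not the grant's MasPar implementation.

```python
import random

def island_ga(n_islands=4, pop=20, bits=16, gens=40, migrate_every=10, seed=1):
    """Island-model GA sketch (one-max): independent subpopulations
    evolve with tournament selection, one-point crossover and bit-flip
    mutation; every few generations each island's best individual
    migrates to the next island in a ring."""
    rng = random.Random(seed)
    fit = sum                                   # one-max fitness: count of 1 bits
    islands = [[[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop)]
               for _ in range(n_islands)]
    for g in range(1, gens + 1):
        for isl in islands:
            nxt = []
            for _ in range(pop):
                p1 = max(rng.sample(isl, 3), key=fit)   # tournament selection
                p2 = max(rng.sample(isl, 3), key=fit)
                cut = rng.randrange(1, bits)            # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.1:                  # bit-flip mutation
                    child[rng.randrange(bits)] ^= 1
                nxt.append(child)
            isl[:] = nxt
        if g % migrate_every == 0:                      # ring migration of bests
            bests = [max(isl, key=fit)[:] for isl in islands]
            for i, isl in enumerate(islands):
                isl[rng.randrange(pop)] = bests[i - 1]
    return max(fit(ind) for isl in islands for ind in isl)
```

On a SIMD machine like the MasPar, each island maps naturally onto a block of processing elements, with migration as the only inter-block communication.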
Deep Learning for Discovery of Atmospheric Mountain Waves in MODIS and GPS Data
NASA Astrophysics Data System (ADS)
Pankratius, V.; Li, J. D.; Rude, C. M.; Gowanlock, M.; Herring, T.
2017-12-01
Airflow over mountains can produce gravity waves, called lee waves, which can generate atmospheric turbulence. Since this turbulence poses dangers to aviation, it is critical to identify such regions reliably in an automated fashion. This work leverages two sources of data to go beyond an ad-hoc human visual approach for such identification: MODIS imagery containing cloud patterns formed by lee waves, and patterns in GPS signals resulting from the transmission through atmospheric turbulence due to lee waves. We demonstrate a novel machine learning approach that fuses these two data types to detect atmospheric turbulence associated with lee waves. A convolutional neural network is trained on MODIS tile images to automatically classify the lee wave cloud patterns with 96% correct classifications on a validation set of 20,000 MODIS 64x64 tiles over a test region in the Sierra Nevada Mountains. Signals from GPS stations of the Plate Boundary Observatory are used for feature extraction related to lee waves, in order to improve the confidence of a detection in the MODIS imagery at a given position. To our knowledge, this is the first technique to combine these images and time series data types to improve the spatial and temporal resolutions for large-scale measurements of lee wave formations. First results of this work show great potential for improving weather condition monitoring, hazard and cloud pattern detection, as well as GPS navigation uncertainties. We acknowledge support from NASA AIST NNX15AG84G (PI Pankratius), NASA NNX14AQ03G (PI Herring), and NSF ACI-1442997 (PI Pankratius).
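The basic operation inside the CNN used to classify MODIS tiles is a 2D convolution, which can be written out directly. This sketch shows a single hand-picked kernel responding to banded, wave-like patterns; a real classifier stacks many learned kernels with nonlinearities, and nothing here reflects the paper's actual network.

```python
def conv2d(img, kernel):
    """Single valid-mode 2D convolution (cross-correlation): slide the
    kernel over the image and sum elementwise products at each
    position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(img[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```

A vertical-edge kernel such as [[-1, 1], [-1, 1]] produces strong alternating responses on striped cloud bands, which is the kind of feature the first CNN layer learns to detect.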
Automated Studies of Continuing Current in Lightning Flashes
NASA Astrophysics Data System (ADS)
Martinez-Claros, Jose
Continuing current (CC) is a continuous luminosity in the lightning channel that lasts longer than 10 ms following a lightning return stroke to ground. Lightning flashes containing CC are associated with direct damage to power lines and are thought to be responsible for causing lightning-induced forest fires. The development of an algorithm that automates continuing current detection by combining NLDN (National Lightning Detection Network) and LEFA (Langmuir Electric Field Array) datasets for CG flashes will be discussed. The algorithm was applied to thousands of cloud-to-ground (CG) flashes within 40 km of Langmuir Lab, New Mexico measured during the 2013 monsoon season. It counts the number of flashes in a single minute of data and the number of return strokes of an individual lightning flash; records the time and location of each return stroke; performs peak analysis on E-field data, and uses the slope of interstroke interval (ISI) E-field data fits to recognize whether continuing current (CC) exists within the interval. Following CC detection, duration and magnitude are measured. The longest observed CC in 5588 flashes was 631 ms. The performance of the algorithm (vs. human judgement) was checked on 100 flashes. At best, the reported algorithm is "correct" 80% of the time, where correct means that multiple stations agree with each other and with a human on both the presence and duration of CC. Of the 100 flashes that were validated against human judgement, 62% were hybrid. Automated analysis detects the first but misses the second return stroke in many cases where the second return stroke is followed by long CC. This problem is also present in human interpretation of field change records.
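The slope test on interstroke-interval E-field fits can be sketched with an ordinary least-squares line. The slope threshold, the function name, and the shape of the test are illustrative assumptions, not the thesis's calibrated values; only the 10 ms minimum duration comes from the CC definition above.

```python
def has_continuing_current(times_ms, efield,
                           min_duration_ms=10.0, slope_thresh=0.5):
    """Sketch of the slope test: fit a least-squares line to E-field
    samples in an interstroke interval; a sustained slope above an
    (illustrative) threshold over more than 10 ms is flagged as CC."""
    n = len(times_ms)
    if times_ms[-1] - times_ms[0] <= min_duration_ms:
        return False                      # too short to qualify as CC
    mt = sum(times_ms) / n
    me = sum(efield) / n
    slope = (sum((t - mt) * (e - me) for t, e in zip(times_ms, efield))
             / sum((t - mt) ** 2 for t in times_ms))
    return abs(slope) > slope_thresh
```

A steadily ramping field between strokes then reads as continuing current, while a flat interval does not.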
NASA Astrophysics Data System (ADS)
Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying
2018-04-01
Landsat 8 OLI, though enhanced over prior Landsat instruments, can achieve very high cloud detection precision, but cloud shadow detection still faces great challenges. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods and has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors of large area cloud shadow detection can be caused by errors in estimations of cloud height. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two respects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but uses a possible dynamic range from 200 m to 12,000 m. In this case, cloud shadow is not a specific location but a possible range. Further analysis was carried out in the possible range based on the spectrum to determine cloud shadow location. This effectively avoids cloud shadow leakage caused by errors in cloud height determination. (2) Object-based and pixel spectral analyses are combined to detect cloud shadows, enabling cloud shadow detection at both the object and pixel scales. Based on the analysis of the spectral differences between the cloud shadow and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of cloud shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against manual interpretation.
The results of these experiments indicated that this method can identify cloud shadows in different regions with correct accuracy exceeding 80%; approximately 5% of the areas were wrongly identified, and approximately 10% of the cloud shadow areas were missed. The accuracy of this method is clearly higher than that of Fmask, whose correct accuracy is below 60% with a missed-detection rate of approximately 40%.
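The height-sweep geometry in step (1) can be sketched directly: instead of one height from brightness temperature, project the cloud along the solar direction for every candidate height in the 200 m to 12,000 m range. The sign convention, step size, and function name are assumptions of this sketch; the spectral refinement over the candidate track is not shown.

```python
import math

def shadow_candidates(cloud_xy, sun_elev_deg, sun_az_deg,
                      h_min=200.0, h_max=12000.0, step=200.0):
    """Sweep cloud-base heights and project the cloud position along
    the solar direction, producing a candidate shadow track to be
    refined spectrally (one simple planar sign convention assumed)."""
    x, y = cloud_xy
    d = 1.0 / math.tan(math.radians(sun_elev_deg))  # offset per metre of height
    out = []
    h = h_min
    while h <= h_max:
        off = h * d
        out.append((x - off * math.sin(math.radians(sun_az_deg)),
                    y - off * math.cos(math.radians(sun_az_deg))))
        h += step
    return out
```

Because the true height is unknown, the shadow is a range of positions rather than a point, exactly as the abstract describes.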
The State of Cloud-Based Biospecimen and Biobank Data Management Tools.
Paul, Shonali; Gade, Aditi; Mallipeddi, Sumani
2017-04-01
Biobanks are critical for collecting and managing high-quality biospecimens from donors with appropriate clinical annotation. The high-quality human biospecimens and associated data are required to better understand disease processes. Therefore, biobanks have become an important and essential resource for healthcare research and drug discovery. However, collecting and managing huge volumes of data (biospecimens and associated clinical data) necessitate that biobanks use appropriate data management solutions that can keep pace with the ever-changing requirements of research. To automate biobank data management, biobanks have been investing in traditional Laboratory Information Management Systems (LIMS). However, there are a myriad of challenges faced by biobanks in acquiring traditional LIMS. Traditional LIMS are cost-intensive and often lack the flexibility to accommodate changes in data sources and workflows. Cloud technology is emerging as an alternative that provides the opportunity to small and medium-sized biobanks to automate their operations in a cost-effective manner, even without IT personnel. Cloud-based solutions offer the advantage of heightened security, rapid scalability, dynamic allocation of services, and can facilitate collaboration between different research groups by using a shared environment on a "pay-as-you-go" basis. The benefits offered by cloud technology have resulted in the development of cloud-based data management solutions as an alternative to traditional on-premise software. After evaluating the advantages offered by cloud technology, several biobanks have started adopting cloud-based tools. Cloud-based tools provide biobanks with easy access to biospecimen data for real-time sharing with clinicians. Another major benefit realized by biobanks by implementing cloud-based applications is unlimited data storage on the cloud and automatic backups for protecting any data loss in the face of natural calamities.
Mobile phone imaging and cloud-based analysis for standardized malaria detection and reporting.
Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R
2016-06-27
Rapid diagnostic tests (RDTs) have been widely deployed in low-resource settings. These tests are typically read by visual inspection, and accurate record keeping and data aggregation remain a substantial challenge. A successful malaria elimination campaign will require new strategies that maximize the sensitivity of RDTs, reduce user error, and integrate results reporting tools. In this report, an unmodified mobile phone was used to photograph RDTs, which were subsequently uploaded into a globally accessible database, REDCap, and then analyzed three ways: with an automated image processing program, visual inspection, and a commercial lateral flow reader. The mobile phone image processing detected 20.6 malaria parasites/microliter of blood, compared to the commercial lateral flow reader which detected 64.4 parasites/microliter. Experienced observers visually identified positive malaria cases at 12.5 parasites/microliter, but encountered reporting errors and false negatives. Visual interpretation by inexperienced users resulted in only an 80.2% true negative rate, with substantial disagreement in the lower parasitemia range. We have demonstrated that combining a globally accessible database, such as REDCap, with mobile phone based imaging of RDTs provides objective, secure, automated data collection and result reporting. This simple combination of existing technologies would appear to be an attractive tool for malaria elimination campaigns.
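The core of the automated image processing step is estimating a test-line signal from an intensity profile across the RDT strip. This minimal sketch, with an invented function name and an edge-based background estimate, only illustrates the idea; the published pipeline includes registration, colour handling, and calibrated thresholds.

```python
def line_signal(profile, window=3):
    """Toy RDT readout: estimate background from the profile edges,
    then report the strongest background-subtracted deviation as the
    test-line signal (a dark line appears as a dip in brightness)."""
    background = (sum(profile[:window]) + sum(profile[-window:])) / (2 * window)
    return max(abs(v - background) for v in profile)
```

Comparing this signal to a calibrated cutoff is what turns a photographed strip into an objective positive/negative call.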
NASA Astrophysics Data System (ADS)
LIU, Q.; Lv, Q.; Klucik, R.; Chen, C.; Gallaher, D. W.; Grant, G.; Shang, L.
2016-12-01
Due to the high volume and complexity of satellite data, computer-aided tools for fast quality assessment and scientific discovery are indispensable for scientists in the era of Big Data. In this work, we have developed a framework for automated anomalous event detection in massive satellite data. The framework consists of a clustering-based anomaly detection algorithm and a cloud-based tool for interactive analysis of detected anomalies. The algorithm is unsupervised and requires no prior knowledge of the data (e.g., expected normal patterns or known anomalies). As such, it works for diverse data sets and performs well even in the presence of missing and noisy data. The cloud-based tool provides an intuitive mapping interface that allows users to interactively analyze anomalies using multiple features. As a whole, our framework can (1) identify outliers in a spatio-temporal context, (2) recognize and distinguish meaningful anomalous events from individual outliers, (3) rank those events based on "interestingness" (e.g., rareness or total number of outliers) defined by users, and (4) enable interactive querying, exploration, and analysis of those anomalous events. In this presentation, we will demonstrate the effectiveness and efficiency of our framework in detecting data quality issues and unusual natural events using two satellite datasets. The techniques and tools developed in this project are applicable to a diverse set of satellite data and will be made publicly available to scientists in early 2017.
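The abstract does not specify the clustering algorithm, but the framework's first three capabilities can be illustrated with a deliberately simple stand-in: flag per-cell outliers with a robust (median/MAD) z-score over time, connect spatially and temporally adjacent outliers into events, and rank events by size. All names, thresholds, and the toy data below are assumptions, not the authors' implementation:

```python
# Illustrative sketch of spatio-temporal anomaly grouping (not the authors'
# algorithm): robust per-cell outlier flags, then connected-event grouping.
import statistics

def outlier_cells(series_by_cell, z=3.5):
    """Return set of (cell, t) whose value deviates strongly from the cell's median."""
    flagged = set()
    for cell, series in series_by_cell.items():
        med = statistics.median(series)
        mad = statistics.median(abs(v - med) for v in series) or 1e-9
        for t, v in enumerate(series):
            if abs(v - med) / (1.4826 * mad) > z:
                flagged.add((cell, t))
    return flagged

def group_events(flagged):
    """Union adjacent (in space and time) outliers into events; rank by size."""
    def neighbors(p):
        (x, y), t = p
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dt in (-1, 0, 1):
                    q = ((x + dx, y + dy), t + dt)
                    if q != p and q in flagged:
                        yield q
    events, seen = [], set()
    for p in flagged:
        if p in seen:
            continue
        stack, event = [p], set()
        while stack:
            q = stack.pop()
            if q in event:
                continue
            event.add(q)
            stack.extend(neighbors(q))
        seen |= event
        events.append(event)
    return sorted(events, key=len, reverse=True)

# Two neighboring cells share a spike at t=5 -> one event of two outliers.
data = {(0, 0): [1.0] * 10, (0, 1): [1.0] * 10, (5, 5): [1.0] * 10}
data[(0, 0)][5] = 50.0
data[(0, 1)][5] = 50.0
events = group_events(outlier_cells(data))
print(len(events), len(events[0]))   # -> 1 2
```

Ranking here uses the total number of outliers per event, one of the "interestingness" criteria the abstract mentions.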
NASA Astrophysics Data System (ADS)
Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria
2015-04-01
Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete on geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe data-survey conditions, this fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. This methodology will allow clearer answers to the needs of rock mass evaluation, by mapping the structures present, which will considerably reduce the associated risks (investment, structural dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded into Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrop. Although comparable to the manually extracted parameters, their quality is inferior to that of parameters extracted from the TLS point cloud.
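The manual task being automated, measuring discontinuity attitude, essentially reduces to fitting planes to point cloud subsets and converting each plane normal to dip and dip direction. A minimal sketch, assuming z points up and y points north (the axis conventions and toy data are assumptions, not the authors' workflow):

```python
# Minimal sketch of the plane-fitting step behind automated discontinuity
# attitude extraction (assumes z up, y north; not the paper's implementation).
import numpy as np

def plane_attitude(points):
    """Fit a plane to Nx3 points; return (dip, dip_direction) in degrees."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the covariance eigenvector with smallest eigenvalue.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    n = vecs[:, 0]
    if n[2] < 0:                       # force an upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))  # plane's inclination from horizontal
    # The upward normal's horizontal projection points down-dip.
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir

# A plane dipping 45 degrees toward the east (dip direction 090):
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
zs = -xs                               # z falls as x (east) increases
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
dip, ddir = plane_attitude(pts)
print(round(dip), round(ddir))         # -> 45 90
```

Repeating this fit over point clusters segmented from the cloud yields the poles that would traditionally be plotted by hand on a Schmidt net.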
NASA Astrophysics Data System (ADS)
Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.
2016-02-01
Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNNs), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diverse data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible, hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with dropout- and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.
2013-01-01
Background: Besides the development of comprehensive tools for high-throughput 16S ribosomal RNA amplicon sequence analysis, there exists a growing need for protocols emphasizing alternative phylogenetic markers such as those representing eukaryotic organisms. Results: Here we introduce CloVR-ITS, an automated pipeline for comparative analysis of internal transcribed spacer (ITS) pyrosequences amplified from metagenomic DNA isolates and representing fungal species. This pipeline performs a variety of steps similar to those commonly used for 16S rRNA amplicon sequence analysis, including preprocessing for quality, chimera detection, clustering of sequences into operational taxonomic units (OTUs), taxonomic assignment (at class, order, family, genus, and species levels) and statistical analysis of sample groups of interest based on user-provided information. Using ITS amplicon pyrosequencing data from a previous human gastric fluid study, we demonstrate the utility of CloVR-ITS for fungal microbiota analysis and provide runtime and cost examples, including analysis of extremely large datasets on the cloud. We show that the largest fractions of reads from the stomach fluid samples were assigned to Dothideomycetes, Saccharomycetes, Agaricomycetes and Sordariomycetes but that all samples were dominated by sequences that could not be taxonomically classified. Representatives of the Candida genus were identified in all samples, most notably C. quercitrusa, while sequence reads assigned to the Aspergillus genus were only identified in a subset of samples. CloVR-ITS is made available as a pre-installed, automated, and portable software pipeline for cloud-friendly execution as part of the CloVR virtual machine package (http://clovr.org).
Conclusion: The CloVR-ITS pipeline provides fungal microbiota analysis that can be complementary to bacterial 16S rRNA and total metagenome sequence analysis, allowing for more comprehensive studies of environmental and host-associated microbial communities. PMID:24451270
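The OTU clustering step mentioned above is commonly implemented as greedy, identity-threshold clustering. A toy sketch of that idea (real pipelines use alignment-based identity over variable-length reads; the positional identity and 0.97 threshold below are simplifications for illustration):

```python
# Toy sketch of greedy identity-threshold OTU clustering, the kind of step
# amplicon pipelines such as CloVR-ITS perform. Real tools use alignment-based
# identity; positional identity on equal-length reads is a simplification.

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otus(seqs, threshold=0.97):
    """Assign each sequence to the first cluster seed within the threshold."""
    seeds, clusters = [], []
    for s in seqs:
        for i, seed in enumerate(seeds):
            if identity(s, seed) >= threshold:
                clusters[i].append(s)
                break
        else:                      # no seed close enough: start a new OTU
            seeds.append(s)
            clusters.append([s])
    return clusters

reads = ["ACGT" * 25,                  # seed of OTU 1
         "ACGT" * 24 + "ACGA",         # 99% identical -> joins OTU 1
         "TTTT" * 25]                  # unrelated     -> its own OTU
print([len(c) for c in greedy_otus(reads)])   # -> [2, 1]
```

Taxonomic assignment then labels each OTU's representative sequence against a reference database, which is where the class- to species-level calls in the abstract come from.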
NASA Astrophysics Data System (ADS)
Brumby, S. P.; Warren, M. S.; Keisler, R.; Chartrand, R.; Skillman, S.; Franco, E.; Kontgis, C.; Moody, D.; Kelton, T.; Mathis, M.
2016-12-01
Cloud computing, combined with recent advances in machine learning for computer vision, is enabling understanding of the world at a scale and at a level of space and time granularity never before feasible. Multi-decadal Earth remote sensing datasets at the petabyte scale (8×10^15 bits) are now available in the commercial cloud, and new satellite constellations will generate daily global coverage at a few meters per pixel. Public and commercial satellite observations now provide a wide range of sensor modalities, from traditional visible/infrared to dual-polarity synthetic aperture radar (SAR). This provides the opportunity to build a continuously updated map of the world supporting the academic community and decision-makers in government, finance and industry. We report on work demonstrating country-scale agricultural forecasting and global-scale land cover/land use mapping using a range of public and commercial satellite imagery. We describe processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work combining this imagery with time-series SAR collected by ESA Sentinel-1, and on using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. We apply remote sensing science and machine learning algorithms to detect and classify agricultural crops and then estimate crop yields and detect threats to food security (e.g., flooding, drought).
The software platform and analysis methodology also support monitoring water resources, forests and other general indicators of environmental health, and can detect growth and changes in cities that are displacing historical agricultural zones.
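The "multiresolution tiled format" step can be illustrated with the standard Web-Mercator (XYZ) tiling arithmetic; the abstract does not name the platform's actual tiling scheme, so this common convention is an assumption:

```python
# Standard Web-Mercator "slippy map" tile indexing: each zoom level z splits
# the world into 2^z x 2^z tiles, giving a multiresolution pyramid of the
# kind machine-learning pipelines read tile-by-tile.
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a lon/lat (degrees) to integer (x, y) tile indices at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 1))      # equator/prime meridian -> (1, 1)
print(lonlat_to_tile(10.0, 50.0, 2))    # central Europe at zoom 2 -> (2, 1)
```

Because tile addresses are pure arithmetic on coordinates, petapixel archives can be re-tiled embarrassingly parallel across commodity cloud workers, which is what makes the under-a-day reprocessing claim plausible.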
Bates, Maxwell; Berliner, Aaron J; Lachoff, Joe; Jaschke, Paul R; Groban, Eli S
2017-01-20
Wet Lab Accelerator (WLA) is a cloud-based tool that allows a scientist to conduct biology via robotic control without the need for any programming knowledge. A drag and drop interface provides a convenient and user-friendly method of generating biological protocols. Graphically developed protocols are turned into programmatic instruction lists required to conduct experiments at the cloud laboratory Transcriptic. Prior to the development of WLA, biologists were required to write in a programming language called "Autoprotocol" in order to work with Transcriptic. WLA relies on a new abstraction layer we call "Omniprotocol" to convert the graphical experimental description into lower level Autoprotocol language, which then directs robots at Transcriptic. While WLA has only been tested at Transcriptic, the conversion of graphically laid out experimental steps into Autoprotocol is generic, allowing extension of WLA into other cloud laboratories in the future. WLA hopes to democratize biology by bringing automation to general biologists.
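Autoprotocol documents, the lower-level target that WLA's graphical protocols compile to, are JSON with a "refs" section declaring containers and an "instructions" list of robot operations. A minimal illustrative document of that shape (field values here are invented for illustration, not a verbatim Transcriptic protocol):

```python
# Illustrative Autoprotocol-style document: "refs" declare containers,
# "instructions" list operations. Container type, storage condition, and the
# resource id below are placeholders, not real Transcriptic identifiers.
import json

protocol = {
    "refs": {
        "sample_plate": {"new": "96-flat", "store": {"where": "cold_4"}}
    },
    "instructions": [
        {"op": "provision",
         "resource_id": "rs_example_water",      # hypothetical resource id
         "to": [{"well": "sample_plate/A1", "volume": "50:microliter"}]},
        {"op": "incubate",
         "object": "sample_plate", "where": "warm_37",
         "duration": "30:minute", "shaking": False},
    ],
}

doc = json.dumps(protocol, indent=2)
print(json.loads(doc)["instructions"][0]["op"])   # -> provision
```

An abstraction layer like Omniprotocol, as described above, is essentially a compiler that emits such instruction lists from the drag-and-drop graph, which is why the approach can in principle retarget other cloud laboratories.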
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify images as cloud or non-cloud. But when the cloud is thin and small, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, making the cloud detection result more accurate. The automatic cloud detection program first uses the linear combination model to separate the cloud information from the surface information in transparent-cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a single cloud classifier. AdaBoost can select the most effective features from many candidate features, so the calculation time is greatly reduced. Finally, we compared the proposed method with a tree-structure-based cloud detection method and a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
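The feature-selection behavior attributed to AdaBoost above comes from boosting with decision stumps: each round picks the single most discriminative feature/threshold pair under the current sample weights. A compact sketch on toy data (the two mock features and all thresholds are invented; this is not the paper's feature set):

```python
# Minimal AdaBoost with decision stumps, illustrating how boosting both
# combines and effectively selects features. Toy data, not the paper's.
import numpy as np

def fit_adaboost(X, y, rounds=10):
    """y in {-1,+1}. Returns a list of (feature, threshold, polarity, alpha)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):                       # search every feature...
            for thr in np.unique(X[:, j]):       # ...and candidate threshold
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak learner
        stumps.append((j, thr, pol, alpha))
        w *= np.exp(-alpha * y * pred)           # re-weight hard samples
        w /= w.sum()
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                for j, t, p, a in stumps)
    return np.where(score >= 0, 1, -1)

# Feature 0 is informative ("cloud brightness"), feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.column_stack([np.r_[rng.uniform(0.6, 1.0, 20), rng.uniform(0.0, 0.4, 20)],
                     rng.uniform(0, 1, 40)])
y = np.r_[np.ones(20), -np.ones(20)].astype(int)
preds = predict(fit_adaboost(X, y, rounds=5), X)
print((preds == y).mean())   # -> 1.0
```

Speed follows from the same mechanism: at prediction time only the few selected features need to be computed, not the whole candidate pool.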
a Cloud Boundary Detection Scheme Combined with Aslic and Cnn Using ZY-3, GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Cloud detection in optical remote sensing images is one of the most important problems in remote sensing data processing. Aiming at the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to learn a multi-level feature generation model of cloud from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability that each superpixel belongs to the cloud region is predicted by the trained network model, generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected for cloud detection tests and compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection increases by more than 5%, and the method detects both thin and thick clouds and the whole cloud boundary well on different imaging platforms.
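The final step, turning per-pixel network scores into a per-superpixel cloud probability map, can be sketched as averaging pixel scores within each superpixel label region. Here both the superpixel labels (which would come from ASLIC) and the CNN scores are mocked with tiny arrays:

```python
# Sketch of superpixel-level probability aggregation: average the per-pixel
# cloud scores inside each superpixel, then broadcast the mean back to pixels.
# Labels and scores are mock data standing in for ASLIC and CNN outputs.
import numpy as np

def superpixel_probability(labels, pixel_scores):
    """Per-pixel map where each pixel carries its superpixel's mean score."""
    prob_map = np.zeros_like(pixel_scores)
    for sp in np.unique(labels):
        mask = labels == sp
        prob_map[mask] = pixel_scores[mask].mean()
    return prob_map

labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
scores = np.array([[0.9, 0.8, 0.1, 0.2],
                   [0.7, 1.0, 0.0, 0.1]])
prob = superpixel_probability(labels, scores)
cloud_mask = prob > 0.5
print(round(float(prob[0, 0]), 2), bool(cloud_mask[0, 0]), bool(cloud_mask[0, 3]))
```

Aggregating over superpixels is what lets the method trace whole cloud boundaries: the decision follows image-adaptive segment edges instead of a blocky per-pixel threshold.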
Validation of satellite-based CI detection of convective storms via backward trajectories
NASA Astrophysics Data System (ADS)
Dietzsch, Felix; Senf, Fabian; Deneke, Hartwig
2013-04-01
Within this study, the rapid development and evolution of several severe convective events is investigated based on geostationary satellite images and related to previous findings on suitable detection thresholds for convective initiation (CI). Nine severe events that occurred over Central Europe in summer 2012 have been selected and classified into the categories supercell, mesoscale convective system, frontal system and orographic convection. The cases are traced backward from the fully developed convective systems to their initial state using ECMWF data with 0.5 degree spatial resolution and 3 h temporal resolution. For every case, the storm life cycle was quantified through the storm's infrared (IR) brightness temperatures obtained from Meteosat Second Generation SEVIRI with 5 min temporal resolution and 4.5 km spatial resolution. In addition, cloud products including cloud optical thickness, cloud phase and effective droplet radius have been taken into account. A semi-automatic adjustment of the tracks within a search box was necessary to improve the tracking accuracy and thus the quality of the derived life cycles. The combination of IR brightness temperatures, IR temperature time trends and satellite-based cloud products revealed different stages of storm development, such as updraft intensification and glaciation, well in most cases, confirming previously developed CI criteria from other studies. The vertical temperature gradient between 850 and 500 hPa, the Total-Totals Index and the storm-relative helicity have been derived from ECMWF data and were used to characterize the storms' synoptic environment. The results suggest that the storm-relative helicity also influences the lifetime of convective storms over Central Europe, confirming previous studies. Tracking accuracy has proven to be a crucial issue in our study, and a fully automated approach is required to enlarge the number of cases for significant statistics.
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
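A user-defined classification function of the kind described, deciding per point on color, laser intensity, and normal orientation, can be sketched as vectorized rules over point attributes. The class names, thresholds, and toy points below are hypothetical, not the paper's rules:

```python
# Illustrative per-point classification rule combining color, intensity, and
# normal direction. Class names and thresholds are hypothetical examples of
# the user-defined functions the framework evaluates.
import numpy as np

def classify_points(rgb, intensity, normals):
    """Return an integer class per point: 0 unknown, 1 mortar, 2 brick, 3 floor."""
    labels = np.zeros(len(rgb), dtype=int)
    upward = normals[:, 2] > 0.9              # near-horizontal surface
    reddish = (rgb[:, 0] > 150) & (rgb[:, 0] > rgb[:, 2] + 30)
    bright = intensity > 0.6
    labels[upward] = 3                        # floors/ledges by geometry
    labels[~upward & reddish] = 2             # brick by color
    labels[~upward & ~reddish & bright] = 1   # mortar by return intensity
    return labels

rgb = np.array([[200, 80, 60], [120, 120, 120], [90, 90, 90]])
intensity = np.array([0.4, 0.8, 0.2])
normals = np.array([[0.0, 0.1, 0.99], [1.0, 0.0, 0.0], [0.7, 0.7, 0.1]])
print(classify_points(rgb, intensity, normals))   # -> [3 1 0]
```

Running such rules interactively, with the visualization as immediate feedback, is what lets users tune thresholds per structure rather than rely on a fixed classifier.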
Using All-Sky Imaging to Improve Telescope Scheduling (Abstract)
NASA Astrophysics Data System (ADS)
Cole, G. M.
2017-12-01
(Abstract only) Automated scheduling makes it possible for a small telescope to observe a large number of targets in a single night. But when used in areas which have less-than-perfect sky conditions such automation can lead to large numbers of observations of clouds and haze. This paper describes the development of a "sky-aware" telescope automation system that integrates the data flow from an SBIG AllSky340c camera with an enhanced dispatch scheduler to make optimum use of the available observing conditions for two highly instrumented backyard telescopes. Using the minute-by-minute time series image stream and a self-maintained reference database, the software maintains a file of sky brightness, transparency, stability, and forecasted visibility at several hundred grid positions. The scheduling software uses this information in real time to exclude targets obscured by clouds and select the best observing task, taking into account the requirements and limits of each instrument.
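The dispatch decision described above, excluding targets behind clouds and choosing the best remaining task from the live transparency grid, can be sketched in a few lines. The grid cells, priorities, and cutoff below are illustrative, not the system's actual scoring:

```python
# Sketch of a "sky-aware" dispatch step: given per-grid-point transparency
# estimates from the all-sky camera, drop targets behind clouds and pick the
# highest-value remaining one. Grid, targets, and weights are illustrative.

def pick_target(targets, transparency, min_clear=0.5):
    """targets: list of (name, priority, grid_cell). Returns best name or None."""
    candidates = [
        (priority * transparency.get(cell, 0.0), name)
        for name, priority, cell in targets
        if transparency.get(cell, 0.0) >= min_clear   # exclude cloudy cells
    ]
    return max(candidates)[1] if candidates else None

sky = {(3, 4): 0.95, (7, 2): 0.30, (5, 5): 0.80}      # cell -> transparency
targets = [("M51", 0.9, (7, 2)),       # high priority but behind a cloud
           ("M13", 0.8, (3, 4)),
           ("NGC7000", 0.7, (5, 5))]
print(pick_target(targets, sky))       # -> M13
```

Re-evaluating this choice on every minute-by-minute camera update is what turns a static target list into an opportunistic schedule.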
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. 
The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
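The multiple-kernel-learning integration described above amounts to building one kernel per modality (CNN features, 3D point cloud features) and combining them as a weighted sum, which is again a valid kernel. Real MKL learns the weights; the fixed weights and mock feature vectors below are assumptions for illustration:

```python
# Sketch of the feature-integration idea behind multiple-kernel learning:
# one RBF kernel per modality, combined as a weighted sum. Real MKL learns
# the weights; here they are fixed and the features are mock data.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix of the RBF kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(cnn_a, cnn_b, geo_a, geo_b, w_cnn=0.7, w_geo=0.3):
    """Convex combination of per-modality kernels (still a valid kernel)."""
    return w_cnn * rbf_kernel(cnn_a, cnn_b) + w_geo * rbf_kernel(geo_a, geo_b)

cnn = np.array([[0.0, 0.0], [1.0, 1.0]])    # mock CNN feature vectors
geo = np.array([[0.2], [0.9]])              # mock 3D point cloud features
K = combined_kernel(cnn, cnn, geo, geo)
print(K.shape, round(float(K[0, 0]), 3))    # -> (2, 2) 1.0
```

The combined Gram matrix then feeds a standard kernel classifier such as an SVM; the per-modality weights control how much the geometric evidence can override the appearance evidence, which is exactly where the transferability gain reported above comes from.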
Winter sky brightness and cloud cover at Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Moore, Anna M.; Yang, Yi; Fu, Jianning; Ashley, Michael C. B.; Cui, Xiangqun; Feng, Long Long; Gong, Xuefei; Hu, Zhongwen; Lawrence, Jon S.; Luong-Van, Daniel M.; Riddle, Reed; Shang, Zhaohui; Sims, Geoff; Storey, John W. V.; Tothill, Nicholas F. H.; Travouillon, Tony; Wang, Lifan; Yang, Huigen; Yang, Ji; Zhou, Xu; Zhu, Zhenxi
2013-01-01
At the summit of the Antarctic plateau, Dome A offers an intriguing location for future large-scale optical astronomical observatories. The Gattini Dome A project was created to measure the optical sky brightness and large-area cloud cover of the winter-time sky above this high-altitude Antarctic site. The wide-field camera and multi-filter system was installed on the PLATO instrument module as part of the Chinese-led traverse to Dome A in January 2008. This automated wide-field camera consists of an Apogee U4000 interline CCD coupled to a Nikon fisheye lens, enclosed in a heated container with a glass window. The system contains a filter mechanism providing a suite of standard astronomical photometric filters (Bessell B, V, R) and a long-pass red filter for the detection and monitoring of airglow emission. The system operated continuously throughout the 2009 and 2011 winter seasons and partway through the 2010 season, recording long-exposure images sequentially for each filter. We have in hand one complete winter-time dataset (2009), returned via a manned traverse. We present here the first measurements of sky brightness in the photometric V band, the cloud cover statistics measured so far, and an estimate of the extinction.
Tests of Spectral Cloud Classification Using DMSP Fine Mode Satellite Data.
1980-06-02
processing techniques of potential value. Fourier spectral analysis was identified as the most promising technique to upgrade automated processing of... these measurements on the Earth's surface is 0.3 n mi. 3. Pickett, R.M., and Blackman, E.S. (1976) Automated Processing of Satellite Imagery Data at Air... and Pickett, R.M. (1977) Automated Processing of Satellite Imagery Data at the Air Force Global Weather Central: Demonstrations of Spectral Analysis
Neutral hydrogen self-absorption in the Milky Way Galaxy
NASA Astrophysics Data System (ADS)
Kavars, Dain William
2006-06-01
To develop a better understanding of the cold neutral medium phase of the interstellar medium, we present a detailed analysis of neutral hydrogen self-absorption (HISA) clouds in the Milky Way Galaxy. These HISA clouds are in the Southern Galactic Plane Survey (SGPS), spanning the region l = 253°-358° and |b| ≤ 1.3°, and in the VLA Galactic Plane Survey (VGPS), spanning the region l = 18°-67° and |b| ≤ 1.3°-2.3°. The SGPS and VGPS have an angular resolution of ~1 arcminute and a velocity channel spacing of 0.82 km s^-1. With the recent completion of these surveys, we can study HISA features across the Galaxy at a much better resolution and sensitivity than any previous work. To analyze HISA in detail, catalogs of clouds of all sizes, including those undetectable by eye alone, are required. We present an automated search routine to detect all HISA clouds in the SGPS. We compare HISA to CO data and find some HISA clouds associated with CO, but others have no associated CO. This suggests that HISA clouds are in a transition between molecular and atomic gas, bridging the gap between dense molecular clouds and warmer, diffuse atomic clouds. HISA thus plays an important role in the overall evolution of the Galaxy. To study this transition further, we present observations of the OH molecule toward a select sample of HISA clouds in the VGPS, using the Green Bank Telescope (GBT). We present an analysis of the molecular properties of this sample, including a derivation of an OH-to-H2 conversion factor and H2-to-H I abundance ratios. We discuss the complex relationship between H I, OH, 12CO, and 13CO emission. Finally, we present a statistical analysis comparing HISA with infrared data from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) project. The GLIMPSE data reveal a large number of compact, dark infrared clouds believed to be in the early stages of star formation.
If GLIMPSE clouds are associated with HISA, they provide valuable information on the evolution of HISA clouds.
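The automated HISA search mentioned above rests on one observable signature: the brightness temperature dipping below the surrounding H I emission in a few velocity channels. A toy sketch of that detection idea (the boxcar background, 10 K depth cutoff, and synthetic spectrum are invented, not the survey's actual routine):

```python
# Toy sketch of the idea behind an automated HISA search: flag velocity
# channels where the brightness temperature dips well below a smoothed
# background estimate of the surrounding emission. Thresholds are invented.

def smooth(spectrum, half=3):
    """Running mean with a (2*half+1)-channel boxcar."""
    out = []
    for i in range(len(spectrum)):
        win = spectrum[max(0, i - half): i + half + 1]
        out.append(sum(win) / len(win))
    return out

def find_absorption(spectrum, min_depth=10.0):
    """Return channel indices where T_b falls min_depth K below background."""
    background = smooth(spectrum)
    return [i for i, (t, b) in enumerate(zip(spectrum, background))
            if b - t > min_depth]

# Flat 80 K emission with a 30 K-deep absorption feature at channels 10-11:
tb = [80.0] * 20
tb[10] = tb[11] = 50.0
print(find_absorption(tb))   # -> [10, 11]
```

A survey-scale routine additionally requires the dips to be spatially coherent across neighboring sightlines, which is what separates genuine cold clouds from noise in a single spectrum.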
Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang
2009-01-01
The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China's first operational geostationary meteorological satellite, FengYun-2C (FY-2C). First, the capabilities of six widely used Artificial Neural Network (ANN) methods are analyzed, together with two other methods: Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from three FY-2C channels (IR1, 10.3-11.3 μm; IR2, 11.5-12.5 μm; and WV, 6.3-7.6 μm). The results show that: (1) the ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples, and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and the Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud patch-level classification, by more accurately identifying cloud types such as cumulonimbus and cirrus as well as clouds at high latitudes. The findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products. PMID:22346714
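A Self-Organizing Map of the kind selected here learns a grid of prototype vectors: each training pixel pulls its best-matching node, and the node's neighbors, toward itself, with the neighborhood and learning rate shrinking over time. A compact sketch on synthetic two-channel "pixels" (toy 1-D map, made-up data and schedules, not the operational FY-2C implementation):

```python
# Compact Self-Organizing Map (SOM) training sketch on synthetic two-channel
# "pixels". Toy 1-D map and schedules; not the operational FY-2C classifier.
import numpy as np

def train_som(data, n_nodes=4, epochs=200, lr=0.5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), (n_nodes, data.shape[1]))
    grid = np.arange(n_nodes)
    for epoch in range(epochs):
        frac = epoch / epochs
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
            # Neighborhood pull on the 1-D grid, shrinking over time.
            h = np.exp(-((grid - bmu) ** 2)
                       / (2 * (sigma * (1 - frac) + 0.01) ** 2))
            weights += (lr * (1 - frac)) * h[:, None] * (x - weights)
    return weights

def map_node(weights, x):
    """Index of the node whose prototype is closest to sample x."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Two well-separated synthetic "cloud type" clusters in (IR1, WV) space:
rng = np.random.default_rng(1)
cold = rng.normal([0.2, 0.2], 0.02, (30, 2))
warm = rng.normal([0.8, 0.8], 0.02, (30, 2))
som = train_som(np.vstack([cold, warm]))
print(map_node(som, [0.2, 0.2]) != map_node(som, [0.8, 0.8]))
```

After training, each node is labeled with a cloud type from annotated samples, and classifying a pixel is just a nearest-prototype lookup, which is what makes SOM attractive for operational throughput.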
NASA Astrophysics Data System (ADS)
Setiyono, T. D.; Holecz, F.; Khan, N. I.; Barbieri, M.; Quicho, E.; Collivignarelli, F.; Maunahan, A.; Gatti, L.; Romuga, G. C.
2017-01-01
Reliable and regular rice information is an essential part of many countries' national accounting processes, but existing systems may not be sufficient to meet the information demand in the context of food security and policy. Synthetic Aperture Radar (SAR) imagery is highly suitable for detecting lowland paddy rice, especially in tropical regions where pervasive cloud cover in the rainy seasons limits the use of optical imagery. This study uses multi-temporal X-band and C-band SAR imagery, automated image processing, rule-based classification and field observations to classify rice in multiple locations across Tropical Asia and assimilates the information into the ORYZA crop growth simulation model (CGSM) to generate high-resolution yield maps. The resulting cultivated rice area maps had classification accuracies above 85%, and yield estimates were within 81-93% agreement with district-level reported yields. The study sites capture much of the diversity in water management, crop establishment and rice maturity durations, and the study demonstrates the feasibility of rice detection, yield monitoring, and damage assessment in the case of climatic disasters at national and supra-national scales using multi-temporal SAR imagery combined with a CGSM and automated methods.
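The rule-based classification exploits the agronomic signature of lowland rice in a multi-temporal backscatter series: a flooding-driven minimum around transplanting followed by a steady rise as the canopy grows. A sketch of such a rule (the dB thresholds and toy series are illustrative, not the operational rule set):

```python
# Sketch of a rule-based rice detector over a multi-temporal SAR backscatter
# series (dB): flooded fields scatter little at transplanting, then backscatter
# rises with canopy growth. Thresholds below are illustrative only.

def looks_like_rice(series_db, flood_max=-15.0, rise_min=6.0):
    """True if the series dips below flood_max and then rises by rise_min dB."""
    t_min = series_db.index(min(series_db))
    flooded = series_db[t_min] < flood_max          # transplanting flood dip
    rises = (max(series_db[t_min:]) - series_db[t_min]) >= rise_min
    return flooded and rises

rice = [-12, -18, -20, -16, -11, -8, -7]     # flood dip, then canopy growth
forest = [-8, -8.5, -8, -7.5, -8, -8, -8]    # temporally stable backscatter
print(looks_like_rice(rice), looks_like_rice(forest))   # -> True False
```

Because the rule operates on the temporal shape rather than a single-date value, it also yields the transplanting date, which is the key input the crop growth model needs to simulate yield.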
Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P
2016-03-01
To implement automated CT dose data monitoring using the DICOM-Structured Report (DICOM-SR) in order to monitor dose-related CT data with regard to national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data were automatically analyzed in accordance with body region, patient age and the corresponding DRLs for the volumetric computed tomography dose index (CTDIvol) and dose length product (DLP). Data from 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% of the national DRLs for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098), and 37.8% and 44.0% for chest CT (n=10,387), respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols for the same body region, radiation exposure varied by up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking with regard to national DRLs. Overall, the local dose exposure from CT reached approximately 50% of these DRLs, indicating that updated as well as protocol-specific DRLs are desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments.
• The newly developed software based on the DICOM-Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking in regard to national DRLs • The local radiation exposure from CT reached approximately 50 % of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.
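The benchmarking step itself is a simple ratio calculation. The sketch below shows the kind of per-examination comparison the monitoring software performs in aggregate; the DRL numbers are made-up placeholders, not the actual national values.

```python
# Hypothetical DRL table (placeholder values, not real national DRLs).
DRLS = {"abdomen": {"ctdi_vol": 20.0, "dlp": 900.0},
        "head":    {"ctdi_vol": 60.0, "dlp": 950.0}}

def benchmark(region, ctdi_vol, dlp):
    """Return each dose metric as a fraction of its DRL, plus an
    'exceeded' flag for examinations above either reference level."""
    ref = DRLS[region]
    frac_ctdi = ctdi_vol / ref["ctdi_vol"]
    frac_dlp = dlp / ref["dlp"]
    return {"ctdi_frac": frac_ctdi, "dlp_frac": frac_dlp,
            "exceeded": frac_ctdi > 1.0 or frac_dlp > 1.0}
```

Averaging `ctdi_frac` and `dlp_frac` over all examinations yields exactly the "percentage of the national DRL" figures reported in the abstract.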
Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-01-01
Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313
Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-06-01
Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.
2012-07-01
The paper presents a comprehensive method for automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points, which give information about building location and thus greatly reduce task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation, and contour refinement. The algorithm is tested against a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison with reference cadastral data.
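The delineation step can be reduced to its simplest form for illustration: select lidar returns near a geocoded address point that sit well above ground level, then take their 2D convex hull as a first coarse outline. Real building outlines are usually concave, so this is only an initial contour; the search radius and height threshold are assumptions of this sketch, not the paper's parameters.

```python
def building_outline(points, address_xy, radius=30.0, min_height=2.5):
    """points: (x, y, z_above_ground) tuples. Returns hull vertices CCW."""
    ax, ay = address_xy
    roof = [(x, y) for x, y, z in points
            if z >= min_height and (x - ax) ** 2 + (y - ay) ** 2 <= radius ** 2]
    return _convex_hull(roof)

def _convex_hull(pts):
    """Andrew's monotone chain convex hull (2D)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

The refinement step described in the paper would then regularize this contour (e.g., snapping edges to dominant building orientations).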
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming to boost the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has clear advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow-covered areas and other areas covered by bright non-cloud objects. In addition, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.
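The fusion idea can be illustrated without any deep-learning framework: derive versions of the image at several scales, bring them back to full resolution, and stack them so a per-pixel classifier sees both fine detail and coarse context. This is a plain-numpy caricature of the principle only; MSCN itself learns these features with convolutions rather than fixed average pooling.

```python
import numpy as np

def avg_pool(img, k):
    """k x k average pooling (image dims assumed divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(img, k):
    """Nearest-neighbour upsampling by factor k."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def multiscale_stack(img, scales=(1, 2, 4)):
    """Return a (len(scales), H, W) stack of per-pixel multi-scale features."""
    return np.stack([upsample(avg_pool(img, s), s) if s > 1 else img
                     for s in scales])
```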
Skyalert: a Platform for Event Understanding and Dissemination
NASA Astrophysics Data System (ADS)
Williams, Roy; Drake, A. J.; Djorgovski, S. G.; Donalek, C.; Graham, M. J.; Mahabal, A.
2010-01-01
Skyalert.org is an event repository, web interface, and event-oriented workflow architecture that can be used in many different ways for handling astronomical events encoded as VOEvent. It can be used as a remote application (events in the cloud) or installed locally. Some applications are: dissemination of events with sophisticated discrimination (triggers), using email, instant message, RSS, Twitter, etc.; an authoring interface for survey-generated events, follow-up observations, and other event types (event streams can be put into the skyalert.org repository, either public or private, or into a local installation of Skyalert); event-driven software components to fetch archival data, for data-mining and classification of events; a human interface to events through wikis, comments, and circulars; use of the "notices and circulars" model, where machines make the notices in real time and people write the interpretation later; building trusted, automated decisions for automated follow-up observation, and the information infrastructure for automated follow-up with DC3 and HTN telescope schedulers; citizen science projects such as artifact detection and classification; query capability for past events, including correlations between different streams and correlations with existing source catalogs; and event metadata structures and connection to the global registry of the virtual observatory.
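A discrimination trigger of the kind described amounts to matching event parameters against subscriber-defined ranges. The sketch below is an assumption-laden illustration: the parameter names and thresholds are invented for the example and are not Skyalert's actual schema.

```python
def matches_trigger(event, rules):
    """event: dict of parameter name -> value.
    rules: dict of parameter name -> (min, max); all ranges must hold."""
    for name, (lo, hi) in rules.items():
        if name not in event or not (lo <= event[name] <= hi):
            return False
    return True

def disseminate(events, rules):
    """Return only the events a subscriber's trigger selects."""
    return [e for e in events if matches_trigger(e, rules)]
```

In the real system the selected events would then be pushed out over email, instant message, RSS, and similar channels.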
Finding Relevant Data in a Sea of Languages
2016-04-26
full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence... and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken... the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language
Processing of Cloud Databases for the Development of an Automated Global Cloud Climatology
1991-06-30
cloud amounts in each DOE grid box. The actual population values were coded into one- and two-digit codes primarily for printing purposes. For example... [station listing omitted] ...According to Lund, Grantham, and Davis (1980), the quality of the whole sky photographs used in producing the WSP digital data ensemble was
NASA Technical Reports Server (NTRS)
Andrefeouet, Serge; Robinson, Julie
2000-01-01
Coral reefs worldwide are suffering from severe and rapid degradation (Bryant et al., 1998; Hoegh-Guldberg, 1999). Quick, consistent, large-scale assessment is required to monitor their status (e.g., USDOC/NOAA NESDIS et al., 1999). On-going systematic collection of high resolution digital satellite data will exhaustively complement the relatively small number of SPOT, Landsat 4-5, and IRS scenes acquired for coral reefs over the last 20 years. The workhorse for current image acquisition is the Landsat 7 ETM+ Long Term Acquisition Plan (Gasch et al. 2000). Coral reefs are encountered in tropical areas, and cloud contamination in satellite images is frequently a problem (Benner and Curry 1998), despite new automated techniques of cloud cover avoidance (Gasch and Campana 2000). Fusion of multidate acquisitions is a classical solution to the cloud problem. Though elegant, this solution is costly since multiple images must be purchased for one location; the cost may be prohibitive for institutions in developing countries. There are other difficulties associated with fusing multidate images as well. For example, water quality or surface state can change significantly over time in coral reef areas, making the bathymetric processing of a mosaiced image strenuous. Therefore, another strategy must be selected to detect clouds and improve coral reef mapping. Other supplemental data could be helpful and cost-effective for distinguishing clouds and generating the best possible reef maps in the shortest amount of time. Photographs taken from the 1960s to the present from the Space Shuttle and other human-occupied spacecraft are one under-used source of alternative multitemporal data (Lulla et al. 1996). Nearly 400,000 photographs have been acquired during this period, an estimated 28,000 of which are of potential value for reef remote sensing (Robinson et al. 2000a).
The photographic images can be digitized into three bands (red, green, and blue) and processed for various applications (e.g., Benner and Curry 1998; Nedeltchev 1999; Glasser and Lulla 2000; Robinson et al. 2000c; Webb et al., in press).
NASA Astrophysics Data System (ADS)
Karlsson, Karl-Göran; Håkansson, Nina
2018-02-01
The sensitivity of the cloud screening method used in the CLARA-A2 cloud climate data record (CDR; the CM SAF cloud, albedo and surface radiation data set from AVHRR data) in detecting thin clouds has been evaluated using cloud information from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite. The sensitivity, including its global variation, has been studied based on collocations of Advanced Very High Resolution Radiometer (AVHRR) and CALIOP measurements over a 10-year period (2006-2015). The cloud detection sensitivity has been defined as the minimum cloud optical thickness for which 50 % of clouds could be detected, with the global average sensitivity estimated to be 0.225. After using this value to reduce the CALIOP cloud mask (i.e. clouds with optical thickness below this threshold were interpreted as cloud-free cases), cloudiness results were found to be basically unbiased over most of the globe except over the polar regions where a considerable underestimation of cloudiness could be seen during the polar winter. The overall probability of detecting clouds in the polar winter could be as low as 50 % over the highest and coldest parts of Greenland and Antarctica, showing that a large fraction of optically thick clouds also remains undetected here. The study included an in-depth analysis of the probability of detecting a cloud as a function of the vertically integrated cloud optical thickness as well as of the cloud's geographical position. Best results were achieved over oceanic surfaces at mid- to high latitudes where at least 50 % of all clouds with an optical thickness down to a value of 0.075 were detected. Corresponding cloud detection sensitivities over land surfaces outside of the polar regions were generally larger than 0.2 with maximum values of approximately 0.5 over the Sahara and the Arabian Peninsula. 
For polar land surfaces the values were close to 1 or higher with maximum values of 4.5 for the parts with the highest altitudes over Greenland and Antarctica. It is suggested to quantify the detection performance of other CDRs in terms of a sensitivity threshold of cloud optical thickness, which can be estimated using active lidar observations. Validation results are proposed to be used in Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulation Package (COSP) simulators for cloud detection characterization of various cloud CDRs from passive imagery.
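The sensitivity metric defined above can be estimated directly from collocated detections. The sketch below is an illustration under assumed input formats: (COT, detected) pairs from matched passive-imager and lidar observations, binned by optical thickness, with the sensitivity taken as the lower edge of the first bin whose hit rate reaches 50 %.

```python
def detection_sensitivity(pairs, bin_edges):
    """pairs: (cot, detected) tuples from collocated imager/lidar clouds,
    where detected is 1 if the imager mask flagged the cloud, else 0.
    Returns the lower edge of the first COT bin with hit rate >= 0.5."""
    for lo, hi in zip(bin_edges, bin_edges[1:]):
        in_bin = [d for cot, d in pairs if lo <= cot < hi]
        if in_bin and sum(in_bin) / len(in_bin) >= 0.5:
            return lo
    return None
```

The paper's more careful estimate interpolates the 50 % crossing point rather than returning a bin edge, but the principle is the same.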
NASA Astrophysics Data System (ADS)
Prins, Elaine M.; Feltz, Joleen M.; Menzel, W. Paul; Ward, Darold E.
1998-12-01
The launch of the eighth Geostationary Operational Environmental Satellite (GOES-8) in 1994 introduced an improved capability for diurnal fire and smoke monitoring throughout the western hemisphere. In South America the GOES-8 automated biomass burning algorithm (ABBA) and the automated smoke/aerosol detection algorithm (ASADA) are being used to monitor biomass burning. This paper outlines GOES-8 ABBA and ASADA development activities and summarizes results for the Smoke, Clouds, and Radiation in Brazil (SCAR-B) experiment and the 1995 fire season. GOES-8 ABBA results document the diurnal, spatial, and seasonal variability in fire activity throughout South America. A validation exercise compares GOES-8 ABBA results with ground truth measurements for two SCAR-B prescribed burns. GOES-8 ASADA aerosol coverage and derived albedo results provide an overview of the extent of daily and seasonal smoke coverage and relative intensities. Day-to-day variability in smoke extent closely tracks fluctuations in fire activity.
NASA Astrophysics Data System (ADS)
Kim, Hye-Won; Yeom, Jong-Min; Shin, Daegeun; Choi, Sungwon; Han, Kyung-Soo; Roujean, Jean-Louis
2017-08-01
In this study, a new assessment of thin cloud detection with the application of bidirectional reflectance distribution function (BRDF) model-based background surface reflectance was undertaken by interpreting surface spectra characterized using the Geostationary Ocean Color Imager (GOCI) over a land surface area. Unlike cloud detection over the ocean, the detection of cloud over land surfaces is difficult due to the complicated surface scattering characteristics, which vary among land surface types. Furthermore, in the case of thin clouds, in which the surface and cloud radiation are mixed, it is difficult to detect the clouds in both land and atmospheric fields. Therefore, to interpret background surface reflectance, especially underneath cloud, the semiempirical BRDF model was used to simulate surface reflectance by reflecting solar angle-dependent geostationary sensor geometry. For quantitative validation, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data were used to make a comparison with the proposed cloud masking result. As a result, the new cloud masking scheme resulted in a high probability of detection (POD = 0.82) compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) (POD = 0.808) for all cloud cases. In particular, the agreement between the CALIPSO cloud product and new GOCI cloud mask was over 94% when detecting thin cloud (e.g., altostratus and cirrus) from January 2014 to June 2015. This result is relatively high in comparison with the result from the MODIS Collection 6 cloud mask product (MYD35).
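Reduced to its core, the test described above compares each observed reflectance with the BRDF-modelled clear-sky background for the same sun/sensor geometry and flags pixels that are too bright. The margin value below is an assumption; the real scheme uses several channels and angular kernel weights.

```python
def thin_cloud_flag(observed_refl, modelled_clear_refl, margin=0.05):
    """True where the observed top-of-atmosphere reflectance exceeds the
    BRDF-modelled clear-sky background reflectance by more than a
    noise margin (a minimal thin-cloud test, per pixel)."""
    return [(o - m) > margin for o, m in zip(observed_refl, modelled_clear_refl)]
```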
Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane
2012-01-01
Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code on a local prototype platform, and then transitioning this code, with associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. 
Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
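The two indices named above are simple band ratios; a minimal sketch, assuming reflectance-scaled input bands and an epsilon guard against division by zero (both assumptions of this example, not the project's pipeline code):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + eps)

def ndmi(nir, swir, eps=1e-9):
    """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)
```

Both map each pixel into [-1, 1]; vegetated pixels score high on NDVI, moisture-stressed vegetation scores low on NDMI.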
NASA Astrophysics Data System (ADS)
Choi, D. S.; Gierasch, P.; Banfield, D.; Showman, A.
2005-12-01
During the 28th orbit of Galileo in May 2000, the spacecraft imaged Jupiter's Great Red Spot (GRS) with a remarkable level of detail. Three observations of the vortex were made over a span of about two hours. We have produced mosaics of the GRS at each observation, and have measured the winds of the GRS using an automated algorithm that does not require manual cloud tracking. The advantage of using this method is the production of a high-density, regular grid of wind velocity vectors as compared to a limited number of scattered wind vectors that result from manual cloud tracking [1]. Using the wind velocity measurements, we are able to compute particle trajectories around the GRS as well as relative and absolute vorticities. We have also mapped turbulent eddies inside the chaotic central region of the GRS, similar to those tracked by Sada et al [2]. We calculate how absolute vorticity changes as a function of latitude along a trajectory around the GRS and compare these measurements to similar ones performed by Dowling and Ingersoll using Voyager imaging data [3]. Future projects with the automated cloud feature trackers will analyze Voyager images of the GRS as well as other high-resolution images of Jovian vortices. We also hope to apply this method to other relevant datasets on planetary atmospheres. References: [1] Legarreta, J. and Sanchez-Lavega, A. (2005) Icarus 174: 178--191. [2] Sada, P. et al. (1996) Icarus 119: 311--335. [3] Dowling, T. and Ingersoll, A. (1988) J. Atm. Sci. 45: 1380--1396.
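The kernel of an automated cloud tracker of this kind is a correlation search: find the displacement of a small image patch between two frames that maximizes normalized cross-correlation, then convert the pixel offset to a velocity. The window sizes below are assumptions; the study's algorithm adds regular gridding and quality control on top of this step.

```python
import numpy as np

def track_patch(img0, img1, y, x, tsize=8, search=5):
    """Return the (dy, dx) displacement of the tsize x tsize patch at
    (y, x) in img0 that best matches img1, by normalized cross-correlation
    over a (2*search+1)^2 window."""
    t = img0[y:y+tsize, x:x+tsize].astype(float)
    t = t - t.mean()
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            c = img1[y+dy:y+dy+tsize, x+dx:x+dx+tsize].astype(float)
            c = c - c.mean()
            denom = np.sqrt((t ** 2).sum() * (c ** 2).sum())
            if denom == 0:
                continue
            score = (t * c).sum() / denom
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off
```

Dividing the offset by the time between frames and the image scale gives the wind vector at that grid point; repeating over a regular grid yields the dense vector field described above.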
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system which can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite winds tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant attended to the continued development of water vapor tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact in NWP on many meteorological scales.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (like many other frameworks) requires users to configure cloud infrastructures via programs and APIs, and such a configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for workloads consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
NASA Astrophysics Data System (ADS)
Trepte, Qing; Minnis, Patrick; Sun-Mack, Sunny; Trepte, Charles
Clouds and aerosol play important roles in the global climate system. Accurately detecting their presence, altitude, and properties using satellite radiance measurements is a crucial first step in determining their influence on surface and top-of-atmosphere radiative fluxes. This paper presents a comparison analysis of a new version of the Clouds and Earth's Radiant Energy System (CERES) Edition 3 cloud detection algorithms using Aqua MODIS data with the recently released Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Version 2 Vertical Feature Mask (VFM). Improvements in CERES Edition 3 cloud mask include dust detection, thin cirrus tests, enhanced low cloud detection at night, and a smoother transition from mid-latitude to polar regions. For the CALIPSO Version 2 data set, changes to the lidar calibration can result in significant improvements to its identification of optically thick aerosol layers. The Aqua and CALIPSO satellites, part of the A-train satellite constellation, provide a unique opportunity for validating passive sensor cloud and aerosol detection using an active sensor. In this paper, individual comparison cases will be discussed for different types of clouds and aerosols over various surfaces, for daytime and nighttime conditions, and for regions ranging from the tropics to the poles. Examples will include an assessment of the CERES detection algorithm for optically thin cirrus, marine stratus, and polar night clouds as well as its ability to characterize Saharan dust plumes off the African coast. With the CALIPSO lidar's unique ability to probe the vertical structure of clouds and aerosol layers, it provides an excellent validation data set for cloud detection algorithms, especially for polar nighttime clouds.
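Comparisons like the one described are usually summarized with simple hit/false-alarm statistics between the passive mask and collocated lidar truth. The sketch below assumes boolean per-footprint masks, which is an assumption of this example rather than the CERES/CALIPSO file formats.

```python
def mask_agreement(passive, lidar):
    """passive, lidar: sequences of booleans (True = cloud), collocated.
    Returns hit rate, false-alarm rate, and overall agreement."""
    hits = sum(p and l for p, l in zip(passive, lidar))
    clouds = sum(lidar)
    false_alarms = sum(p and not l for p, l in zip(passive, lidar))
    clear = len(lidar) - clouds
    agree = sum(p == l for p, l in zip(passive, lidar))
    return {"hit_rate": hits / clouds if clouds else None,
            "false_alarm_rate": false_alarms / clear if clear else None,
            "agreement": agree / len(lidar)}
```

Stratifying these scores by surface type, day/night, and cloud regime reproduces the kind of case-by-case assessment the paper describes.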
Design of smart neonatal health monitoring system using SMCC
Mukherjee, Anwesha; Bhakta, Ishita
2016-01-01
Automated health monitoring and alert system development is a demanding research area today. Most of the currently available monitoring and controlling medical devices are wired, which limits the flexibility of the working environment. A wireless sensor network (WSN) is a better alternative in such an environment. The neonatal intensive care unit is used to take care of sick and premature neonates. Hypothermia is an independent risk factor for neonatal mortality and morbidity; to prevent it, an automated monitoring system is required. In this Letter, an automated neonatal health monitoring system is designed using sensor mobile cloud computing (SMCC). SMCC is based on WSN and MCC. In the authors' system, a temperature sensor, an acceleration sensor and a heart rate measurement sensor are used to monitor the body temperature, acceleration due to body movement and heart rate of neonates. The sensor data are stored in the cloud. The health person continuously monitors and accesses these data through a mobile device using an Android application for neonatal monitoring. When an abnormal situation arises, an alert is generated on the mobile device of the health person. By alerting the health professional using such an automated system, early care is provided to the affected babies and the probability of recovery is increased. PMID:28261491
Design of smart neonatal health monitoring system using SMCC.
De, Debashis; Mukherjee, Anwesha; Sau, Arkaprabha; Bhakta, Ishita
2017-02-01
Automated health monitoring and alert system development is a demanding research area today. Most of the currently available monitoring and controlling medical devices are wired, which limits the flexibility of the working environment. A wireless sensor network (WSN) is a better alternative in such an environment. The neonatal intensive care unit is used to take care of sick and premature neonates. Hypothermia is an independent risk factor for neonatal mortality and morbidity; to prevent it, an automated monitoring system is required. In this Letter, an automated neonatal health monitoring system is designed using sensor mobile cloud computing (SMCC). SMCC is based on WSN and MCC. In the authors' system, a temperature sensor, an acceleration sensor and a heart rate measurement sensor are used to monitor the body temperature, acceleration due to body movement and heart rate of neonates. The sensor data are stored in the cloud. The health person continuously monitors and accesses these data through a mobile device using an Android application for neonatal monitoring. When an abnormal situation arises, an alert is generated on the mobile device of the health person. By alerting the health professional using such an automated system, early care is provided to the affected babies and the probability of recovery is increased.
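The alert rule at the centre of such a system is a range check on each cloud-stored sensor reading. A minimal sketch, with textbook-style placeholder ranges rather than the authors' clinical thresholds:

```python
# Hypothetical normal ranges (placeholders, not the authors' values).
NORMAL_RANGES = {"temperature_c": (36.5, 37.5),   # hypothermia below 36.5
                 "heart_rate_bpm": (100, 180)}

def check_reading(sensor, value):
    """Return an alert string if the value is abnormal, else None."""
    lo, hi = NORMAL_RANGES[sensor]
    if value < lo:
        return f"ALERT: {sensor} low ({value})"
    if value > hi:
        return f"ALERT: {sensor} high ({value})"
    return None
```

In the described system, a non-None result would be pushed to the health worker's Android device.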
NASA Astrophysics Data System (ADS)
Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.
2012-04-01
PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry or fruit farming, by reducing the risk of economic decisions taken in view of weather conditions. Within the framework of this study, a system for predicting sudden convective phenomena (storms or tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed so far. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions in consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links the cells of the image at time t to the cells of the following image at time t+dt that correspond to the same cloud system (the Morel-Senesi algorithm). Automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of near-real-time prediction system development.
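The two steps of the tracker described above can be sketched directly: (1) define convective cells by a brightness-temperature threshold and connected-component labelling, and (2) link each cell at time t to the cell at t+dt with which it overlaps most. The threshold value is an assumption of this example, and the real Morel-Senesi scheme includes further consistency checks.

```python
import numpy as np
from collections import deque

def label_cells(bt, threshold=230.0):
    """Label 4-connected regions of pixels colder than the threshold."""
    mask = bt < threshold
    labels = np.zeros(bt.shape, dtype=int)
    current = 0
    for i in range(bt.shape[0]):
        for j in range(bt.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                q = deque([(i, j)])
                labels[i, j] = current
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < bt.shape[0] and 0 <= nx < bt.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def link_cells(labels0, labels1):
    """Map each cell id at time t to the t+dt cell it overlaps most."""
    links = {}
    for cid in range(1, labels0.max() + 1):
        overlap = labels1[labels0 == cid]
        overlap = overlap[overlap > 0]
        links[cid] = int(np.bincount(overlap).argmax()) if overlap.size else None
    return links
```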
NASA Astrophysics Data System (ADS)
Cayula, Jean-François P.; May, Douglas A.; McKenzie, Bruce D.
2014-05-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Mask (VCM) Intermediate Product (IP) has been developed for use with Suomi National Polar-orbiting Partnership (NPP) VIIRS Environmental Data Record (EDR) products. In particular, the VIIRS Sea Surface Temperature (SST) EDR relies on VCM to identify cloud contaminated observations. Unfortunately, VCM does not appear to perform as well as cloud detection algorithms for SST. This may be due to similar but different goals of the two algorithms. VCM is concerned with detecting clouds while SST is interested in identifying clear observations. The result is that in undetermined cases VCM defaults to "clear," while the SST cloud detection defaults to "cloud." This problem is further compounded because classic SST cloud detection often flags as "cloud" all types of corrupted data, thus making a comparison with VCM difficult. The Naval Oceanographic Office (NAVOCEANO), which operationally produces a VIIRS SST product, relies on cloud detection from the NAVOCEANO Cloud Mask (NCM), adapted from cloud detection schemes designed for SST processing. To analyze VCM, the NAVOCEANO SST process was modified to attach the VCM flags to all SST retrievals. Global statistics are computed for both day and night data. The cases where NCM and/or VCM tag data as cloud-contaminated or clear can then be investigated. By analyzing the VCM individual test flags in conjunction with the status of NCM, areas where VCM can complement NCM are identified.
Trends and uncertainties in U.S. cloud cover from weather stations and satellite data
NASA Astrophysics Data System (ADS)
Free, M. P.; Sun, B.; Yoo, H. L.
2014-12-01
Cloud cover data from ground-based weather observers can be an important source of climate information, but the record of such observations in the U.S. is disrupted by the introduction of automated observing systems and other artificial shifts that interfere with our ability to assess changes in cloudiness at climate time scales. A new dataset using 54 National Weather Service (NWS) and 101 military stations that continued to make human-augmented cloud observations after the 1990s has been adjusted using statistical changepoint detection and visual scrutiny. The adjustments substantially reduce the trends in U.S. mean total cloud cover while increasing the agreement between the cloud cover time series and those of physically related climate variables such as diurnal temperature range and number of precipitation days. For 1949-2009, the adjusted time series give a trend in U.S. mean total cloud of 0.11 ± 0.22 %/decade for the military data, 0.55 ± 0.24 %/decade for the NWS data, and 0.31 ± 0.22 %/decade for the combined dataset. These trends are less than half those in the original data. For 1976-2004, the original data give a significant increase but the adjusted data show an insignificant trend of -0.17 (military stations) to 0.66 %/decade (NWS stations). The differences between the two sets of station data illustrate the uncertainties in the U.S. cloud cover record. We compare the adjusted station data to cloud cover time series extracted from several satellite datasets: ISCCP (International Satellite Cloud Climatology Project), PATMOS-x (AVHRR Pathfinder Atmospheres Extended) and CLARA-a1 (CM SAF cLoud Albedo and RAdiation), and the recently developed PATMOS-x diurnally corrected dataset. Like the station data, satellite cloud cover time series may contain inhomogeneities due to changes in the observing systems and problems with retrieval algorithms. 
Overall we find good agreement between interannual variability in most of the satellite data and that in our station data, with the diurnally corrected PATMOS-x product generally showing the best match. For the satellite period 1984-2007, trends in the U.S. mean cloud cover from satellite data vary widely among the datasets, and all are more negative than those in the station data, with PATMOS-x having the trends closest to those in the station data.
NASA Astrophysics Data System (ADS)
Yu, Tianxu; Rose, William I.; Prata, A. J.
2002-08-01
Volcanic ash in volcanic clouds can be mapped in two dimensions using two-band thermal infrared data available from meteorological satellites. Wen and Rose [1994] developed an algorithm that allows retrieval of the effective particle size, the optical depth of the volcanic cloud, and the mass of fine ash in the cloud. Both the mapping and the retrieval scheme are less accurate in the humid tropical atmosphere. In this study we devised and tested a scheme for atmospheric correction of volcanic ash mapping and retrievals. The scheme utilizes infrared (IR) brightness temperature (BT) information in two infrared channels (both between 10 and 12.5 μm) and the brightness temperature differences (BTD) to estimate the amount of BTD shift caused by lower tropospheric water vapor. It is supported by the moderate resolution transmission (MODTRAN) analysis. The discrimination of volcanic clouds in the new scheme also uses both BT and BTD data but corrects for the effects of the water vapor. The new scheme is demonstrated and compared with the old scheme using two well-documented examples: (1) the 18 August 1992 volcanic cloud of Crater Peak, Mount Spurr, Alaska, and (2) the 26 December 1997 volcanic cloud from Soufriere Hills, Montserrat. The Spurr example represents a relatively ``dry'' subarctic atmospheric condition. The new scheme sees a volcanic cloud that is about 50% larger than the old. The mean optical depth and effective radii of cloud particles are lower by 22% and 9%, and the fine ash mass in the cloud is 14% higher. The Montserrat cloud is much smaller than Spurr and is more sensitive to atmospheric moisture. It also was located in a moist tropical atmosphere. For the Montserrat example the new scheme shows larger differences, with the area of the volcanic cloud being about 5.5 times larger, the optical depth and effective radii of particles lower by 56% and 28%, and the total fine particle mass in the cloud increased by 53%. 
The new scheme can be automated and can contribute to more accurate remote volcanic ash detection. More tests are needed to find the best way to estimate the water vapor effects in real time.
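The core split-window discrimination with a first-order water-vapour correction can be sketched as follows. The sign convention (negative BTD indicating ash), the zero threshold, and the scalar correction term are illustrative assumptions, not the paper's MODTRAN-supported scheme:

```python
import numpy as np

def ash_mask(bt11, bt12, btd_shift_wv=0.0, threshold=0.0):
    """Flag pixels as volcanic ash where the split-window brightness
    temperature difference (BTD = BT11 - BT12) remains below the
    threshold after removing the estimated water-vapour BTD shift."""
    btd = (bt11 - bt12) - btd_shift_wv
    return btd < threshold
```

In a moist atmosphere the estimated shift moves marginal pixels back into the ash class, which is how the corrected scheme recovers a larger cloud area.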
Overview of MPLNET Version 3 Cloud Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip
2016-01-01
The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occur for high clouds (above 5 km MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.
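The gradient-based test for low clouds can be illustrated with a toy single-profile sketch. The slope threshold and return convention are assumptions; the operational MPLNET algorithm adds uncertainty-based tests and multitemporal averaging on top of this basic idea:

```python
import numpy as np

def low_cloud_base_index(nrb, dz, slope_threshold):
    """Return the index of the first range bin where the vertical
    derivative of the normalized signal exceeds the slope threshold
    (a sharp signal increase marking a cloud base), or None."""
    dsig = np.diff(nrb) / dz
    hits = np.nonzero(dsig > slope_threshold)[0]
    return int(hits[0]) + 1 if hits.size else None
```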
Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego
2016-01-01
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, accuracy of 96.2%, and specificity of 96.8%. PMID:28025565
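The cube-selection and projection step described above might look roughly like this (an illustrative sketch; the cube placement, its size, and the N x 3 point layout are assumptions, and the classifiers themselves are omitted):

```python
import numpy as np

def cube_projections(points, center, half_side):
    """Select points (N x 3 array of x, y, z) inside an axis-aligned
    cube and return the XY, XZ, and YZ projections that would feed
    the kNN / NBC / SVM classifiers."""
    inside = np.all(np.abs(points - center) <= half_side, axis=1)
    p = points[inside]
    return p[:, [0, 1]], p[:, [0, 2]], p[:, [1, 2]]
```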
Nelson, Kurtis; Steinwand, Daniel R.
2015-01-01
Annual disturbance maps are produced by the LANDFIRE program across the conterminous United States (CONUS). Existing LANDFIRE disturbance data from 1999 to 2010 are available, and current efforts will produce disturbance data through 2012. A tiling and compositing approach was developed to produce bi-annual images optimized for change detection. A tiled grid of 10,000 × 10,000 30 m pixels was defined for CONUS and adjusted to consolidate smaller tiles along national borders, resulting in 98 non-overlapping tiles. Data from Landsat-5, -7, and -8 were re-projected to the tile extents, masked to remove clouds, shadows, water, and snow/ice, then composited using a cosine similarity approach. The resultant images were used in a change detection algorithm to determine areas of vegetation change. This approach enabled more efficient processing compared to using single Landsat scenes, by taking advantage of overlap between adjacent paths, and allowed an automated system to be developed for the entire process.
GOES Cloud Detection at the Global Hydrology and Climate Center
NASA Technical Reports Server (NTRS)
Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)
2002-01-01
The bi-spectral threshold (BTH) technique for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method, with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud top pressure assignment. The method capitalizes on cloud and surface emissivity differences between the GOES 3.9 and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection, and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally run regional forecast model. This paper and supplemental poster will present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud top pressure will be evaluated by comparing to the NESDIS operational CO2 slicing approach.
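The minimum-composite background idea can be sketched as below. The stack layout, the threshold, and the sign convention are illustrative assumptions rather than the operational GHCC configuration:

```python
import numpy as np

def clear_background(diff_stack):
    """Minimum composite of channel-difference images over a multi-day
    stack (axis 0 = time), approximating a clear-sky background."""
    return np.nanmin(diff_stack, axis=0)

def cloud_flag(diff, background, threshold):
    """Flag pixels whose current channel difference departs from the
    composite background by more than the threshold."""
    return (diff - background) > threshold
```

Comparing against a per-pixel background, rather than a single global threshold, is what filters out emissivity variations of the surface.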
A Case Study of Reverse Engineering Integrated in an Automated Design Process
NASA Astrophysics Data System (ADS)
Pescaru, R.; Kyratsis, P.; Oancea, G.
2016-11-01
This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is described for a product belonging to the consumer goods industry, namely a footwear-type product that has a complex shape with many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot and to the preferences for the chosen model, which leads to the development of customized products.
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Douglas A.
2012-01-01
With geostationary satellite data it is possible to have a continuous record of diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to accurately model for high solar zenith angles, where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically more accurately modeled outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.
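One simple way to carry previous-scan information forward, in the spirit of the abstract, is to derive a per-pixel threshold from the prior clear and cloudy brightness temperatures. The equal-weight blend below is purely an illustrative assumption, not the paper's actual mask logic:

```python
def terminator_threshold(prev_clear_bt, prev_cloudy_bt, weight=0.5):
    """Blend the clear-sky and cloudy-sky brightness temperatures
    remembered from the previous scan into a detection threshold."""
    return weight * prev_clear_bt + (1.0 - weight) * prev_cloudy_bt

def is_cloudy(bt, threshold):
    """Infrared pixels colder than the threshold are flagged cloudy."""
    return bt < threshold
```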
Global cloud top height retrieval using SCIAMACHY limb spectra: model studies and first results
NASA Astrophysics Data System (ADS)
Eichmann, Kai-Uwe; Lelli, Luca; von Savigny, Christian; Sembhi, Harjinder; Burrows, John P.
2016-03-01
Cloud top heights (CTHs) are retrieved for the period 1 January 2003 to 7 April 2012 using height-resolved limb spectra measured with the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) on board ENVISAT (ENVIronmental SATellite). In this study, we present the retrieval code SCODA (SCIAMACHY cloud detection algorithm) based on a colour index method and test the accuracy of the retrieved CTHs in comparison to other methods. Sensitivity studies using the radiative transfer model SCIATRAN show that the method is capable of detecting cloud tops down to about 5 km and very thin cirrus clouds up to the tropopause. Volcanic particles can be detected that occasionally reach the lower stratosphere. Upper tropospheric ice clouds are observable for a nadir cloud optical thickness (COT) ≥ 0.01, which is in the subvisual range. This detection sensitivity decreases towards the lowermost troposphere. The COT detection limit for a water cloud top height of 5 km is roughly 0.1. This value is much lower than thresholds reported for passive cloud detection methods in nadir-viewing direction. Low clouds at 2 to 3 km can only be retrieved under very clean atmospheric conditions, as light scattering of aerosol particles interferes with the cloud particle scattering. We compare co-located SCIAMACHY limb and nadir cloud parameters that are retrieved with the Semi-Analytical CloUd Retrieval Algorithm (SACURA). Only opaque clouds (τN,c > 5) are detected with the nadir passive retrieval technique in the UV-visible and infrared wavelength ranges. Thus, due to the frequent occurrence of thin clouds and subvisual cirrus clouds in the tropics, larger CTH deviations are detected between both viewing geometries. Zonal mean CTH differences can be as high as 4 km in the tropics. The agreement in global cloud fields is sufficiently good. However, the land-sea contrast, as seen in nadir cloud occurrence frequency distributions, is not observed in limb geometry. 
Co-located cloud top height measurements from the limb-viewing Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on ENVISAT are compared for the period from January 2008 to March 2012. A global CTH agreement of about 1 km is observed, which is smaller than the vertical field of view of both instruments. Lower stratospheric aerosols from volcanic eruptions occasionally interfere with the cloud retrieval and inhibit the detection of tropospheric clouds. The aerosol impact on cloud retrievals was studied for the volcanoes Kasatochi (August 2008), Sarychev Peak (June 2009), and Nabro (June 2011). Long-lasting aerosol scattering is detected after these events in the Northern Hemisphere for heights above 12.5 km in tropical and polar latitudes. Aerosol top heights up to about 22 km are found in 2009, and the enhanced lower stratospheric aerosol layer persisted for about 7 months. In August 2009 about 82 % of the lower stratosphere between 30 and 70° N was filled with scattering particles, and nearly 50 % in October 2008.
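A colour-index retrieval of this kind can be caricatured as a top-down scan through limb tangent heights. The ratio definition, the threshold, and the scan direction here are illustrative assumptions, not the SCODA implementation:

```python
def colour_index(radiance_a, radiance_b):
    """Ratio of limb radiances at two wavelengths; cloud or aerosol
    scattering changes this ratio relative to the Rayleigh background."""
    return radiance_a / radiance_b

def cloud_top_height(heights, indices, threshold):
    """Scan downward from the highest tangent height and return the
    first height where the colour index exceeds the threshold."""
    for h, ci in sorted(zip(heights, indices), reverse=True):
        if ci > threshold:
            return h
    return None
```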
NASA Technical Reports Server (NTRS)
Kawamoto, Kazuaki; Minnis, Patrick; Smith, William L., Jr.
2001-01-01
One of the most perplexing problems in satellite cloud remote sensing is the overlapping of cloud layers. Although most techniques assume a 1-layer cloud system in a given retrieval of cloud properties, many observations are affected by radiation from more than one cloud layer. As such, cloud overlap can cause errors in the retrieval of many properties including cloud height, optical depth, phase, and particle size. A variety of methods have been developed to identify overlapped clouds in a given satellite imager pixel. Baum et al. (1995) used CO2 slicing and a spatial coherence method to demonstrate a possible analysis method for nighttime detection of multilayered clouds. Jin and Rossow (1997) also used a multispectral CO2 slicing technique for a global analysis of overlapped cloud amount. Lin et al. (1999) used a combination of infrared, visible, and microwave data to detect overlapped clouds over water. Recently, Baum and Spinhirne (2000) proposed a 1.6 and 11 μm bispectral threshold method. While all of these methods have made progress in solving this stubborn problem, none have yet proven satisfactory for continuous and consistent monitoring of multilayer cloud systems. It is clear that detection of overlapping clouds from passive instruments such as satellite radiometers is in an immature stage of development and requires additional research. Overlapped cloud systems also affect the retrievals of cloud properties over the ARM domains (e.g., Minnis et al. 1998) and hence should be identified as accurately as possible. To reach this goal, it is necessary to determine which information can be exploited for detecting multilayered clouds from operational meteorological satellite data used by ARM. 
This paper examines the potential information available in spectral data from the Geostationary Operational Environmental Satellite (GOES) imager and the NOAA Advanced Very High Resolution Radiometer (AVHRR) used over the ARM SGP and NSA sites to study the capability of detecting overlapping clouds.
Lightning forecasting studies using LDAR, LLP, field mill, surface mesonet, and Doppler radar data
NASA Technical Reports Server (NTRS)
Forbes, Gregory S.; Hoffert, Steven G.
1995-01-01
The ultimate goal of this research is to develop rules, algorithms, display software, and training materials that can be used by the operational forecasters who issue weather advisories for daily ground operations and launches by NASA and the United States Air Force to improve real-time forecasts of lightning. Doppler radar, Lightning Detection and Ranging (LDAR), Lightning Location and Protection (LLP), field mill (Launch Pad Lightning Warning System -- LPLWS), wind tower (surface mesonet) and additional data sets have been utilized in 10 case studies of thunderstorms in the vicinity of KSC during the summers of 1994 and 1995. These case studies reveal many intriguing aspects of cloud-to-ground, cloud-to-cloud, in-cloud, and cloud-to-air lightning discharges in relation to radar thunderstorm structure and evolution. They also enable the formulation of some preliminary working rules of potential use in the forecasting of initial and final ground strike threat. In addition, LDAR and LLP data sets from 1993 have been used to quantify the lightning threat relative to the center and edges of LDAR discharge patterns. Software has been written to overlay and display the various data sets as color imagery. However, human intervention is required to configure the data sets for proper intercomparison. Future efforts will involve additional software development to automate the data set intercomparisons, to display multiple overlay combinations in a windows format, and to allow for animation of the imagery. The software package will then be used as a tool to examine more fully the current cases and to explore additional cases in a timely manner. This will enable the formulation of more general and reliable forecasting guidelines and rules.
NASA Technical Reports Server (NTRS)
Brubaker, N.; Jedlovec, G. J.
2004-01-01
With the preliminary release of AIRS Level 1 and 2 data to the scientific community, there is a growing need for an accurate AIRS cloud mask for data assimilation studies and for producing products derived from cloud-free radiances. Current cloud information provided with the AIRS data is limited or based on simplified threshold tests. A multispectral cloud detection approach has been developed for AIRS that utilizes its hyperspectral capabilities to detect clouds based on specific cloud signatures across the shortwave and longwave infrared window regions. This new AIRS cloud mask has been validated against the existing AIRS Level 2 cloud product and cloud information derived from MODIS. Preliminary results for both day and night applications over the continental U.S. are encouraging. Details of the cloud detection approach and validation results will be presented at the conference.
Correlation Filters for Detection of Cellular Nuclei in Histopathology Images.
Ahmad, Asif; Asif, Amina; Rajpoot, Nasir; Arif, Muhammad; Minhas, Fayyaz Ul Amir Afsar
2017-11-21
Nuclei detection in histology images is an essential part of computer aided diagnosis of cancers and tumors. It is a challenging task due to the diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications, but their strength has not been explored in the medical imaging domain until now. Our experimental results show that the proposed scheme gives state-of-the-art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features, as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient, and do not require specialized or expensive computing hardware. A cloud-based webserver of the proposed method and its python implementation can be accessed at the following URL: http://faculty.pieas.edu.pk/fayyaz/software.html#corehist
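At its core, correlation-filter detection slides a learned template over the image and thresholds the response map. The plain (unnormalized, non-kernelized) correlation below is a toy sketch of that idea, not the large-margin kernelized filters of the paper:

```python
import numpy as np

def correlation_response(image, filt):
    """Valid-mode 2D correlation of an image with a filter kernel;
    peaks in the response map mark candidate nuclei locations."""
    ih, iw = image.shape
    fh, fw = filt.shape
    out = np.empty((ih - fh + 1, iw - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * filt)
    return out

def detect(image, filt, threshold):
    """Return (row, col) positions where the response exceeds the
    threshold."""
    resp = correlation_response(image, filt)
    return [tuple(ix) for ix in np.argwhere(resp > threshold)]
```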
NASA Astrophysics Data System (ADS)
Traverso, A.; Lopez Torres, E.; Fantacci, M. E.; Cerello, P.
2017-05-01
Lung cancer is one of the most lethal types of cancer, in part because it is seldom diagnosed early enough. In fact, the detection of pulmonary nodules, potential lung cancers, in Computed Tomography scans is a very challenging and time-consuming task for radiologists. To support radiologists, researchers have developed Computer-Aided Diagnosis (CAD) systems for the automated detection of pulmonary nodules in chest Computed Tomography scans. Despite the high level of technological development and the proven benefits for overall detection performance, the use of Computer-Aided Diagnosis in clinical practice is far from being a common procedure. In this paper we investigate the causes underlying this discrepancy and present a solution to tackle it: the M5L web- and cloud-based on-demand Computer-Aided Diagnosis. In addition, we show how the combination of traditional image processing techniques with state-of-the-art classification algorithms allows building a system whose performance can be much higher than that of any Computer-Aided Diagnosis developed so far. This outcome opens the possibility of using the CAD as clinical decision support for radiologists.
NASA Technical Reports Server (NTRS)
Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome
2011-01-01
The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using available channels on MSG SEVIRI. Available MODIS-style retrievals include IR window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from VIS/NIR bands, cloud effective radius from 1.6 and 3.7 μm, and cloud ice/water path. We also provide a pixel-level uncertainty estimate for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fraction for marine stratocumulus clouds. A correction algorithm that improves detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.
First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery
NASA Astrophysics Data System (ADS)
Obrock, L. S.; Gülch, E.
2018-05-01
The automated generation of a BIM-Model from sensor data is a huge challenge for the modeling of existing buildings. Currently, the measurements and analyses are time consuming, allow little automation, and require expensive equipment. We also lack automated acquisition of semantic information about objects in a building. We present first results of our approach, based on imagery and derived products, aiming at more automated modeling of interiors for a BIM building model. We examine the building parts and objects visible in the collected images using deep learning methods based on Convolutional Neural Networks. For localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation. So far, we reach a pixel accuracy of 77.2 % and a mean intersection over union of 44.2 %. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We code the extracted object types as colours of the 3D points and are thus able to uniquely classify the points in three-dimensional space. We preliminarily investigate a simple extraction method for colour and material of building parts. It is shown that the combined images are very well suited to extracting further semantic information for the BIM-Model. With the presented methods we see a sound basis for further automation of acquisition and modeling of semantic and geometric information of interior rooms for a BIM-Model.
NASA Astrophysics Data System (ADS)
Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier
2015-03-01
With the advances of PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.
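The validation metric quoted above, mean absolute percentage error between PET-only and MR-based quantification values, is straightforward to state in code:

```python
def mean_abs_percentage_error(estimates, references):
    """Mean of |estimate - reference| / |reference|, in percent."""
    errors = [abs(e - r) / abs(r) * 100.0
              for e, r in zip(estimates, references)]
    return sum(errors) / len(errors)
```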
NASA Technical Reports Server (NTRS)
Holub, R.; Shenk, W. E.
1973-01-01
Four registered channels (0.2 to 4, 6.5 to 7, 10 to 11, and 20 to 23 microns) of the Nimbus 3 Medium Resolution Infrared Radiometer (MRIR) were used to study 24-hr changes in the structure of an extratropical cyclone during a 6-day period in May 1969. Use of a stereographic-horizon map projection insured that the storm was mapped with a single perspective throughout the series and allowed the convenient preparation of 24-hr difference maps of the infrared radiation fields. Single-channel and multispectral analysis techniques were employed to establish the positions and vertical slopes of jetstreams, large cloud systems, and major features of middle and upper tropospheric circulation. Use of these techniques plus the difference maps and continuity of observation allowed the early detection of secondary cyclones developing within the circulation of the primary cyclone. An automated, multispectral cloud-type identification technique was developed, and comparisons that were made with conventional ship reports and with high-resolution visual data from the image dissector camera system showed good agreement.
Evaluate ERTS imagery for mapping and detection of changes of snowcover on land and on glaciers
NASA Technical Reports Server (NTRS)
Meier, M. F. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Preliminary results on the feasibility of mapping snow cover extent have been obtained from a limited number of ERTS-1 images of mountains in Alaska, British Columbia, and Washington. The snowline on land can be readily distinguished, except in heavy forest where such distinction appears to be virtually impossible. The snowline on very large glaciers can also be distinguished remarkably easily, leading to a convenient way to measure glacier accumulation area ratios or equilibrium line altitude. Monitoring of large surging glaciers appears to be possible, but only through observation of a change in area and/or medial moraine extent. Under certain conditions, ERTS-1 imagery appears to have high potential for mapping snow cover in mountainous areas. Distinction between snow and clouds appears to require use of the human eye, but in a cloud-free scene the snow cover is sufficiently distinct to allow use of automated techniques. This technique may prove very useful as an aid in the monitoring of the snowpack water resource and the prediction of summer snowmelt runoff volume.
Through thick and thin: quantitative classification of photometric observing conditions on Paranal
NASA Astrophysics Data System (ADS)
Kerber, Florian; Querel, Richard R.; Neureiter, Bianca; Hanuschik, Reinhard
2016-07-01
A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO's Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.
In vivo real-time cavitation imaging in moving organs
NASA Astrophysics Data System (ADS)
Arnal, B.; Baranger, J.; Demene, C.; Tanter, M.; Pernot, M.
2017-02-01
The stochastic nature of cavitation requires visualization of the cavitation cloud in real time and in a discriminative manner for the safe use of focused ultrasound therapy. This visualization is sometimes possible with standard echography, but it strongly depends on the quality of the scanner and is hindered by the difficulty of discriminating the cloud from highly reflecting tissue signals in different organs. A specific approach would thus permit clear validation of the cavitation position and activity. Detecting signals from a specific source with high sensitivity is a major problem in ultrasound imaging. Based on plane or diverging wave sonications, ultrafast ultrasonic imaging dramatically increases temporal resolution, and the larger amount of acquired data permits increased sensitivity in Doppler imaging. Here, we investigate a spatiotemporal singular value decomposition of ultrafast radiofrequency data to discriminate bubble clouds from tissue based on their different spatiotemporal motion and echogenicity during histotripsy. We introduce an automated method to determine the parameters of this filtering. This method clearly outperforms standard temporal filtering techniques, with a bubble-to-tissue contrast of at least 20 dB in vitro in a moving phantom and in vivo in porcine liver.
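The core of the filtering step can be sketched as follows. The snippet reshapes the frame stack into a Casorati (space x time) matrix, removes the leading singular components where slow, spatially coherent tissue motion concentrates, and reconstructs the frames. The toy data, the number of removed components, and the function name are illustrative assumptions; the paper's automated parameter selection is not reproduced here.

```python
import numpy as np

def svd_clutter_filter(data, n_remove):
    """Spatiotemporal SVD filter for a stack of ultrafast frames.

    data: array of shape (nz, nx, nt) -- nt frames of nz x nx samples.
    n_remove: number of leading singular components to discard; coherent
    tissue signal concentrates there, leaving the faster, less coherent
    bubble-cloud signal in the remaining components.
    """
    nz, nx, nt = data.shape
    casorati = data.reshape(nz * nx, nt)        # space x time matrix
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:n_remove] = 0.0                 # suppress the tissue subspace
    filtered = (u * s_filtered) @ vt
    return filtered.reshape(nz, nx, nt)

# Toy example: a static "tissue" background plus a weak fluctuating "bubble".
rng = np.random.default_rng(0)
frames = np.tile(rng.normal(size=(16, 16, 1)), (1, 1, 32))   # rank-1 clutter
frames[8, 8, :] += 0.1 * rng.normal(size=32)                 # incoherent signal
out = svd_clutter_filter(frames, n_remove=1)
print(round(float(np.linalg.norm(out) / np.linalg.norm(frames)), 4))
```

Most of the energy of the static background is removed, while the incoherent per-frame fluctuation survives the filter.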
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds from micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to suppress the increase of noise with distance; a value distribution equalization (VDE) method is then introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. The method detects clouds and aerosols with high accuracy, although the classification of aerosols versus clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) site and the Taihu site in China. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and a bimodal vertical distribution with maxima at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation, and the maximum frequency is at around 1 km. The annual averaged cloud frequency there is about 15% higher than that at SGP.
An efficient cloud detection method for high resolution remote sensing panchromatic imagery
NASA Astrophysics Data System (ADS)
Li, Chaowei; Lin, Zaiping; Deng, Xinpu
2018-04-01
To increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen the difference in textural features, and texture is then detected via the gray-level co-occurrence matrix computed on the resulting texture-detail image. Finally, candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and an adaptive morphological dilation is further adopted to refine them and recover thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
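The first step of this pipeline can be sketched as follows, assuming a simple mean-plus-k-standard-deviations rule for the adaptive threshold (the paper does not specify its exact rule here) and a 3x3 majority vote standing in for the median filter.

```python
import numpy as np

def coarse_cloud_mask(image, k=1.0):
    """Step 1 of the pipeline, as a rough sketch: an adaptive intensity
    threshold (mean + k*std here, an assumed rule) marks bright pixels,
    and a 3x3 majority (median) filter removes isolated bright speckle
    such as sensor noise or small bright rooftops."""
    image = np.asarray(image, dtype=float)
    candidates = image > image.mean() + k * image.std()
    padded = np.pad(candidates.astype(int), 1)
    h, w = candidates.shape
    neighbourhood = np.stack([padded[i:i + h, j:j + w]
                              for i in range(3) for j in range(3)])
    return neighbourhood.sum(axis=0) >= 5       # majority of the 3x3 window

scene = np.full((64, 64), 50.0)                 # dark ground
scene[20:40, 20:40] = 220.0                     # bright cloud-like patch
scene += np.random.default_rng(1).normal(0.0, 5.0, scene.shape)
mask = coarse_cloud_mask(scene)
print(mask[30, 30], mask[5, 5])
```

The bright patch survives the majority filter while isolated noisy pixels do not; the guided-filter and GLCM texture steps of the paper are not reproduced here.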
QCloud: A cloud-based quality control system for mass spectrometry-based proteomics laboratories
Chiva, Cristina; Olivella, Roger; Borràs, Eva; Espadas, Guadalupe; Pastor, Olga; Solé, Amanda
2018-01-01
The increasing number of biomedical and translational applications in mass spectrometry-based proteomics poses new analytical challenges and raises the need for automated quality control systems. Despite previous efforts to set standard file formats, data processing workflows and key evaluation parameters for quality control, automated quality control systems are not yet widespread among proteomics laboratories, which limits the acquisition of high-quality results, inter-laboratory comparisons and the assessment of variability of instrumental platforms. Here we present QCloud, a cloud-based system to support proteomics laboratories in daily quality assessment using a user-friendly interface, easy setup, automated data processing and archiving, and unbiased instrument evaluation. QCloud supports the most common targeted and untargeted proteomics workflows, accepts data formats from different vendors, and enables the annotation of acquired data and the reporting of incidents. A complete version of the QCloud system has been successfully developed and is now open to the proteomics community (http://qcloud.crg.eu). The QCloud system is an open-source project, publicly available under a Creative Commons Attribution-ShareAlike 4.0 License. PMID:29324744
An automated cirrus classification
NASA Astrophysics Data System (ADS)
Gryspeerdt, Edward; Quaas, Johannes; Sourdeval, Odran; Goren, Tom
2017-04-01
Cirrus clouds play an important role in determining the radiation budget of the Earth, but our understanding of the lifecycle of and controls on cirrus clouds remains incomplete. Cirrus clouds can have very different properties and development depending on their environment, particularly during their formation. However, the relevant factors often cannot be distinguished using commonly retrieved satellite data products (such as cloud optical depth). In particular, the initial cloud phase has been identified as an important factor in cloud development, but although back-trajectory-based methods can provide information on the initial cloud phase, they are computationally expensive and depend on the cloud parametrisations used in re-analysis products. In this work, a classification system (Identification and Classification of Cirrus, IC-CIR) is introduced. Using re-analysis and satellite data, cirrus clouds are separated into four main types: frontal, convective, orographic and in-situ. The properties of these classes show that the classification is able to provide useful information on the properties and initial phase of cirrus clouds, information that could not be provided by instantaneous satellite-retrieved cloud properties alone. The classification is designed to be easily implemented in global climate models, helping to improve future comparisons between observations and models and to reduce the uncertainty in cirrus cloud properties, leading to improved cloud parametrisations.
Estimation of Cirrus and Stratus Cloud Heights Using Landsat Imagery
NASA Technical Reports Server (NTRS)
Inomata, Yasushi; Feind, R. E.; Welch, R. M.
1996-01-01
A new method based upon high-spatial-resolution imagery is presented that matches cloud and shadow regions to estimate cirrus and stratus cloud heights. The distance between a cloud and its matching shadow pattern is determined using the 2D cross-correlation function, from which the cloud height is derived. The distance between the matching cloud-shadow patterns is verified manually. The derived heights are also validated through comparison with a temperature-based retrieval of cloud height. It is also demonstrated that an estimate of cloud thickness can be retrieved if both the sunside and anti-sunside of the cloud-shadow pair are apparent. The technique requires some interpretation to determine the cloud height level retrieved (i.e., the top, base, or mid-level). It is concluded that the method is accurate to within several pixels, equivalent to cloud height variations of about +/- 250 m. The results show that precise placement of the templates is unnecessary, so the development of a semi-automated procedure is possible. Cloud templates about 64 pixels on a side or larger produce consistent results. The procedure was repeated for imagery degraded to simulate lower spatial resolutions. The results suggest that a spatial resolution of 150-200 m or better is necessary to obtain stable cloud height retrievals.
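The cloud-shadow matching step can be sketched as follows: an FFT-based 2D cross-correlation recovers the offset between a cloud template and the image containing its shadow, and the cloud height then follows from the cloud-shadow distance and the solar elevation angle. The toy data and function names are illustrative; the paper's template placement and manual verification steps are not reproduced.

```python
import numpy as np

def cross_correlation_offset(image, template):
    """Offset (dy, dx) at which `template` best matches `image`, via
    FFT-based circular cross-correlation."""
    A = np.fft.fft2(image)
    B = np.fft.fft2(template, s=image.shape)    # zero-pad to image size
    corr = np.fft.ifft2(A * np.conj(B)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

def cloud_height_from_shadow(distance_m, solar_elevation_deg):
    """Cloud height from the horizontal cloud-shadow distance and the
    solar elevation angle (simple flat-terrain geometry)."""
    return distance_m * np.tan(np.radians(solar_elevation_deg))

# Toy test: the "shadow" is the cloud pattern shifted by (12, 7) pixels.
rng = np.random.default_rng(2)
cloud = rng.random((8, 8))
image = np.zeros((64, 64))
image[12:20, 7:15] = cloud
dy, dx = cross_correlation_offset(image, cloud)
print(dy, dx)
height = cloud_height_from_shadow(distance_m=1000.0, solar_elevation_deg=30.0)
print(round(height, 1))
```

The correlation peak recovers the imposed (12, 7) shift, and a 1 km cloud-shadow distance under a 30° sun corresponds to a height of about 577 m.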
iCHRCloud: Web & Mobile based Child Health Imprints for Smart Healthcare.
Singh, Harpreet; Mallaiah, Raghuram; Yadav, Gautam; Verma, Nitin; Sawhney, Ashu; Brahmachari, Samir K
2017-11-29
Reducing child mortality through quality care is a prime concern of all nations. In the current IT era, the healthcare industry needs to focus on adopting information technology in healthcare services. Barring a few preliminary attempts to digitize basic hospital administrative and clinical functions, child health and vaccination records in India are still maintained on paper. Moreover, errors in manually plotting parameters on growth charts result in missed opportunities for early detection of growth disorders in children. To address these concerns, we present India's first hospital-linked, affordable, automated vaccination and real-time child growth monitoring cloud-based application: Integrated Child Health Record cloud (iCHRcloud). The application is based on the HL7 protocol, enabling integration with a hospital's HIS/EMR system. It provides a Java (Enterprise Service Bus and Hibernate) based web portal for doctors and a mobile application for parents, enhancing doctor-parent engagement. It leverages Highcharts to automate chart preparation and delivers data via push notifications (GCM and APNS) to parents on iOS and Android mobile platforms. iCHRcloud was also recognized as one of the best innovative solutions in three nationwide challenges in India in 2016. iCHRcloud offers a seamless, secure (256-bit HTTPS) and sustainable solution to reduce child mortality. A detailed analysis of preliminary data from 16,490 child health records highlights the diverse needs of various demographic regions. A primary lesson is the need for better validation strategies to meet the customized requirements of the entire population. This paper presents a first glimpse of the data and the power of analytics in a policy framework.
Forest Cover Mapping in Iskandar Malaysia Using Satellite Data
NASA Astrophysics Data System (ADS)
Kanniah, K. D.; Mohd Najib, N. E.; Vu, T. T.
2016-09-01
Malaysia has the third-highest loss of forest cover in the world. Therefore, timely information on forest cover is required to help the government ensure that the remaining forest resources are managed in a sustainable manner. This study aims to map and detect changes of forest cover (deforestation and disturbance) in the Iskandar Malaysia region in the south of Peninsular Malaysia between 1990 and 2010 using Landsat satellite images. The Carnegie Landsat Analysis System-Lite (CLASlite) programme was used to classify forest cover from the Landsat images. This software is able to mask out clouds, cloud shadows, terrain shadows, and water bodies and to atmospherically correct the images using the 6S radiative transfer model. An Automated Monte Carlo Unmixing technique embedded in CLASlite was used to unmix each Landsat pixel into fractions of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and soil surface (S). Forest and non-forest areas were produced from the fractional cover images using appropriate threshold values of PV, NPV and S. CLASlite was found to classify forest cover in Iskandar Malaysia with a difference of only 14% (1990) and 5% (2010) compared to the forest land use map produced by the Department of Agriculture, Malaysia. Nevertheless, the automated CLASlite workflow used in this study was found not to exclude other vegetation types, especially rubber and oil palm, which have reflectance similar to forest. Currently, rubber and oil palm are discriminated from forest manually using land use maps. Therefore, the CLASlite algorithm needs further adjustment to exclude these vegetation types and classify only forest cover.
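The forest/non-forest decision from fractional covers can be sketched as below. The threshold values and function name are illustrative assumptions; CLASlite's operational thresholds are scene-dependent and not reproduced here.

```python
import numpy as np

def forest_mask(pv, npv, soil, pv_min=0.80, npv_max=0.30, soil_max=0.15):
    """Forest/non-forest decision from per-pixel fractional covers.

    pv, npv and soil are the photosynthetic-vegetation, non-photosynthetic-
    vegetation and bare-soil fractions from spectral unmixing. The threshold
    values here are illustrative only, not CLASlite's tuned values.
    """
    pv, npv, soil = (np.asarray(a, dtype=float) for a in (pv, npv, soil))
    return (pv >= pv_min) & (npv <= npv_max) & (soil <= soil_max)

# Two hypothetical pixels: dense forest canopy vs. cleared ground.
pv = np.array([0.90, 0.30])
npv = np.array([0.05, 0.20])
soil = np.array([0.05, 0.50])
is_forest = forest_mask(pv, npv, soil)
print(is_forest)
```

As the abstract notes, a purely spectral rule like this cannot separate forest from rubber or oil palm with similar reflectance; that step remains manual.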
An automated extinction and sky brightness monitor for the Indian Astronomical Observatory, Hanle
NASA Astrophysics Data System (ADS)
Sharma, Tarun Kumar; Parihar, Padmakar; Banyal, R. K.; Dar, Ajaz Ahmad; Kemkar, P. M. M.; Stanzin, Urgain; Anupama, G. C.
2017-09-01
We have developed a simple and portable device that makes precise and automated measurements of night-sky extinction. Our instrument uses a commercially available telephoto lens for light collection, retrofitted to a custom-built telescope mount, a thermoelectrically cooled CCD for imaging, and a compact enclosure with electronic control to facilitate remote observations. The instrument is also capable of measuring the sky brightness and detecting the presence of thin clouds that would otherwise remain unnoticed. The measurements of sky brightness made by our simple device are more accurate than those made using a large telescope. Another capability of the device is that it can provide an instantaneous measurement of atmospheric extinction, which is extremely useful for exploring the nature of short-term extinction variation. The instrument was designed and developed primarily to characterize and thoroughly investigate the Indian Astronomical Observatory (IAO), Hanle for the establishment of India's future large-telescope project. The device was installed at the IAO, Hanle in May 2014. In this paper, we present the instrument details and discuss the results of extinction data collected over about 250 nights.
Atmospheric Science Data Center
2013-04-19
… The cloud height map was produced by automated computer recognition of the distinctive spatial features between images …
NASA Astrophysics Data System (ADS)
Trepte, Q. Z.; Minnis, P.; Palikonda, R.; Bedka, K. M.; Sun-Mack, S.
2011-12-01
Accurate detection of cloud amount and distribution using satellite observations is crucial in determining cloud radiative forcing and the Earth's energy budget. The CERES-MODIS (CM) Edition 4 cloud mask is a global cloud detection algorithm for application to Terra and Aqua MODIS data with the aid of other ancillary data sets. It is used operationally for NASA's Clouds and the Earth's Radiant Energy System (CERES) project. The LaRC AVHRR cloud mask, which uses only five spectral channels, is based on a subset of the CM cloud mask, which employs twelve MODIS channels. The LaRC mask is applied to AVHRR data for the NOAA Climate Data Record Program. Comparisons among the CM Ed4 and LaRC AVHRR cloud masks and the CALIPSO Vertical Feature Mask (VFM) constitute a powerful means for validating and improving cloud detection globally. They also help us understand the strengths and limitations of the various cloud retrievals, which use either active or passive satellite sensors. In this paper, individual comparisons will be presented for different types of clouds over various surfaces, including daytime and nighttime, and polar and non-polar regions. Additionally, the statistics of global, regional, and zonal cloud occurrence and amount from the CERES Ed4 and AVHRR cloud masks and the CALIPSO VFM will be discussed.
A Semantic Approach to Automate Service Management in the Cloud
2011-06-01
… typically include a large human element. A key barrier preventing organizations from successfully managing services on the cloud is the lack of an …
Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.
NASA Astrophysics Data System (ADS)
Miller, Shawn William
The relation between cloud type and rain rate has been investigated here using different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm that combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement for the combined algorithm when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute-averaged radar rain rates at 8-km resolution, produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or the eastern Gulf of Mexico.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.
2014-12-01
Cloud detection at nighttime poses a real problem to researchers because of the lack of optimum sensors that can specifically detect clouds during this time of day. Hence, lidars and satellites are currently among the instruments used to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they act as barriers to thermal radiation from the Earth, reflecting this radiation back toward the surface and thereby slowing the rate of temperature decrease in the atmosphere at night. The objective of this study is to detect cloud occurrence at nighttime in order to study patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon PowerShot A2300) is operated continuously to capture nighttime clouds. The camera sits inside a weatherproof box with a glass cover on the rooftop of the Manila Observatory building and photographs the sky every 5 min to observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds: in grayscale, pixels with clouds have greater values than pixels without. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to separate cloudy from cloud-free pixels. Figs. 1a and 1b are sample unprocessed pictures of cloudless (May 22-23, 2014) and cloudy (May 23-24, 2014) night skies, respectively. Figs. 1c and 1d show the corresponding percentages of nighttime cloud occurrence. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel contains cloud to the total number of observations. Fig. 1c shows less than 50% cloud occurrence, while Fig. 1d shows greater cloud occurrence than Fig. 1c. These graphs demonstrate the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is currently being implemented. In regions where scientific data are scarce, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
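The thresholding rule described above can be sketched as follows, assuming 8-bit grayscale images so that 0.34 of the maximum pixel value corresponds to about 87 counts; the function name and toy frames are illustrative.

```python
import numpy as np

def cloud_occurrence(grayscale_stack, fraction=0.34, max_value=255.0):
    """Per-pixel nighttime cloud occurrence from a stack of grayscale frames.

    A pixel is counted as cloudy when its grayscale value exceeds `fraction`
    of the maximum pixel value (0.34, i.e. about 87 counts for 8-bit images,
    following the threshold reported in the abstract). Occurrence is the
    ratio of cloudy counts to the total number of frames.
    """
    cloudy = np.asarray(grayscale_stack, dtype=float) > fraction * max_value
    return cloudy.mean(axis=0)

# Toy stack of four frames: a bright cloudy patch in the first two only.
frames = np.full((4, 10, 10), 20.0)     # dark, cloud-free sky
frames[:2, :5, :5] = 200.0              # bright cloudy corner
occ = cloud_occurrence(frames)
print(occ[0, 0], occ[9, 9])
```

A pixel cloudy in two of four frames gets an occurrence of 0.5, matching the per-pixel ratio defined in the abstract.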
Spatially Varying Spectral Thresholds for MODIS Cloud Detection
NASA Technical Reports Server (NTRS)
Haines, S. L.; Jedlovec, G. J.; Lafontaine, F.
2004-01-01
The EOS science team has developed an elaborate global MODIS cloud detection procedure, and the resulting MODIS product (MOD35) is used in the retrieval process of several geophysical parameters to mask out clouds. While the global application of the cloud detection approach appears quite robust, the product has some shortcomings on the regional scale, often over-determining clouds in a variety of settings, particularly at night. This over-determination of clouds can cause a reduction in the spatial coverage of MODIS-derived clear-sky products. To minimize this problem, a new regional cloud detection method for use with MODIS data has been developed at NASA's Global Hydrology and Climate Center (GHCC). The approach is similar to that used by the GHCC for GOES data over the continental United States. Several spatially varying thresholds are applied to MODIS spectral data to produce a set of tests for detecting clouds. The thresholds are valid for each MODIS orbital pass and are derived from 20-day composites of GOES channels with wavelengths similar to those of MODIS. This paper and accompanying poster introduce the GHCC MODIS cloud mask, provide some examples, and present some preliminary validation.
Reviews on Security Issues and Challenges in Cloud Computing
NASA Astrophysics Data System (ADS)
An, Y. Z.; Zaaba, Z. F.; Samsudin, N. F.
2016-11-01
Cloud computing is an Internet-based computing service, provided by third parties, that allows the sharing of resources and data among devices. It is widely used in many organizations today and is becoming more popular because it changes how the Information Technology (IT) of an organization is organized and managed. It provides many benefits, such as simplicity and lower costs, almost unlimited storage, minimal maintenance, easy utilization, backup and recovery, continuous availability, quality of service, automated software integration, scalability, flexibility and reliability, easy access to information, elasticity, quick deployment and a lower barrier to entry. With the increasing use of cloud computing services in this new era, however, the security of cloud computing has become a challenge. Cloud computing must be safe and secure enough to ensure the privacy of its users. This paper first outlines the architecture of cloud computing, then discusses the most common security issues of using the cloud and some solutions to those issues, since security is one of the most critical aspects of cloud computing given the sensitivity of users' data.
Multilayer Cloud Detection with the MODIS Near-Infrared Water Vapor Absorption Band
NASA Technical Reports Server (NTRS)
Wind, Galina; Platnick, Steven; King, Michael D.; Hubanks, Paul A.; Pavolonis, Michael J.; Heidinger, Andrew K.; Yang, Ping; Baum, Bryan A.
2009-01-01
Data Collection 5 processing for the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the NASA Earth Observing System (EOS) Terra and Aqua spacecraft includes an algorithm for detecting multilayered clouds in daytime. The main objective of this algorithm is to detect multilayered cloud scenes, specifically optically thin ice cloud overlying a lower-level water cloud, that present difficulties for retrieving cloud effective radius using single-layer plane-parallel cloud models. The algorithm uses the MODIS 0.94 micron water vapor band along with CO2 bands to obtain two above-cloud precipitable water retrievals, the difference of which, in conjunction with additional tests, provides a map of where multilayered clouds might potentially exist. The presence of a multilayered cloud results in a large difference between the above-cloud retrievals from the CO2 and 0.94 micron methods. In this paper the MODIS multilayered cloud algorithm is described, results of applying the algorithm to example scenes are shown, and global statistics for multilayered clouds as observed by MODIS are discussed. A theoretical study of the algorithm behavior for simulated multilayered clouds is also given. Results are compared to two other comparable passive imager methods. A set of standard cloudy atmospheric profiles developed during the course of this investigation is also presented. The results lead to the conclusion that the MODIS multilayer cloud detection algorithm has some skill in identifying multilayered clouds with different thermodynamic phases.
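The core test can be sketched as below: pixels where the two above-cloud precipitable water retrievals disagree strongly are flagged as potential multilayer cloud. The 0.5 cm threshold and the use of a simple absolute difference are illustrative assumptions; the operational MODIS algorithm applies additional tests described in the paper.

```python
import numpy as np

def multilayer_candidate(pcw_nir, pcw_co2, diff_threshold=0.5):
    """Flag potential multilayer-cloud pixels where the 0.94-micron and
    CO2-band above-cloud precipitable water retrievals (in cm) disagree.

    Threshold and difference form are illustrative, not operational values.
    """
    diff = np.abs(np.asarray(pcw_nir, float) - np.asarray(pcw_co2, float))
    return diff > diff_threshold

# Hypothetical retrievals: the second pixel shows a large disagreement.
flags = multilayer_candidate([0.9, 2.3], [1.0, 1.1])
print(flags)
```

Only the second pixel, where the two retrievals differ by more than the threshold, is flagged as a multilayer candidate.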
Dynamic electronic institutions in agent oriented cloud robotic systems.
Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice
2015-01-01
The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas to robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic in cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In dynamic electronic institutions (DEIs), the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iverson, Aaron
Ra Power Management (RPM) has developed a cloud-based software platform that manages the financial and operational functions of third-party financed solar projects throughout their lifecycle. RPM's software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, the platform will help developers save money by improving their operating margins.
Detecting Super-Thin Clouds With Polarized Light
NASA Technical Reports Server (NTRS)
Sun, Wenbo; Videen, Gorden; Mishchenko, Michael I.
2014-01-01
We report a novel method for detecting cloud particles in the atmosphere. Solar radiation backscattered from clouds is studied with both satellite data and a radiative transfer model. A distinct feature is found in the angle of linear polarization of solar radiation that is backscattered from clouds. The dominant backscattered electric field from the clear-sky Earth-atmosphere system is nearly parallel to the Earth surface. However, when clouds are present, this electric field can rotate significantly away from the parallel direction. Model results demonstrate that this polarization feature can be used to detect super-thin cirrus clouds having an optical depth of only 0.06 and super-thin liquid water clouds having an optical depth of only 0.01. Such clouds are too thin to be sensed using any current passive satellite instruments.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2017-12-01
Cloud detection nowadays is primarily achieved with sensors aboard satellites, including MODIS Aqua, MODIS Terra, and AIRS, whose products include nighttime cloud fraction. Ground-based instruments are, however, only secondary to these satellites when it comes to cloud detection. Nonetheless, these ground-based instruments (e.g., LIDARs, ceilometers, and sky cameras) offer significant datasets about a particular region's cloud cover. For nighttime cloud detection, satellite-based instruments are more reliably and prominently used than ground-based ones. If a ground-based instrument is operated at night, it therefore ought to produce reliable scientific datasets. The objective of this study is to compare the results of a nighttime ground-based instrument (sky camera) with those of MODIS Aqua and MODIS Terra. A Canon PowerShot A2300 is placed on top of Manila Observatory (14.64N, 121.07E) and configured to take images of the night sky at 5-min intervals. To detect pixels with clouds, the pictures are converted to grayscale. A thresholding technique is used to separate pixels with and without clouds: if the pixel value is greater than 17, it is classified as cloud; otherwise, as non-cloud (Gacal et al., 2016). This algorithm is applied to the data gathered from Oct 2015 to Oct 2016. A scatter plot between satellite cloud fraction over the area bounded by 14.2877N, 120.9869E and 14.7711N, 121.4539E and ground-measured cloud cover is graphed to find the monthly correlation. During the wet season (June - November), satellite nighttime cloud fraction vs ground-measured cloud cover produces acceptable R2 values (Aqua = 0.74, Terra = 0.71, AIRS = 0.76). During the dry season, however, poor R2 values are obtained (AIRS = 0.39, Aqua & Terra = 0.01). The high correlation during the wet season can be attributed to a high probability that the camera and satellite see the same clouds.
During the dry season, however, the satellite sees high-altitude clouds that the camera cannot detect from the ground, since it relies on city lights reflected from low-level clouds. Despite this disparity, the ground-based camera has the advantage of detecting haze and thin clouds near the ground that are hardly or never detected by the satellites.
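The grayscale thresholding step described above (pixel value > 17 means cloud) can be sketched as follows. The luminance weights used for the RGB-to-grayscale conversion are the common ITU-R 601 values and are an assumption, since the paper does not state its exact conversion:

```python
import numpy as np

THRESHOLD = 17  # grayscale cutoff from Gacal et al. (2016)

def cloud_cover_fraction(rgb_image):
    """Convert an RGB night-sky image to grayscale and classify each
    pixel: value > THRESHOLD -> cloud, otherwise non-cloud. Returns
    the cloud-cover fraction (cloudy pixels / total pixels)."""
    rgb = np.asarray(rgb_image, dtype=float)
    # Standard luminance weights; the paper's exact conversion may differ.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cloud_mask = gray > THRESHOLD
    return cloud_mask.mean()
```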
Molecular clouds without detectable CO
NASA Technical Reports Server (NTRS)
Blitz, Leo; Bazell, David; Desert, F. Xavier
1990-01-01
The clouds identified by Desert, Bazell, and Boulanger (DBB clouds) in their search for high-latitude molecular clouds were observed in the CO (J = 1-0) line, but only 13 percent of the sample was detected. The remaining 87 percent are diffuse molecular clouds with CO abundances of about 10 to the -6th, a typical value for diffuse clouds. This hypothesis is shown to be consistent with Copernicus data. The DBB clouds are shown to be an essentially complete catalog of diffuse molecular clouds in the solar vicinity. The total molecular surface density in the vicinity of the sun is then only about 20 percent greater than the 1.3 solar masses/sq pc determined by Dame et al. (1987). Analysis of the CO detections indicates that there is a sharp threshold in extinction of 0.25 mag before CO is detectable, derived from the IRAS I(100) micron threshold of 4 MJy/sr. This threshold is presumably where the CO abundance exhibits a sharp increase.
NASA Astrophysics Data System (ADS)
Shea, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Zelinka, M. D.
2016-12-01
Detecting trends in climate variables on global, decadal scales requires highly accurate, stable measurements and retrieval algorithms. Trend uncertainty depends on its magnitude, natural variability, and instrument and retrieval algorithm accuracy and stability. We applied a climate accuracy framework to quantify the impact of absolute calibration on cloud property trend uncertainty. The cloud properties studied were cloud fraction, effective temperature, optical thickness, and effective radius retrieved using the Clouds and the Earth's Radiant Energy System (CERES) Cloud Property Retrieval System, which uses Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Modeling experiments from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) agree that net cloud feedback is likely positive but disagree regarding its magnitude, mainly due to uncertainty in shortwave cloud feedback. With the climate accuracy framework we determined the time to detect trends for instruments with various calibration accuracies. We estimated a relationship between cloud property trend uncertainty, cloud feedback, and Equilibrium Climate Sensitivity, and also between effective radius trend uncertainty and aerosol indirect effect trends. The direct relationship between instrument accuracy requirements and climate model output provides the level of instrument absolute accuracy needed to reduce climate model projection uncertainty. Different cloud types have varied radiative impacts on the climate system depending on several attributes, such as their thermodynamic phase, altitude, and optical thickness. Therefore, we also conducted these studies by cloud type for a clearer understanding of the instrument accuracy requirements needed to detect changes in cloud properties.
Combining this information with the radiative impact of different cloud types helps to prioritize among requirements for future satellite sensors and understanding the climate detection capabilities of existing sensors.
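The abstract does not give the trend-detection formulas it uses, but a common starting point for the "time to detect a trend" given natural variability and autocorrelation is the Weatherhead et al. (1998) approximation, sketched here; the factor 3.3 corresponds to roughly 95% detection confidence:

```python
import math

def years_to_detect(trend_per_year, noise_sd, lag1_autocorr, factor=3.3):
    """Approximate number of years of observations needed to detect a
    linear trend, following the widely used Weatherhead et al. (1998)
    formula. noise_sd is the standard deviation of the natural-variability
    residuals; lag1_autocorr is their lag-1 autocorrelation."""
    phi = lag1_autocorr
    return (factor * noise_sd / abs(trend_per_year)
            * math.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)
```

As expected from the framework, detection time grows with natural variability and with autocorrelation, and shrinks as the trend magnitude grows.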
NASA Astrophysics Data System (ADS)
Kim, H. W.; Yeom, J. M.; Woo, S. H.
2017-12-01
Over thin cloud regions, a satellite simultaneously detects reflectance from the thin cloud and the land surface. Since this mixed reflectance is not pure cloud information, the background surface reflectance should be eliminated to accurately distinguish thin clouds such as cirrus. In previous research, Kim et al. (2017) developed a cloud masking algorithm using the Geostationary Ocean Color Imager (GOCI), one of the principal instruments on the Communication, Ocean, and Meteorology Satellite (COMS). Although GOCI has only 8 spectral channels covering the visible and near-infrared ranges, the cloud masking result is quantitatively reasonable when compared with the MODIS cloud mask (Collection 6 MYD35). In particular, validation against Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data showed that this cloud masking algorithm is especially effective for thin cloud detection, because the method concentrates on eliminating background surface effects from the top-of-atmosphere (TOA) reflectance. By applying the difference between the TOA reflectance and a bi-directional reflectance distribution function (BRDF) model-based background surface reflectance, both thick and thin cloud areas can be discriminated without the infrared channels usually used for cloud detection. Moreover, when the cloud mask result was used as input for simulating the BRDF model, and the optimized BRDF model-based surface reflectance was used for an optimized cloud masking, the probability of detection (POD) was higher than that of the original cloud mask. In this study, we examine the correlation between cloud optical depth (COD) and the cloud mask result. Cloud optical depth depends mostly on cloud thickness and on the nature and size of the cloud contents; COD ranges from less than 0.1 for thin clouds to over 1000 for large cumulus, due to scattering by droplets.
With the cloud optical depth from CALIPSO, the cloud masking result can be further improved, since it indicates how optically deep each cloud is. To validate the cloud mask and the correlation result, an atmospheric retrieval will be computed to compare the difference between the TOA reflectance and the simulated surface reflectance.
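The core masking idea, flagging pixels whose TOA reflectance exceeds the BRDF-modeled clear-sky surface reflectance, can be sketched in a few lines; the 0.05 threshold is illustrative only, not the algorithm's tuned value:

```python
import numpy as np

def brdf_cloud_mask(toa_reflectance, brdf_surface_reflectance,
                    threshold=0.05):
    """Flag a pixel as cloudy when its TOA reflectance exceeds the
    BRDF-modeled clear-sky surface reflectance by more than a threshold.
    Works element-wise on arrays of matching shape."""
    diff = np.asarray(toa_reflectance) - np.asarray(brdf_surface_reflectance)
    return diff > threshold
```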
Ahn, M. H.; Han, D.; Won, H. Y.; ...
2015-02-03
For better utilization of the ground-based microwave radiometer, it is important to detect the presence of cloud in the measured data. Here, we introduce a simple and fast cloud detection algorithm that uses the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated cloud-free only when both the spectral and temporal tests confirm it. Overall, most thick and uniform clouds are successfully detected by the spectral test, while broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, comparable with or better than those of previous similar approaches. Two thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in probability-of-detection values that vary with cloud-base altitude: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of its spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
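The two-step test described above can be sketched as follows. The empirical clear-sky Tb coefficients (a, b, c), the spectral margin, and the temporal threshold are all placeholders; the paper fits its own coefficients and uses a dynamic temporal threshold rather than the fixed one shown here:

```python
import numpy as np

def predicted_clear_sky_tb(t_surface, vapor_pressure,
                           a=-60.0, b=1.0, c=2.0):
    """Empirical clear-sky brightness temperature (K). The coefficients
    a, b, c are placeholders, not the paper's fitted values."""
    return a + b * t_surface + c * vapor_pressure

def is_cloud_free(tb_series, t_surface, vapor_pressure,
                  spectral_margin=10.0, temporal_threshold=0.5):
    """Two-step test: (1) spectral -- measured Tb must stay close to the
    predicted clear-sky Tb; (2) temporal -- Tb variability over the
    one-minute window must stay below a clear-sky threshold.
    Cloud-free only when both tests pass."""
    tb = np.asarray(tb_series, dtype=float)
    clear_tb = predicted_clear_sky_tb(t_surface, vapor_pressure)
    spectral_ok = abs(tb.mean() - clear_tb) < spectral_margin
    temporal_ok = tb.std() < temporal_threshold
    return bool(spectral_ok and temporal_ok)
```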
Atmospheric Science Data Center
2013-04-19
... right is the cloud-top height field derived using automated computer processing of the data from multiple MISR cameras. Relative height ... NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, D.C. The Terra spacecraft is managed ...
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Gala; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.
2008-01-01
CALIPSO and CloudSat, launched in June 2006, provide global active remote sensing measurements of clouds and aerosols that can be used to validate a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort of the MODIS Atmosphere Team, referred to as the "Collection 5" stream, includes a research-level multilayer cloud detection algorithm that uses both thermodynamic phase information, derived from a combination of solar and thermal emission bands to discriminate layers of different phases, and true layer-separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of the 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 h resolution. Using pixel-level collocations of MODIS Aqua, CALIOP, and CloudSat radar measurements, we investigate the global performance of the thermodynamic phase and multilayer cloud detection algorithms.
Comparative verification between GEM model and official aviation terminal forecasts
NASA Technical Reports Server (NTRS)
Miller, Robert G.
1988-01-01
The Generalized Exponential Markov (GEM) model uses the local standard airways observation (SAO) to predict hour-by-hour the following elements: temperature, pressure, dew point depression, first and second cloud-layer height and amount, ceiling, total cloud amount, visibility, wind, and present weather conditions. GEM is superior to persistence at all projections for all elements in a large independent sample. A minute-by-minute GEM forecasting system utilizing the Automated Weather Observation System (AWOS) is under development.
2007-02-01
determined by its neighbors’ correspondence. Thus, the algorithm consists of four main steps: ICP registration of the base and nipple regions of the...the nipple and the base of the breast, as a location for accurately determining initial correspondence. However, due to the compression, the nipple of...cloud) is translated and lies at a different angle than the nipple of the pendant breast (the source point cloud). By minimizing the average distance
MR-based detection of individual histotripsy bubble clouds formed in tissues and phantoms.
Allen, Steven P; Hernandez-Garcia, Luis; Cain, Charles A; Hall, Timothy L
2016-11-01
To demonstrate that MR sequences can detect individual histotripsy bubble clouds formed inside intact tissues. A line-scan and an EPI sequence were sensitized to histotripsy by inserting a bipolar gradient whose lobes bracketed the lifespan of a histotripsy bubble cloud. Using a 7 Tesla, small-bore scanner, these sequences monitored histotripsy clouds formed in an agar phantom and in vitro porcine liver and brain. The bipolar gradients were adjusted to apply phase with k-space frequencies of 10, 300, or 400 cm^-1. Acoustic pressure amplitude was also varied. Cavitation was simultaneously monitored using a passive cavitation detection system. Each image captured local signal loss specific to an individual bubble cloud. In the agar phantom, this signal loss appeared only when the transducer output exceeded the cavitation threshold pressure. In tissues, bubble clouds were immediately detected when the gradients created phase with k-space frequencies of 300 and 400 cm^-1. When the gradients created phase with a k-space frequency of 10 cm^-1, individual bubble clouds were not detectable until many acoustic pulses had been applied to the tissue. Cavitation-sensitive MR sequences can detect single histotripsy bubble clouds formed in biologic tissue. Detection is influenced by the sensitizing gradients and treatment history. Magn Reson Med 76:1486-1493, 2016. © 2015 International Society for Magnetic Resonance in Medicine.
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways in which this information can be applied to cloud classification, as new methods or as extensions of existing classification schemes, are explored. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data and to apply first-order calibration and zenith-angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas that describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
NASA Astrophysics Data System (ADS)
Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.
2016-12-01
In the last decade, high-resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological, and anthropogenic processes. However, the trade-off between spatial coverage and spatio-temporal resolution has limited detailed, process-level change detection over large (i.e., continental) scales. The ArcticDEM project utilized over 300,000 WorldView image pairs to produce a nearly 100%-coverage elevation model (above 60°N), offering the first polar, high-resolution (2-8 m by region) dataset, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site and utilizes cloud computing resources to provide a temporally sorted and differenced dataset ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science without having to manage thousands of files or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.
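The differencing stage of such a workflow, once the DEMs are co-registered, reduces to subtracting two elevation grids. The median-offset removal below is a crude stand-in for TopoDiff's actual co-registration step and is an assumption for illustration:

```python
import numpy as np

def difference_dems(dem_early, dem_late, remove_vertical_bias=True):
    """Difference two co-registered DEM arrays (late minus early).
    Optionally remove the median vertical offset as a crude proxy for
    vertical co-registration over mostly stable terrain."""
    diff = np.asarray(dem_late, dtype=float) - np.asarray(dem_early, dtype=float)
    if remove_vertical_bias:
        diff = diff - np.nanmedian(diff)
    return diff
```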
Cloud Environment Automation: from infrastructure deployment to application monitoring
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.
2017-10-01
The potential offered by the cloud paradigm is often limited by technical issues, rules, and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult and time-consuming for infrastructure maintainers. In this paper we present the research activity, carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop, and test new open, interoperable, on-demand technological solutions in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and its outcomes aim at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.
12348_GLOBE_Observer_App_Promo
2016-08-25
GLOBE Observer invites you to make environmental observations that complement NASA satellite observations to help scientists studying Earth and the global environment. Version 1.1 includes GLOBE Clouds, which allows you to photograph clouds and record sky observations and compare them with NASA satellite images. GLOBE is now the major source of human observations of clouds, which provide more information than automated systems. Future versions of GLOBE Observer will add additional tools for you to use as a citizen environmental scientist. By using the GLOBE Observer app, you are joining the GLOBE community and contributing important scientific data to NASA and GLOBE, your local community, and students and scientists worldwide. New and interested users are encouraged to go to observer.globe.gov to learn more about the GLOBE program, or learn more about the GLOBE Clouds protocol.
Automated cloud screening of AVHRR imagery using split-and-merge clustering
NASA Technical Reports Server (NTRS)
Gallaudet, Timothy C.; Simpson, James J.
1991-01-01
Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.
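The first stage of the PCTSMC algorithm, the principal component transformation of the multichannel pixels, can be sketched as below; the subsequent split-and-merge clustering operates in this rotated space and is omitted here:

```python
import numpy as np

def principal_component_transform(channels):
    """Rotate multichannel pixel vectors onto their principal components.
    channels: array of shape (n_pixels, n_bands), e.g. AVHRR band values
    (or spectral differences) per pixel. Returns the same shape, with
    columns ordered by decreasing explained variance."""
    X = np.asarray(channels, dtype=float)
    Xc = X - X.mean(axis=0)                  # center each band
    cov = np.cov(Xc, rowvar=False)           # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return Xc @ eigvecs[:, order]
```

Clustering in the decorrelated principal-component space, rather than raw band space, is what lets the subsequent split-and-merge step separate natural classes objectively.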
NASA Astrophysics Data System (ADS)
Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.
2006-12-01
During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors, such as cloud radars and lidars, that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) team has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.
A cloud masking algorithm for EARLINET lidar systems
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina
2015-04-01
Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed at the highest available time and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detection due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and highlight the current strengths and limitations of the algorithm using lidar scenes from different lidar systems at different locations across Europe.
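The two detection criteria named above, signal magnitude after range-based normalization and space-time edge strength from the Sobel operator, can be sketched as follows; both thresholds are illustrative, not the algorithm's tuned values:

```python
import numpy as np

def normalize(signal):
    """Scale uncalibrated lidar returns into [0, 1] using the range of
    observed values, so thresholds transfer across lidar systems."""
    s = np.asarray(signal, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def sobel_edges(image):
    """Time-height edge strength via the Sobel operator (no SciPy)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(image, 1, mode="edge")
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def lidar_cloud_mask(signal, magnitude_thresh=0.8, edge_thresh=2.0):
    """Flag cloud where the normalized signal is strong AND its edges are
    sharp; strong but diffuse aerosol layers are not flagged."""
    norm = normalize(signal)
    return (norm > magnitude_thresh) & (sobel_edges(norm) > edge_thresh)
```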
Solar Asset Management Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iverson, Aaron; Zviagin, George
Ra Power Management (RPM) has developed a cloud based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.
Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission
NASA Technical Reports Server (NTRS)
Wilson, Michael J.; Oreopoulos, Lazaros
2011-01-01
The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are composed fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments on the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation passes through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities relative to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with a more limited arsenal of channels. Since there are no Landsat data yet that include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme outperforms the earlier one, which was not very good at thin cloud detection.
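A minimal illustration of the kind of test being added: a visible-band brightness test catches thick clouds, while a 1.38-micron cirrus-band test catches thin high clouds that the visible test misses. The band choice and threshold values here are assumptions for illustration, not the scheme's actual tests:

```python
import numpy as np

def simple_cloud_tests(r_visible, r_cirrus,
                       visible_thresh=0.3, cirrus_thresh=0.02):
    """Two-test cloud mask on reflectance arrays: bright in a visible
    band (thick cloud) OR elevated in a 1.38-micron cirrus band (thin
    high cloud). Thresholds are illustrative only."""
    thick = np.asarray(r_visible) > visible_thresh
    thin_high = np.asarray(r_cirrus) > cirrus_thresh
    return thick | thin_high
```

The 1.38-micron band is useful for cirrus because water vapor absorbs most surface-reflected radiation at that wavelength, so the signal that remains comes mainly from high-altitude particles.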
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudkevich, Aleksandr; Goldis, Evgeniy
This research conducted by the Newton Energy Group, LLC (NEG) is dedicated to the development of pCloud: a Cloud-based Power Market Simulation Environment. pCloud offers power industry stakeholders the capability to model electricity markets and is organized around the Software as a Service (SaaS) concept: a software application delivery model in which software is centrally hosted and provided to many users via the internet. During Phase I of this project NEG developed a prototype design for pCloud as a SaaS-based commercial service offering and a system architecture supporting that design, ensured the feasibility of the architecture's key elements, formed technological partnerships and negotiated commercial agreements with partners, conducted market research and other related activities, and secured funding for continued development of pCloud between the end of Phase I and the beginning of Phase II, if awarded. Based on the results of Phase I activities, NEG has established that the development of a cloud-based power market simulation environment on the Windows Azure platform is technologically feasible and can be accomplished within the budget and timeframe available through the Phase II SBIR award with additional external funding. NEG believes that pCloud has the potential to become a game-changing technology for the modeling and analysis of electricity markets. This potential is due to the following critical advantages of pCloud over its competition:
- Standardized access to advanced and proven power market simulators offered by third parties.
- Automated parallelization of simulations and dynamic provisioning of computing resources on the cloud. This combination of automation and scalability dramatically reduces turn-around time while offering the capability to increase the number of analyzed scenarios by a factor of 10, 100, or even 1000.
- Access to ready-to-use data and to cloud-based resources, leading to a reduction in software, hardware, and IT costs.
- A competitive pricing structure, which will make high-volume usage of simulation services affordable.
- Availability and affordability of high-quality power simulators, which presently only large corporate clients can afford. This will level the playing field in developing regional energy policies, determining prudent cost recovery mechanisms, and assuring just and reasonable rates to consumers.
- Users that presently lack the resources to maintain modeling capabilities internally will now be able to run simulations. This will invite more players into the industry, ultimately leading to more transparent and liquid power markets.
Generic-distributed framework for cloud services marketplace based on unified ontology.
Hasan, Samer; Valli Kumari, V
2017-11-01
Cloud computing is a model for delivering ubiquitous, on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet, while cloud consumers use general-purpose search engines (Google, Yahoo) to explore those descriptions and find an adequate service. Unfortunately, general-purpose search engines are not designed to provide a small and complete set of results, which makes the discovery process a big challenge. This paper presents a generic distributed framework for a cloud services marketplace to automate the cloud service discovery and selection process and remove the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms: a dominant and recessive attributes algorithm borrowed from gene science, and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents a unified cloud services ontology and models real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced the execution time by 20% and maintained the same values for all other parameters. On the other hand, the dominant and recessive attributes approach reduced the execution time by 57% but showed a lower value for recall.
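The abstract does not detail how descriptions are matched; as a rough illustration of description-based service discovery, the toy sketch below ranks advertised service descriptions against a consumer query by bag-of-words cosine similarity. All service names and descriptions are hypothetical, and the paper's actual semantic algorithm operates over its unified ontology rather than raw text.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two descriptions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def rank_services(query: str, services: dict[str, str]) -> list[tuple[str, float]]:
    """Rank advertised service descriptions by similarity to a consumer query."""
    scores = [(name, cosine_similarity(query, desc)) for name, desc in services.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical marketplace entries
services = {
    "svcA": "object storage with pay per use pricing",
    "svcB": "managed relational database service",
}
ranking = rank_services("cheap object storage pay per use", services)
# "svcA" ranks first for this query
```

A real marketplace would normalize terms against the ontology before scoring, which is what lets the semantic approach return a small, complete result set.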
Cloud-Based Automated Design and Additive Manufacturing: A Usage Data-Enabled Paradigm Shift
Lehmhus, Dirk; Wuest, Thorsten; Wellsandt, Stefan; Bosse, Stefan; Kaihara, Toshiya; Thoben, Klaus-Dieter; Busse, Matthias
2015-01-01
Integration of sensors into various kinds of products and machines provides access to in-depth usage information as basis for product optimization. Presently, this large potential for more user-friendly and efficient products is not being realized because (a) sensor integration and thus usage information is not available on a large scale and (b) product optimization requires considerable efforts in terms of manpower and adaptation of production equipment. However, with the advent of cloud-based services and highly flexible additive manufacturing techniques, these obstacles are currently crumbling away at rapid pace. The present study explores the state of the art in gathering and evaluating product usage and life cycle data, additive manufacturing and sensor integration, automated design and cloud-based services in manufacturing. By joining and extrapolating development trends in these areas, it delimits the foundations of a manufacturing concept that will allow continuous and economically viable product optimization on a general, user group or individual user level. This projection is checked against three different application scenarios, each of which stresses different aspects of the underlying holistic concept. The following discussion identifies critical issues and research needs by adopting the relevant stakeholder perspectives. PMID:26703606
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, posing great challenges for the automated extraction of urban objects in the fields of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Templet Web: the use of volunteer computing approach in PaaS-style cloud
NASA Astrophysics Data System (ADS)
Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil
2018-03-01
This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) automation of high-performance computing program development. The distinctive feature of the service is an approach mainly used in the field of volunteer computing, in which a person who has access to a computer system delegates their access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.
An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.
2015-01-01
The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
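Satellite breakup models commonly characterize fragment size by a characteristic length defined as the average of the fragment's extents along three orthogonal axes. The sketch below approximates that quantity from a 3D point cloud using PCA-aligned extents; it is an illustrative stand-in under that assumed definition, not the custom extraction algorithm developed for DebriSat.

```python
import numpy as np

def characteristic_length(points: np.ndarray) -> float:
    """Approximate characteristic length L_c = (X + Y + Z) / 3 of a fragment.

    X, Y, Z are taken as extents along the principal axes of the point
    cloud (via SVD).  This only approximates the formal construction,
    which measures the longest dimension first, then the longest
    perpendicular dimensions.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T                  # rotate into principal frame
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    return float(extents.sum() / 3.0)

# Synthetic box-like fragment, 4 x 2 x 1 units on a side
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.0], [4.0, 2.0, 1.0], size=(5000, 3))
L_c = characteristic_length(cloud)             # close to (4 + 2 + 1) / 3
```

In the DebriSat pipeline this computation would follow the space-carving step, which produces the point cloud the measurement operates on.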
An automated cirrus classification
NASA Astrophysics Data System (ADS)
Gryspeerdt, Edward; Quaas, Johannes; Goren, Tom; Klocke, Daniel; Brueck, Matthias
2018-05-01
Cirrus clouds play an important role in determining the radiation budget of the earth, but many of their properties remain uncertain, particularly their response to aerosol variations and to warming. Part of the reason for this uncertainty is the dependence of cirrus cloud properties on the cloud formation mechanism, which itself is strongly dependent on the local meteorological conditions. In this work, a classification system (Identification and Classification of Cirrus or IC-CIR) is introduced to identify cirrus clouds by the cloud formation mechanism. Using reanalysis and satellite data, cirrus clouds are separated into four main types: orographic, frontal, convective and synoptic. Through a comparison to convection-permitting model simulations and back-trajectory-based analysis, it is shown that these observation-based regimes can provide extra information on the cloud-scale updraughts and the frequency of occurrence of liquid-origin ice, with the convective regime having higher updraughts and a greater occurrence of liquid-origin ice compared to the synoptic regimes. Despite having different cloud formation mechanisms, the radiative properties of the regimes are not distinct, indicating that retrieved cloud properties alone are insufficient to completely describe them. This classification is designed to be easily implemented in GCMs, helping improve future model-observation comparisons and leading to improved parametrisations of cirrus cloud processes.
Applied Meteorology Unit (AMU) Quarterly Report. First Quarter FY-05
NASA Technical Reports Server (NTRS)
Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David
2005-01-01
This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2005 (October-December 2004). Tasks reviewed include: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Stable Low Cloud Evaluation, (5) Shuttle Ascent Camera Cloud Obstruction Forecast, (6) Range Standardization and Automation (RSA) and Legacy Wind Sensor Evaluation, (7) Advanced Regional Prediction System (ARPS) Optimization and Training Extension, and (8) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.
Phenomenology tools on cloud infrastructures using OpenStack
NASA Astrophysics Data System (ADS)
Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.
2013-04-01
We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. On this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
Improvements to GOES Twilight Cloud Detection over the ARM SGP
NASA Technical Reports Server (NTRS)
Yost, c. R.; Trepte, Q.; Khaiyer, M. M.; Palikonda, R.; Nguyen, L.
2007-01-01
The current ARM satellite cloud products derived from Geostationary Operational Environmental Satellite (GOES) data provide continuous coverage of many cloud properties over the ARM Southern Great Plains domain. However, discontinuities occur during daylight near the terminator, a time period referred to here as twilight. This poster presentation will demonstrate the improvements in cloud detection provided by the improved cloud mask algorithm as well as validation of retrieved cloud properties using surface observations from the Atmospheric Radiation Measurement Southern Great Plains (ARM SGP) site.
Rovira, Ericka; Parasuraman, Raja
2010-06-01
This study examined whether benefits of conflict probe automation would occur in a future air traffic scenario in which air traffic service providers (ATSPs) are not directly responsible for freely maneuvering aircraft but are controlling other nonequipped aircraft (mixed-equipage environment). The objective was to examine how the type of automation imperfection (miss vs. false alarm) affects ATSP performance and attention allocation. Research has shown that the type of automation imperfection leads to differential human performance costs. Twelve full-performance-level ATSPs participated in four 30-min scenarios. Dependent variables included conflict detection and resolution performance, eye movements, and subjective ratings of trust and self-confidence. ATSPs detected conflicts faster and more accurately with reliable automation, as compared with manual performance. When the conflict probe automation was unreliable, conflict detection performance declined with both miss (25% of conflicts detected) and false alarm automation (50% of conflicts detected). When the primary task of conflict detection was automated, even highly reliable yet imperfect automation (miss or false alarm) resulted in serious negative effects on operator performance. The further in advance that conflict probe automation predicts a conflict, the greater the uncertainty of prediction; thus, designers should provide users with feedback on the state of the automation or other tools that allow for inspection and analysis of the data underlying the conflict probe algorithm.
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the need to manually set the non-adaptive WMM filter parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
2018-01-01
An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform
Tom, Kwok F
US Army Research Laboratory, ARL-TR-8270, January 2018. Reporting period: 1 October 2016-30 September 2017.
Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.
2010-01-01
The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from the thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km x 24 km at nadir) and at the 5 km x 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approx. 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds.
We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
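The abstract gives the ingredients of the classification (IR cloud-top pressure, UV/visible optical centroid pressure, optical thickness) but not its decision logic. The toy rule below illustrates the underlying idea: a photon-pathlength centroid far below a relatively thin upper cloud suggests distinct layers. All thresholds here are invented for illustration and are not the published algorithm's values.

```python
def classify_cloud(p_top_hpa: float, p_centroid_hpa: float, tau: float,
                   dp_thresh: float = 200.0, tau_thresh: float = 20.0) -> str:
    """Toy classifier distinguishing multi-layer from vertically extended clouds.

    p_top_hpa:      cloud-top pressure from thermal IR (e.g., MODIS)
    p_centroid_hpa: optical centroid cloud pressure from UV/vis (e.g., OMI)
    tau:            retrieved cloud optical thickness
    Thresholds (dp_thresh, tau_thresh) are illustrative assumptions.
    """
    dp = p_centroid_hpa - p_top_hpa  # photon centroid depth below cloud top
    if dp > dp_thresh and tau < tau_thresh:
        # Centroid far below a relatively thin upper cloud: distinct layers
        return "multi-layer"
    if dp > dp_thresh:
        # Large separation but optically thick: one deep cloud
        return "vertically-extended"
    return "single-layer"

# Thin cirrus over a low cloud deck: large pressure separation, small tau
label = classify_cloud(p_top_hpa=250.0, p_centroid_hpa=700.0, tau=8.0)
```

A per-pixel pass of such a rule over co-located OMI and MODIS retrievals is what yields the monthly mean cloud-type fractions discussed above.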
Improved Thin Cirrus and Terminator Cloud Detection in CERES Cloud Mask
NASA Technical Reports Server (NTRS)
Trepte, Qing; Minnis, Patrick; Palikonda, Rabindra; Spangenberg, Doug; Haeffelin, Martial
2006-01-01
Thin cirrus clouds account for about 20-30% of the total cloud coverage and affect the global radiation budget by increasing the Earth's albedo and reducing infrared emissions. Thin cirrus, however, are often underestimated by traditional satellite cloud detection algorithms. This difficulty is caused by the lack of spectral contrast between optically thin cirrus and the surface in techniques that use visible (0.65 micron) and infrared (11 micron) channels. In the Clouds and the Earth's Radiant Energy System (CERES) Aqua Edition 1 (AEd1) and Terra Edition 3 (TEd3) Cloud Masks, thin cirrus detection is significantly improved over both land and ocean using a technique that combines MODIS high-resolution measurements from the 1.38 and 11 micron channels and brightness temperature differences (BTDs) of the 11-12, 8.5-11, and 3.7-11 micron channels. To account for humidity and view angle dependencies, empirical relationships were derived with observations from the 1.38 micron reflectance and the 11-12 and 8.5-11 micron BTDs using 70 granules of MODIS data in 2002 and 2003. Another challenge in global cloud detection algorithms occurs near the day/night terminator where information from the visible 0.65 micron channel and the estimated solar component of the 3.7 micron channel becomes less reliable. As a result, clouds are often underestimated or misidentified near the terminator over land and ocean. Comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer [AVHRR]) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60 N indicate significant amounts of missing clouds from CLAVR-x because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products (MOD06) and GLAS in the same region also show similar difficulties with MODIS cloud retrievals.
The consistent detection of clouds throughout the day is needed to provide reliable cloud and radiation products for CERES and other research efforts involving the modeling of clouds and their interaction with the radiation budget.
The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy.
Fleming, Alan D; Goatman, Keith A; Philip, Sam; Williams, Graeme J; Prescott, Gordon J; Scotland, Graham S; McNamee, Paul; Leese, Graham P; Wykes, William N; Sharp, Peter F; Olson, John A
2010-06-01
Automated grading has the potential to improve the efficiency of diabetic retinopathy screening services. While disease/no disease grading can be performed using only microaneurysm detection and image-quality assessment, automated recognition of other types of lesions may be advantageous. This study investigated whether inclusion of automated recognition of exudates and haemorrhages improves the detection of observable/referable diabetic retinopathy. Images from 1253 patients with observable/referable retinopathy and 6333 patients with non-referable retinopathy were obtained from three grading centres. All images were reference-graded, and automated disease/no disease assessments were made based on microaneurysm detection and combined microaneurysm, exudate and haemorrhage detection. Introduction of algorithms for exudates and haemorrhages resulted in a statistically significant increase in the sensitivity for detection of observable/referable retinopathy from 94.9% (95% CI 93.5 to 96.0) to 96.6% (95.4 to 97.4) without affecting manual grading workload. Automated detection of exudates and haemorrhages improved the detection of observable/referable retinopathy.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
Hättenschwiler, Nicole; Sterchi, Yanik; Mendes, Marcia; Schwaninger, Adrian
2018-10-01
Bomb attacks on civil aviation make detecting improvised explosive devices and explosive material in passenger baggage a major concern. In the last few years, explosive detection systems for cabin baggage screening (EDSCB) have become available. Although used by a number of airports, most countries have not yet implemented these systems on a wide scale. We investigated the benefits of EDSCB with two different levels of automation currently being discussed by regulators and airport operators: automation as a diagnostic aid with on-screen alarm resolution by the airport security officer (screener), or EDSCB with an automated decision by the machine. The two experiments reported here tested and compared both scenarios and a condition without automation as baseline. Participants were screeners at two international airports who differed in both years of work experience and familiarity with automation aids. Results showed that experienced screeners were good at detecting improvised explosive devices even without EDSCB; EDSCB increased only their detection of bare explosives. In contrast, screeners with less experience (tenure < 1 year) benefitted substantially from EDSCB in detecting both improvised explosive devices and bare explosives. A comparison of all three conditions showed that automated decision provided better human-machine detection performance than on-screen alarm resolution and no automation. This came at the cost of slightly higher false alarm rates on the human-machine system level, which would still be acceptable from an operational point of view. Results indicate that a wide-scale implementation of EDSCB would increase the detection of explosives in passenger bags, and that automated decision, rather than automation as a diagnostic aid with on-screen alarm resolution, should be considered.
NASA Technical Reports Server (NTRS)
Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne
2012-01-01
Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-micron infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and a 16% (81%) false-alarm ratio. Case study examples show that WV-IRW BTD > 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
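The WV-IRW BTD test lends itself to a compact sketch. Assuming two co-registered brightness-temperature arrays, the snippet below flags pixels where the water-vapor channel is warmer than the IR window (the BTD > 0 K criterion from the abstract); the IRW-texture method would add spatial-gradient tests on top, and the sample values are invented for illustration.

```python
import numpy as np

def detect_overshooting_tops(bt_wv: np.ndarray, bt_irw: np.ndarray,
                             btd_thresh: float = 0.0) -> np.ndarray:
    """Flag pixels where the water-vapor channel is warmer than the IR window.

    For cloud tops penetrating the tropopause, stratospheric water vapor
    re-emits at warmer temperatures than the overshooting cloud top, so
    BT_wv - BT_irw > 0 K is the overshooting-top signature used here.
    """
    btd = bt_wv - bt_irw
    return btd > btd_thresh

# Illustrative 2x2 scene of brightness temperatures (K)
bt_irw = np.array([[210.0, 195.0], [220.0, 215.0]])  # IR window channel
bt_wv = np.array([[205.0, 197.0], [212.0, 210.0]])   # water vapor channel
ot_mask = detect_overshooting_tops(bt_wv, bt_irw)
# only the 195 K IRW pixel (with a 197 K WV value) is flagged
```

The high false-alarm ratio reported for this test reflects exactly what the sketch shows: the positive-BTD region covers much of the cold anvil, not just the OT itself.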
Expedition 17 Automated Transfer Vehicle (ATV) Undocking
2008-09-05
ISS017-E-015496 (5 Sept. 2008) --- Backdropped by a blanket of clouds, European Space Agency's (ESA) "Jules Verne" Automated Transfer Vehicle (ATV) continues its relative separation from the International Space Station. The ATV undocked from the aft port of the Zvezda Service Module at 4:29 p.m. (CDT) on Sept. 5, 2008 and was placed in a parking orbit for three weeks, scheduled to be deorbited on Sept. 29 when lighting conditions are correct for an ESA imagery experiment of reentry.
Final Report Ra Power Management 1255 10-15-16 FINAL_Public
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iverson, Aaron
Ra Power Management (RPM) has developed a cloud based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.
Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.
Tran, Ngoc Tam L; Huang, Chun-Hsi
2017-05-01
We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool version 2.0, which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable with the expandable computing resources of AWS.
LIDAR Developments at Clermont-Ferrand—France for Atmospheric Observation
Fréville, Patrick; Montoux, Nadège; Baray, Jean-Luc; Chauvigné, Aurélien; Réveret, François; Hervo, Maxime; Dionisi, Davide; Payen, Guillaume; Sellegri, Karine
2015-01-01
We present a Rayleigh-Mie-Raman LIDAR system in operation at Clermont-Ferrand (France) since 2008. The system provides continuous vertical tropospheric profiles of aerosols, cirrus optical properties and water vapour mixing ratio. Located in proximity to the high altitude Puy de Dôme station, labelled as the GAW global station PUY since August 2014, it is a useful tool to describe the boundary layer dynamics and hence interpret in situ measurements. This LIDAR has been upgraded with specific hardware/software developments and laboratory calibrations in order to improve the quality of the profiles, calibrate the depolarization ratio, and increase the automation of operation. As a result, we provide a climatological water vapour profile analysis for the 2009–2013 period, showing an annual cycle with a winter minimum and a summer maximum, consistent with in-situ observations at the PUY station. An overview of a preliminary climatology of cirrus clouds frequency shows that in 2014, more than 30% of days present cirrus events. Finally, the backscatter coefficient profile observed on 27 September 2014 shows the capacity of the system to detect cirrus clouds at 13 km altitude, in presence of aerosols below the 5 km altitude. PMID:25643059
Fast-time Simulation of an Automated Conflict Detection and Resolution Concept
NASA Technical Reports Server (NTRS)
Windhorst, Robert; Erzberger, Heinz
2006-01-01
This paper investigates the effect on the National Airspace System of reducing air traffic controller workload by automating conflict detection and resolution. The Airspace Concept Evaluation System is used to perform simulations of the Cleveland Center with conventional and with automated conflict detection and resolution concepts. Results show that the automated conflict detection and resolution concept significantly decreases the growth of delay as traffic demand increases in en-route airspace.
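As a toy illustration of the conflict-detection step such simulations automate, the sketch below flags a loss of minimum separation between two sampled trajectories. The tracks, units, and the 5 nm lateral threshold are illustrative assumptions; this is not the ACES implementation.

```python
# Hypothetical sketch: a conflict exists if two aircraft are predicted to come
# within a minimum lateral separation distance at any common time step.
import math

def detect_conflict(track_a, track_b, min_sep_nm=5.0):
    """Tracks: lists of (x, y) positions in nautical miles at common time steps."""
    for (xa, ya), (xb, yb) in zip(track_a, track_b):
        if math.hypot(xa - xb, ya - yb) < min_sep_nm:
            return True
    return False

a = [(0, 0), (10, 0), (20, 0)]
b = [(20, 3), (10, 3), (0, 3)]
print(detect_conflict(a, b))  # True: only 3 nm lateral separation at step 1
```

A resolution step would then adjust one trajectory (heading, speed, or altitude) until the check passes.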
NASA Astrophysics Data System (ADS)
Okyay, U.; Glennie, C. L.; Khan, S.
2017-12-01
Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data has become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in geosciences. Several studies have developed various processing algorithms for extracting only planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used to map fracture distribution, has been investigated less frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for distribution maps are not straightforward and require user intervention and interpretation.
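A minimal sketch of the roughness idea described above: fit a best-fit plane to the points in one grid cell by orthogonal distance regression (here via SVD of the centered coordinates) and report the RMS orthogonal residual as roughness. The gridding, thresholding, and trace extraction of the actual study are omitted.

```python
import numpy as np

def cell_roughness(points):
    """points: (n, 3) array of x, y, z coordinates falling in one grid cell."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the orthogonal-distance best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal            # signed orthogonal distances
    return float(np.sqrt(np.mean(residuals ** 2)))  # RMS roughness

flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
print(cell_roughness(flat))  # ~0 for a perfectly planar cell
```

Cells crossing a fracture trace show high roughness, which is what makes the metric usable as a trace detector.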
NASA Astrophysics Data System (ADS)
Schwind, Michael
Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated from these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and then directly compared to the others. Before processing the sets of imagery, the software settings were analyzed and chosen in a manner that allowed for the most similar settings to be set across the three software types. This was done in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. This data served as ground truth in order to conduct an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds, but also in their accuracy.
This study allows users of SfM photogrammetry to have a better understanding of how different processing software packages compare and of the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.
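One common way to compare point clouds from different SfM packages (or against a LiDAR ground truth) is a cloud-to-cloud nearest-neighbor distance. The brute-force sketch below shows only the metric; the study's actual comparison workflow is not specified here, and real tools use spatial indexing (e.g., a KD-tree) at scale.

```python
import numpy as np

def cloud_to_cloud(a, b):
    """a: (n, 3), b: (m, 3) arrays; mean nearest-neighbor distance from a to b."""
    # Full pairwise distance matrix -- fine for small clouds only.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.0, 0.0, 0.1])   # the same cloud shifted 0.1 up
print(cloud_to_cloud(a, b))  # 0.1
```

Note the metric is asymmetric: cloud_to_cloud(a, b) and cloud_to_cloud(b, a) can differ when densities differ, which itself is diagnostic of how the packages sample surfaces.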
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
NASA Technical Reports Server (NTRS)
2002-01-01
The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum, the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38 µm (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38 µm channel detects electromagnetic radiation in the near-infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team
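A toy illustration (not the MODIS science algorithm) of why the 1.38 µm band reveals sub-visible cirrus: strong water-vapour absorption hides the surface at that wavelength, so almost any reflectance there implies high cloud, even when the visible band looks clear. The threshold values below are invented for illustration.

```python
def subvisible_cirrus(r_visible, r_1380nm, vis_thresh=0.3, cirrus_thresh=0.03):
    """Flag thin cirrus: 1.38 µm reflectance present while the visible scene
    looks clear (low visible reflectance)."""
    return r_1380nm > cirrus_thresh and r_visible < vis_thresh

print(subvisible_cirrus(r_visible=0.12, r_1380nm=0.06))   # True: thin cirrus
print(subvisible_cirrus(r_visible=0.12, r_1380nm=0.005))  # False: clear sky
```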
Tiny, Dusty, Galactic HI Clouds: The GALFA-HI Compact Cloud Catalog
NASA Astrophysics Data System (ADS)
Saul, Destry R.; Putman, M. E.; Peek, J. G.
2013-01-01
The recently published GALFA-HI Compact Cloud Catalog contains 2000 nearby neutral hydrogen clouds under 20' in angular size detected with a machine-vision algorithm in the Galactic Arecibo L-Band Feed Array HI survey (GALFA-HI). At a distance of 1 kpc, the compact clouds would typically be 1 solar mass and 1 pc in size. We observe that nearly all of the compact clouds that are classified as high velocity (> 90 km/s) are near previously identified high-velocity complexes. We separate the compact clouds into populations based on velocity, linewidth, and position. We have begun to search for evidence of dust in these clouds using IRIS and have detections in several populations.
DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.
2017-12-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to the success of most existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of proper thresholds is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We trained CloudCNN on a multi-GPU Nvidia Devbox cluster, and deployed the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
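A toy illustration of the encoder-decoder shape contract used by pixel-wise segmentation networks like CloudCNN: the encoder downsamples the image, the decoder upsamples back so the output mask matches the input resolution. This is pure numpy with no learned weights, only the data flow; CloudCNN's real layers, channels, and training are not reproduced here.

```python
import numpy as np

def encode(x):
    """2x2 average pooling: halves each spatial dimension."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(f):
    """Nearest-neighbor 2x upsampling: restores the input resolution."""
    return np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)

image = np.random.rand(64, 64)          # stand-in for one spectral band
features = encode(image)                # (32, 32) bottleneck representation
mask = decode(features) > 0.5           # per-pixel binary cloud/shadow mask
print(mask.shape)  # (64, 64): same spatial size as the input
```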
Optical Algorithm for Cloud Shadow Detection Over Water
2013-02-01
Journal article, 5 February 2013. Only fragments of the abstract survive extraction: cloud shadow detection over water is particularly important over humid tropical regions, since throughout the year about two-thirds of the Earth's surface is covered by clouds [1]; the surviving reference fragment cites K. V. Khlopenkov and A. P. Trishchenko, "SPARC: New cloud, snow, cloud shadow detection scheme for historical 1-km AVHRR data over Canada," J. Atmos.
NASA Astrophysics Data System (ADS)
Trepte, Q.; Minnis, P.; Palikonda, R.; Yost, C. R.; Rodier, S. D.; Trepte, C. R.; McGill, M. J.
2016-12-01
Geostationary satellites provide continuous cloud and meteorological observations important for weather forecasting and for understanding climate processes. The Himawari-8 satellite represents a new generation of measurement capabilities with significantly improved resolution and enhanced spectral information. The satellite was launched in October 2014 by the Japan Meteorological Agency and is centered at 140° E to provide coverage over eastern Asia and the western Pacific region. A cloud detection algorithm was developed as part of the CERES Cloud Mask algorithm using the Advanced Himawari Imager (AHI), a 16-channel multispectral imager. The algorithm was originally designed for use with Meteosat Second Generation (MSG) data and has been adapted for Himawari-8 AHI measurements. This paper will describe the improvements in the Himawari cloud mask, including daytime ocean low cloud and aerosol discrimination, nighttime thin cirrus detection, and Australian desert and coastal cloud detection. Statistics from CERES Himawari cloud mask results matched with CALIPSO lidar data and with new observations from the CATS lidar will also be presented. A feature of the CATS instrument on board the International Space Station is that it provides information at different solar viewing times to examine the diurnal variation of clouds, and this provides an ability to evaluate the performance of the cloud mask for different sun angles.
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
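A hedged sketch of one piece of the pipeline above, the trace segment connection step: two trace segments are candidates for joining when the angle between their direction vectors is below a threshold (the paper found 50° to 70° optimal for this step). The endpoint-distance check and the other three steps are omitted, and the function names are illustrative.

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 2D direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

def connectable(dir_u, dir_v, max_angle=60.0):
    """Candidate connection when segment directions are nearly aligned."""
    return angle_deg(dir_u, dir_v) <= max_angle

print(connectable((1, 0), (1, 0.2)))  # True: nearly collinear directions
print(connectable((1, 0), (0, 1)))    # False: 90 degrees apart
```

In the full method this angle test is combined with a distance threshold (at least 15 times the mean mesh element size, per the sensitivity analysis) before two segments are merged into one trace.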
Remote Patient Management in Automated Peritoneal Dialysis: A Promising New Tool.
Drepper, Valérie Jotterand; Martin, Pierre-Yves; Chopard, Catherine Stoermann; Sloand, James A
2018-01-01
Remote patient management (RPM) has the potential to help clinicians detect issues early, allowing intervention prior to development of more significant problems. A 23-year-old end-stage kidney disease patient required urgent start of renal replacement therapy. A newly available automated peritoneal dialysis (APD) RPM system with cloud-based connectivity was implemented in her care. Pre-defined RPM threshold parameters were set to identify clinically relevant issues. Red-flag dashboard alerts heralded prolonged drain times, leading to clinical evaluation, subsequent diagnosis of catheter displacement, and surgical repositioning, although it took several days for newly RPM-exposed staff to recognize this issue. After PD catheter repositioning, drain times were again normal, as indicated by disappearance of flag alerts and unremarkable cycle volume profiles. Identification of < 90% adherence to prescribed PD therapy was then documented with the RPM system, alerting the clinical staff to address this important issue given its association with significant negative clinical outcomes. Healthcare providers face a "learning curve" to effect optimal utilization of the RPM tool. Larger scale observational studies will determine the impact of RPM on APD technique survival and resource utilization. Copyright © 2018 International Society for Peritoneal Dialysis.
A holistic image segmentation framework for cloud detection and extraction
NASA Astrophysics Data System (ADS)
Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe
2013-05-01
Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering game theoretic image segmentation based approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. A demonstration of our cloud detection framework on a video clip provides supportive results.
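A minimal numpy sketch of one of the three dynamics named above, replicator dynamics, run on a toy pixel-similarity payoff matrix: mass concentrates on a mutually similar subset of pixels, which is the game-theoretic notion of a cluster. The matrix values are invented, and the fictitious play and InImDyn variants are not shown.

```python
import numpy as np

def replicator(A, steps=200):
    """Discrete replicator dynamics on payoff (similarity) matrix A."""
    x = np.full(len(A), 1.0 / len(A))    # start at the uniform mixed strategy
    for _ in range(steps):
        x = x * (A @ x) / (x @ A @ x)    # replicator update on the simplex
    return x

# Two mutually similar "cloud" pixels and one dissimilar "background" pixel.
A = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
x = replicator(A)
print(x.round(2))  # support concentrates on the first two (cluster) pixels
```

The surviving support of x identifies one cluster; in the full framework, the cluster is peeled off and the dynamics re-run on the remaining pixels.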
NASA Astrophysics Data System (ADS)
Khazaeli, S.; Ravandi, A. G.; Banerji, S.; Bagchi, A.
2016-04-01
Recently, data-driven models for Structural Health Monitoring (SHM) have been of great interest among many researchers. In data-driven models, the sensed data are processed to determine the structural performance and evaluate the damages of an instrumented structure without necessitating the mathematical modeling of the structure. A framework of data-driven models for online assessment of the condition of a structure has been developed here. The developed framework is intended for automated evaluation of the monitoring data and structural performance by the Internet technology and resources. The main challenges in developing such framework include: (a) utilizing the sensor measurements to estimate and localize the induced damage in a structure by means of signal processing and data mining techniques, and (b) optimizing the computing and storage resources with the aid of cloud services. The main focus in this paper is to demonstrate the efficiency of the proposed framework for real-time damage detection of a multi-story shear-building structure in two damage scenarios (change in mass and stiffness) in various locations. Several features are extracted from the sensed data by signal processing techniques and statistical methods. Machine learning algorithms are deployed to select damage-sensitive features as well as classifying the data to trace the anomaly in the response of the structure. Here, the cloud computing resources from Amazon Web Services (AWS) have been used to implement the proposed framework.
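A small sketch of one damage-sensitive feature of the kind described above: the RMS of an acceleration signal compared against a baseline to flag anomalous response energy. The signals, the 2x band, and the feature choice are illustrative assumptions; the paper's framework uses richer features and learned classifiers on cloud resources.

```python
import math

def rms(signal):
    """Root-mean-square amplitude of a sampled sensor signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

baseline = rms([0.02, -0.01, 0.03, -0.02, 0.01])   # healthy-state response
damaged = rms([0.08, -0.07, 0.09, -0.06, 0.05])    # altered-stiffness response
print(damaged > 2.0 * baseline)  # True: response energy shifted noticeably
```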
N, Sadhasivam; R, Balamurugan; M, Pandi
2018-01-27
Objective: Epigenetic modifications involving DNA methylation and histone status are responsible for the stable maintenance of cellular phenotypes. Abnormalities may be causally involved in cancer development and therefore could have diagnostic potential. The field of epigenomics refers to all epigenetic modifications implicated in the control of gene expression, with a focus on better understanding of human biology in both normal and pathological states. An epigenomics scientific workflow is essentially a data processing pipeline to automate the execution of various genome sequencing operations or tasks. The cloud is a popular computing platform for deploying large-scale epigenomics scientific workflows. Its dynamic environment provides various resources to scientific users on a pay-per-use billing model. Scheduling epigenomics scientific workflow tasks is a complicated problem on a cloud platform. Here we focused on the application of an improved particle swarm optimization (IPSO) algorithm for this purpose. Methods: The IPSO algorithm was applied to find suitable resources and allocate epigenomics tasks so that the total cost was minimized for detection of epigenetic abnormalities of potential application in cancer diagnosis. Results: IPSO based task-to-resource mapping reduced total cost by 6.83 percent as compared to the traditional PSO algorithm. Conclusion: The results for various cancer diagnosis tasks showed that IPSO based task-to-resource mapping can achieve better costs when compared to PSO based mapping for epigenomics scientific application workflows.
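A toy sketch of PSO-style task-to-resource mapping, not the paper's IPSO: each particle encodes, per task, a continuous value that rounds to a resource index, and the swarm minimizes total execution cost. The cost matrix and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = np.array([[4.0, 2.0, 9.0],    # cost[task, resource]
                 [3.0, 7.0, 1.0],
                 [8.0, 5.0, 2.0]])
n_tasks, n_res = cost.shape

def total_cost(pos):
    """Round a particle's position to a task-to-resource assignment and price it."""
    assign = np.clip(pos.round().astype(int), 0, n_res - 1)
    return cost[np.arange(n_tasks), assign].sum()

pos = rng.uniform(0, n_res - 1, (20, n_tasks))   # 20 particles
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([total_cost(p) for p in pos])
for _ in range(100):
    gbest = pbest[pbest_cost.argmin()]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_res - 1)
    costs = np.array([total_cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]

print(pbest_cost.min())  # best total cost found (the optimum here is 2+1+2 = 5)
```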
e-Collaboration for Earth observation (E-CEO): the Cloud4SAR interferometry data challenge
NASA Astrophysics Data System (ADS)
Casu, Francesco; Manunta, Michele; Boissier, Enguerran; Brito, Fabrice; Aas, Christina; Lavender, Samantha; Ribeiro, Rita; Farres, Jordi
2014-05-01
The e-Collaboration for Earth Observation (E-CEO) project addresses the technologies and architectures needed to provide a collaborative research Platform for automating data mining and processing, and information extraction experiments. The Platform serves for the implementation of Data Challenge Contests focusing on Information Extraction for Earth Observation (EO) applications. The possibility of implementing multiple processors within a Common Software Environment facilitates the validation, evaluation and transparent peer comparison among different methodologies, which is one of the main requirements raised by scientists who develop algorithms in the EO field. In this scenario, we set up a Data Challenge, referred to as Cloud4SAR (http://wiki.services.eoportal.org/tiki-index.php?page=ECEO), to foster the deployment of Interferometric SAR (InSAR) processing chains within a Cloud Computing platform. While a large variety of InSAR processing software tools are available, they require a high level of expertise and complex user interaction to be run effectively. Computing a co-seismic interferogram or a 20-year deformation time series over a volcanic area is not an easy task to perform in a fully unsupervised way and/or in a very short time (hours or less). Benefiting from ESA's E-CEO platform, participants can optimise algorithms in a Virtual Sandbox environment without being expert programmers, and compute results on high-performing Cloud platforms. Cloud4SAR requires solving a relatively easy InSAR problem while trying to maximize the exploitation of the processing capabilities provided by a Cloud Computing infrastructure. The proposed challenge offers two different frameworks, each dedicated to participants with different skills, identified as Beginners and Experts.
For both, the contest centres mainly on the degree of automation of the deployed algorithms, no matter which algorithm is used, as well as on the capability of taking effective advantage of a parallel computing environment.
Automated Detection of Sepsis Using Electronic Medical Record Data: A Systematic Review.
Despins, Laurel A
Severe sepsis and septic shock are global issues with high mortality rates. Early recognition and intervention are essential to optimize patient outcomes. Automated detection using electronic medical record (EMR) data can assist this process. This review describes automated sepsis detection using EMR data. PubMed was searched for publications between January 1, 2005 and January 31, 2015. Thirteen studies met the study criteria: they described an automated detection approach with the potential to detect sepsis or sepsis-related deterioration in real or near-real time; focused on emergency department and hospitalized neonatal, pediatric, or adult patients; and provided performance measures or results indicating the impact of automated sepsis detection. Detection algorithms incorporated systemic inflammatory response and organ dysfunction criteria. Systems in nine studies generated study or care team alerts. Care team alerts did not consistently lead to earlier interventions. Earlier interventions did not consistently translate to improved patient outcomes. Performance measures were inconsistent. Automated sepsis detection is potentially a means to enable early sepsis-related therapy, but current performance variability highlights the need for further research.
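An illustrative rule sketch, not any reviewed study's algorithm: the systemic inflammatory response syndrome (SIRS) criteria mentioned above are the common core of such detectors, and an alert fires when at least two criteria are met. Thresholds below follow the standard SIRS definition; the alerting logic around them is an assumption.

```python
def sirs_count(temp_c, heart_rate, resp_rate, wbc_k):
    """Count met SIRS criteria from vitals and white-cell count (x1000/µL)."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,   # fever or hypothermia
        heart_rate > 90,                  # tachycardia
        resp_rate > 20,                   # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,      # leukocytosis or leukopenia
    ])

def sirs_alert(**vitals):
    """Fire an alert when at least two SIRS criteria are met."""
    return sirs_count(**vitals) >= 2

print(sirs_alert(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_k=9.0))  # True
print(sirs_alert(temp_c=37.0, heart_rate=80, resp_rate=16, wbc_k=7.5))   # False
```

Real systems add organ-dysfunction criteria and suspicion-of-infection context, precisely because SIRS alone generates many false alerts.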
Analysis of cloud top height and cloud coverage from satellites using the O2 A and B bands
NASA Technical Reports Server (NTRS)
Kuze, Akihiko; Chance, Kelly V.
1994-01-01
Cloud height and cloud coverage detection are important for total ozone retrieval using ultraviolet and visible scattered light. Use of the O2 A and B bands, around 761 and 687 nm, by a satellite-borne instrument of moderately high spectral resolution viewing in the nadir makes it possible to detect cloud top height and related parameters, including fractional coverage. The measured values of a satellite-borne spectrometer are convolutions of the instrument slit function and the atmospheric transmittance between cloud top and satellite. Studies here determine, to high accuracy using FASCODE 3, the optical depth between a satellite orbit and the Earth's surface or cloud top. Cloud top height and a cloud coverage parameter are determined by least squares fitting to calculated radiance ratios in the oxygen bands. A grid search method is used to search the parameter space of cloud top height and the coverage parameter to minimize an appropriate sum of squares of deviations. For this search, the nonlinearity of the atmospheric transmittance (i.e., leverage based on varying amounts of saturation in the absorption spectrum) is important for distinguishing between cloud top height and fractional coverage. Using the above-mentioned method, an operational cloud detection algorithm that requires minimal computation time can be implemented.
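A hedged toy version of the grid search described above: given a simple forward model for the band-averaged transmittance as a function of cloud top height h and fractional coverage f, scan the (h, f) grid for the least-squares fit to "measured" values in two bands. The exponential model and the k values standing in for the O2 A- and B-band absorption are invented; the real method uses FASCODE 3 transmittances. Note that two bands with different absorption strengths are needed, exactly the nonlinearity leverage the abstract mentions, because a single band cannot separate h from f.

```python
import numpy as np

def model(h_km, f, k):
    """Toy band-averaged transmittance: the cloudy fraction reflects from
    height h (shorter O2 path above it), the clear fraction from the surface."""
    return f * np.exp(-k * (15.0 - h_km)) + (1.0 - f) * np.exp(-15.0 * k)

k_a, k_b = 0.08, 0.03            # stand-ins for O2 A- and B-band absorption
truth_h, truth_f = 8.0, 0.6
meas_a = model(truth_h, truth_f, k_a)
meas_b = model(truth_h, truth_f, k_b)

heights = np.linspace(0.0, 15.0, 151)
fracs = np.linspace(0.0, 1.0, 101)
hh, ff = np.meshgrid(heights, fracs, indexing="ij")
sse = (model(hh, ff, k_a) - meas_a) ** 2 + (model(hh, ff, k_b) - meas_b) ** 2
i, j = np.unravel_index(sse.argmin(), sse.shape)
print(heights[i], fracs[j])  # recovers the true (8.0, 0.6) on this toy problem
```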
The Cloud Detection and UV Monitoring Experiment (CLUE)
NASA Technical Reports Server (NTRS)
Barbier, L.; Loh, E.; Sokolsky, P.; Streitmatter, R.
2004-01-01
We propose a large-area, low-power instrument to perform CLoud detection and Ultraviolet monitoring, CLUE. CLUE will combine the UV detection capabilities of the NIGHTGLOW payload with an array of infrared sensors to perform cloud slicing measurements. Missions such as EUSO and OWL, which seek to measure UHE cosmic rays at 10^20 eV, use the atmosphere as a fluorescence detector. CLUE will provide several important correlated measurements for these missions, including: monitoring the atmospheric UV emissions from 330-400 nm, determining the ambient cloud cover during those UV measurements (with active LIDAR), measuring the optical depth of the clouds (with an array of narrow band-pass IR sensors), and correlating LIDAR and IR cloud cover measurements. This talk will describe the instrument as we envision it.
Detection of hydrogen sulfide above the clouds in Uranus's atmosphere
NASA Astrophysics Data System (ADS)
Irwin, Patrick G. J.; Toledo, Daniel; Garland, Ryan; Teanby, Nicholas A.; Fletcher, Leigh N.; Orton, Glenn A.; Bézard, Bruno
2018-04-01
Visible-to-near-infrared observations indicate that the cloud top of the main cloud deck on Uranus lies at a pressure level of between 1.2 bar and 3 bar. However, its composition has never been unambiguously identified, although it is widely assumed to be composed primarily of either ammonia or hydrogen sulfide (H2S) ice. Here, we present evidence of a clear detection of gaseous H2S above this cloud deck in the wavelength region 1.57-1.59 μm, with a mole fraction of 0.4-0.8 ppm at the cloud top. Its detection constrains the deep bulk sulfur/nitrogen abundance to exceed unity (>4.4-5.0 times the solar value) in Uranus's bulk atmosphere, and places a lower limit on the mole fraction of H2S below the observed cloud of (1.0-2.5) × 10^-5. The detection of gaseous H2S at these pressure levels adds to the weight of evidence that the principal constituent of the 1.2-3-bar cloud is likely to be H2S ice.
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter
2013-04-01
The ESA ExoMars 2018 rover is planned to perform autonomous science target selection (ASTS) using the approaches described in [1]. However, the approaches shown to date have focused on coarse features rather than the identification of specific geomorphological units. These higher-level "geo-objects" can later be employed to perform intelligent reasoning or machine learning. In this work, we show the next stage in ASTS through examples displaying the identification of bedding planes (not just linear features in rock-face images) and the identification and discrimination of rocks in a rock-strewn landscape (not just rock detection). We initially detect the layers and rocks in 2D processing via morphological gradient detection [1] and graph-cuts-based segmentation [2], respectively. Taking this further requires the retrieval of 3D point clouds and the combined processing of point clouds and images for reasoning about the scene. An example is the differentiation of rocks in rover images, which depends on knowledge of the range and range-order of features. We show demonstrations of these "geo-objects" using MER and MSL data (released through the PDS) as well as data collected within the EU-PRoViScout project (http://proviscout.eu). An initial assessment of the automated "geo-objects" will be performed using the open-source StereoViewer developed within the EU-PRoViSG project (http://provisg.eu), which is released on SourceForge [3]. In future, additional 3D measurement tools will be developed within the EU-FP7 PRoViDE2 project, which started on 1.1.13. References: [1] M. Woods, A. Shaw, D. Barnes, D. Price, D. Long, D. Pullan (2009), "Autonomous Science for an ExoMars Rover-Like Mission", Journal of Field Robotics, Special Issue on Space Robotics, Part II, Volume 26, Issue 4, pages 358-390. [2] J. Shi, J. Malik (2000), "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22. [3] D. Shin and J.-P. Muller (2009), "Stereo workstation for Mars rover image analysis", in EPSC (Europlanets), Potsdam, Germany, EPSC2009-390.
APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels
NASA Astrophysics Data System (ADS)
Klüser, L.; Killius, N.; Gesell, G.
2015-04-01
The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still form the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. While building upon the physical principles that served the original APOLLO well, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is not performed as a binary yes/no decision based on these physical principles, but is expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from clear-confident to cloud-confident depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-realtime use and for application to large amounts of historical satellite data. Thus the radiative transfer solution is approximated by the same two-stream approach that was used in the original APOLLO. This allows the algorithm to be robust enough to be applied to a wide range of sensors without sensor-specific tuning. Moreover, it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy on which it is based. Furthermore, a couple of example results from NOAA-18 are presented.
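A hedged sketch of the probabilistic idea, not APOLLO_NG's actual formulas: instead of a binary yes/no per spectral test, each test contributes a cloud probability, and the combined probability can then be thresholded clear-confidently or cloud-confidently depending on the purpose. The independence assumption and the example probabilities are invented for illustration.

```python
def combined_cloud_probability(test_probs):
    """Probability that at least one (assumed independent) test indicates cloud."""
    p_clear = 1.0
    for p in test_probs:
        p_clear *= (1.0 - p)
    return 1.0 - p_clear

probs = [0.30, 0.20, 0.50]   # e.g., reflectance and brightness-temperature tests
p_cloud = combined_cloud_probability(probs)
print(round(p_cloud, 2))     # 0.72

# The same probability field supports both use cases:
mask_cloud_confident = p_cloud > 0.9    # keep only near-certain cloud
mask_clear_confident = p_cloud < 0.1    # keep only near-certain clear sky
```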
Automated detection of fundus photographic red lesions in diabetic retinopathy.
Larsen, Michael; Godt, Jannik; Larsen, Nicolai; Lund-Andersen, Henrik; Sjølie, Anne Katrin; Agardh, Elisabet; Kalm, Helle; Grunkin, Michael; Owens, David R
2003-02-01
To compare a fundus image-analysis algorithm for automated detection of hemorrhages and microaneurysms with visual detection of retinopathy in patients with diabetes. Four hundred fundus photographs (35-mm color transparencies) were obtained in 200 eyes of 100 patients with diabetes who were randomly selected from the Welsh Community Diabetic Retinopathy Study. A gold standard reference was defined by classifying each patient as having or not having diabetic retinopathy based on overall visual grading of the digitized transparencies. A single-lesion visual grading was made independently, comprising meticulous outlining of all single lesions in all photographs and used to develop the automated red lesion detection system. A comparison of visual and automated single-lesion detection in replicating the overall visual grading was then performed. Automated red lesion detection demonstrated a specificity of 71.4% and a resulting sensitivity of 96.7% in detecting diabetic retinopathy when applied at a tentative threshold setting for use in diabetic retinopathy screening. The accuracy of 79% could be raised to 85% by adjustment of a single user-supplied parameter determining the balance between the screening priorities, for which a considerable range of options was demonstrated by the receiver-operating characteristic (area under the curve 90.3%). The agreement of automated lesion detection with overall visual grading (0.659) was comparable to the mean agreement of six ophthalmologists (0.648). Detection of diabetic retinopathy by automated detection of single fundus lesions can be achieved with a performance comparable to that of experienced ophthalmologists. The results warrant further investigation of automated fundus image analysis as a tool for diabetic retinopathy screening.
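The single user-supplied parameter mentioned above amounts to an operating point on the receiver-operating characteristic: moving a threshold on a per-patient lesion score trades sensitivity against specificity. A minimal sketch of that tradeoff (not the study's software; the scores, labels, and thresholds are invented for illustration):

```python
# Illustrative sketch: how one threshold on a per-patient lesion score trades
# sensitivity against specificity in a screening setting. Data are hypothetical.

def confusion_at_threshold(scores, labels, threshold):
    """Classify score >= threshold as 'retinopathy'; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Hypothetical per-patient lesion scores and ground-truth labels (1 = retinopathy).
scores = [0, 4, 3, 7, 0, 2, 9, 5]
labels = [0, 0, 1, 1, 0, 1, 1, 0]
print(confusion_at_threshold(scores, labels, 3))  # lower threshold: sensitivity-first
print(confusion_at_threshold(scores, labels, 6))  # higher threshold: specificity-first
```

Lowering the threshold favors catching disease (screening priority), raising it favors fewer false referrals; the study's 71.4%/96.7% operating point is one such choice along the curve.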
An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques
2018-01-09
ARL-TR-8272 ● JAN 2018. US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.
NASA Astrophysics Data System (ADS)
Yang, Xin; Zhong, Shiquan; Sun, Han; Tan, Zongkun; Li, Zheng; Ding, Meihua
Based on an analysis of its physical characteristics and its importance in agricultural production and the national economy, cloud is a very important climatic resource, like temperature, precipitation and solar radiation, and it plays a very important role in agricultural climate division. This paper analyzes methods of cloud detection based on MODIS data in China and abroad. The results suggest that the Quanjun He method is suitable for detecting cloud in Guangxi. A state chart of cloud cover in Guangxi is imaged using the Quanjun He method. We worked out an approach for calculating the cloud-covered rate using frequency spectrum analysis, from which the cloud-covered rate for Guangxi is obtained. Taking Rongxian County, Guangxi as an example, this article analyzes a preliminary application of cloud-covered rate to the distribution of Rong Shaddock pomelo. Analysis results indicate that cloud-covered rate is closely related to the quality of Rong Shaddock pomelo.
Evaluation and Applications of Cloud Climatologies from CALIOP
NASA Technical Reports Server (NTRS)
Winker, David; Getzewitch, Brian; Vaughan, Mark
2008-01-01
Clouds have a major impact on the Earth radiation budget and differences in the representation of clouds in global climate models are responsible for much of the spread in predicted climate sensitivity. Existing cloud climatologies, against which these models can be tested, have many limitations. The CALIOP lidar, carried on the CALIPSO satellite, has now acquired over two years of nearly continuous cloud and aerosol observations. This dataset provides an improved basis for the characterization of 3-D global cloudiness. Global average cloud cover measured by CALIOP is about 75%, significantly higher than for existing cloud climatologies due to the sensitivity of CALIOP to optically thin cloud. Day/night biases in cloud detection appear to be small. This presentation will discuss detection sensitivity and other issues associated with producing a cloud climatology, characteristics of cloud cover statistics derived from CALIOP data, and applications of those statistics.
Introducing two Random Forest based methods for cloud detection in remote sensing images
NASA Astrophysics Data System (ADS)
Ghasemian, Nafiseh; Akhoondzadeh, Mehdi
2018-07-01
Cloud detection is a necessary phase in satellite image processing to retrieve atmospheric and lithospheric parameters. Some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and the textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), which incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF), for highly accurate cloud detection on remote sensing images. FLFRF first fuses the visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice and background, or as thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and feeds each set of features into the RF model. Then, it holds the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods.
The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K-nearest neighbor show overestimation in predicting cloud and snow/ice pixels, our Random Forest (RF) based models achieve higher cloud and snow/ice kappa values on MODIS images and higher thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images, while the existing cloud detection algorithm, Fmask, cannot discriminate them. Compared to the state-of-the-art methods, our algorithms have acquired higher average cloud and snow/ice kappa values for different spatial resolutions.
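The decision-level fusion step of DLFRF can be pictured with a minimal sketch: three independently trained per-band classifiers (visible, IR, thermal) each vote a label per pixel, and the fused label is the majority vote. The per-band label lists below are invented stand-ins for the RF outputs; in the paper each voter is a Random Forest:

```python
# Sketch of decision-level fusion by majority vote (not the authors' code).
from collections import Counter

def majority_vote_fusion(votes_per_band):
    """votes_per_band: list of per-band label lists, one label per pixel.
    Returns the per-pixel majority label across bands."""
    fused = []
    for pixel_votes in zip(*votes_per_band):
        fused.append(Counter(pixel_votes).most_common(1)[0][0])
    return fused

# Hypothetical per-pixel labels from three per-band classifiers.
visible = ["cloud", "snow", "background", "cloud"]
ir      = ["cloud", "snow", "cloud",      "background"]
thermal = ["snow",  "snow", "background", "cloud"]
print(majority_vote_fusion([visible, ir, thermal]))
# → ['cloud', 'snow', 'background', 'cloud']
```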
2011-02-24
ISS026-E-029435 (24 Feb. 2011) --- Backdropped by a cloud-covered part of Earth, the European Space Agency's "Johannes Kepler" Automated Transfer Vehicle-2 (ATV-2) approaches the International Space Station. Docking of the two spacecraft occurred at 10:59 a.m. (EST) on Feb. 24, 2011.
Detecting Distributed SQL Injection Attacks in a Eucalyptus Cloud Environment
NASA Technical Reports Server (NTRS)
Kebert, Alan; Barnejee, Bikramjit; Solano, Juan; Solano, Wanda
2013-01-01
The cloud computing environment offers malicious users the ability to spawn multiple instances of cloud nodes that are similar to virtual machines, except that they can have separate external IP addresses. In this paper we demonstrate how this ability can be exploited by an attacker to distribute his/her attack, in particular SQL injection attacks, in such a way that an intrusion detection system (IDS) could fail to identify this attack. To demonstrate this, we set up a small private cloud, established a vulnerable website in one instance, and placed an IDS within the cloud to monitor the network traffic. We found that an attacker could quite easily defeat the IDS by periodically altering its IP address. To detect such an attacker, we propose to use multi-agent plan recognition, where the multiple source IPs are considered as different agents who are mounting a collaborative attack. We show that such a formulation of this problem yields a more sophisticated approach to detecting SQL injection attacks within a cloud computing environment.
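The evasion and its remedy can be illustrated with a toy detector, which is a heavily simplified stand-in, not the paper's multi-agent plan-recognition system: a per-IP signature counter misses an attack spread across rotating addresses, while treating all sources in a window as one coordinated agent catches it. The payload patterns and thresholds are hypothetical:

```python
# Toy illustration of distributed SQL injection evading per-IP detection.
import re

SQLI_FRAGMENTS = [r"union\s+select", r"or\s+1\s*=\s*1", r"drop\s+table"]

def suspicious(payload):
    return any(re.search(p, payload, re.IGNORECASE) for p in SQLI_FRAGMENTS)

def per_ip_alert(events, threshold=3):
    """Alert only when a single IP exceeds the threshold of suspicious requests."""
    counts = {}
    for ip, payload in events:
        if suspicious(payload):
            counts[ip] = counts.get(ip, 0) + 1
    return {ip for ip, n in counts.items() if n >= threshold}

def aggregated_alert(events, threshold=3):
    """Treat all source IPs in the window as one coordinated agent."""
    total = sum(1 for _, payload in events if suspicious(payload))
    return total >= threshold

# The attacker rotates IPs so no single address crosses the per-IP threshold.
events = [("10.0.0.1", "id=1 UNION SELECT password FROM users"),
          ("10.0.0.2", "name=x' OR 1=1 --"),
          ("10.0.0.3", "q=1; DROP TABLE logs"),
          ("10.0.0.9", "page=about")]
print(per_ip_alert(events))      # empty set: the distributed attack slips through
print(aggregated_alert(events))  # True: aggregating across agents catches it
```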
Comparison of the MODIS Collection 5 Multilayer Cloud Detection Product with CALIPSO
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Gala; King, Michael D.; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.
2010-01-01
CALIPSO, launched in June 2006, provides global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort for the MODIS Atmosphere Team, referred to as the Collection 5 stream, includes a research-level multilayer cloud detection algorithm that uses both thermodynamic phase information, derived from a combination of solar and thermal emission bands to discriminate layers of different phases, and true layer separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at 1 km resolution. Using pixel-level collocations of MODIS Aqua and CALIOP, we investigate the global performance of the multilayer cloud detection algorithm (and thermodynamic phase).
Satellite Remote Sensing Tools at the Alaska Volcano Observatory
NASA Astrophysics Data System (ADS)
Dehn, J.; Dean, K.; Webley, P.; Bailey, J.; Valcic, L.
2008-12-01
Volcanoes rarely conform to schedules or convenience. This is even more the case for remote volcanoes that still have impact on local infrastructure and air traffic. With well over 100 eruptions in the North Pacific over 20 years, the Alaska Volcano Observatory has developed a series of web-based tools to rapidly assess satellite imagery of volcanic eruptions from virtually anywhere. These range from automated alarm systems that detect thermal anomalies and ash plumes at volcanoes, to efficient image processing that can be done at a moment's notice from any computer linked to the internet. The thermal anomaly detection algorithm looks for warm pixels several standard deviations above the background, as well as pixels which show stronger mid-infrared (3-5 microns) signals relative to available thermal channels (10-12 microns). The ash algorithm primarily uses the brightness temperature difference of two thermal bands, but also considers cloud shape and noise elimination. The automated algorithms are far from perfect, with 60-70% success rates, but improve with each eruption. All of the data is available to the community online in a variety of forms which provide rudimentary processing. The website, avo-animate.images.alaska.edu, is designed for use by AVO's partners and "customers" to provide quick synoptic views of volcanic activity. These tools have also been essential in AVO's efforts in recent years and provide a model for rapid response to eruptions at distant volcanoes anywhere in the world.
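The two thermal-anomaly tests described above can be sketched in a few lines: flag pixels whose mid-IR brightness temperature sits several standard deviations above the scene background and which also show a strong mid-IR excess over the thermal channel. The z-score and difference thresholds below are illustrative, not AVO's operational values:

```python
# Sketch of a thermal anomaly test on brightness temperatures (K); thresholds
# are illustrative assumptions, not the operational AVO values.
import statistics

def thermal_anomalies(mir, tir, z_thresh=3.0, diff_thresh=10.0):
    """Flag pixel indices that are hot vs. background AND show a mid-IR excess.
    mir: ~3.9 µm brightness temperatures; tir: ~11 µm brightness temperatures."""
    mean = statistics.mean(mir)
    std = statistics.pstdev(mir)
    flagged = []
    for i, (t_mir, t_tir) in enumerate(zip(mir, tir)):
        hot_vs_background = std > 0 and (t_mir - mean) / std > z_thresh
        mir_excess = (t_mir - t_tir) > diff_thresh
        if hot_vs_background and mir_excess:
            flagged.append(i)
    return flagged

# A uniform background with one hot vent pixel at the end.
mir = [270.0] * 20 + [340.0]
tir = [268.0] * 20 + [275.0]
print(thermal_anomalies(mir, tir))  # → [20]
```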
SLAE–CPS: Smart Lean Automation Engine Enabled by Cyber-Physical Systems Technologies
Ma, Jing; Wang, Qiang; Zhao, Zhibiao
2017-01-01
In the context of Industry 4.0, the demand for the mass production of highly customized products will lead to complex products and an increasing demand for production system flexibility. Simply implementing lean production-based human-centered production or high automation to improve system flexibility is insufficient. Currently, lean automation (Jidoka) that utilizes cyber-physical systems (CPS) is considered a cost-efficient and effective approach for improving system flexibility under shrinking global economic conditions. Therefore, a smart lean automation engine enabled by CPS technologies (SLAE–CPS), which is based on an analysis of Jidoka functions and the smart capacity of CPS technologies, is proposed in this study to provide an integrated and standardized approach to design and implement a CPS-based smart Jidoka system. A set of comprehensive architecture and standardized key technologies should be presented to achieve the above-mentioned goal. Therefore, a distributed architecture that joins service-oriented architecture, agent, function block (FB), cloud, and Internet of things is proposed to support the flexible configuration, deployment, and performance of SLAE–CPS. Then, several standardized key techniques are proposed under this architecture. The first one is for converting heterogeneous physical data into uniform services for subsequent abnormality analysis and detection. The second one is a set of Jidoka scene rules, which is abstracted based on the analysis of the operator, machine, material, quality, and other factors in different time dimensions. These Jidoka rules can support executive FBs in performing different Jidoka functions. Finally, supported by the integrated and standardized approach of our proposed engine, a case study is conducted to verify the current research results. The proposed SLAE–CPS can serve as an important reference value for combining the benefits of innovative technology and proper methodology. PMID:28657577
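The "Jidoka scene rules" idea can be sketched as a tiny rule engine: uniform service records derived from heterogeneous sensor data are checked against rules whose actions drive the executive function blocks (stop the station, alert the operator). The rule fields, thresholds, and action names below are invented for illustration, not taken from SLAE-CPS:

```python
# Hedged sketch of rule-driven Jidoka decisions over uniform sensor records.
def evaluate_jidoka_rules(record, rules):
    """Return the actions of all rules whose predicate matches the record."""
    return [rule["action"] for rule in rules if rule["when"](record)]

# Hypothetical scene rules over operator/machine/quality factors.
rules = [
    {"when": lambda r: r["quality_defects"] > 0, "action": "stop_station"},
    {"when": lambda r: r["cycle_time_s"] > 45,   "action": "alert_operator"},
]

record = {"quality_defects": 1, "cycle_time_s": 50}
print(evaluate_jidoka_rules(record, rules))  # both rules fire
```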
Improving the Accuracy of Cloud Detection Using Machine Learning
NASA Astrophysics Data System (ADS)
Craddock, M. E.; Alliss, R. J.; Mason, M.
2017-12-01
Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90%, but performance varies seasonally, regionally and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms that determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, a process requiring thousands of man-hours to assess the success and failure of the algorithms and fine-tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is an ideal methodology to employ for cloud detection as it runs efficiently on large datasets, is robust to outliers and noise, and is able to deal with highly correlated predictors, such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii, and the predictors include visible and infrared imagery, topography and multi-spectral image products. The development of the cloud detection technique is realized in three steps. First, tuning of the RF models is completed to identify the optimal values of the number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of that used during training, and the predictions are compared to truth, the CMG cloud mask.
Initial results show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results is testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.
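The final comparison-to-truth step, scoring the predicted mask against the CMG cloud mask separately for day and night scenes, can be sketched as a small accuracy computation. This is a sketch only; the masks and scene flags below are invented, and the real evaluation runs over the 20-year CMG database:

```python
# Sketch of mask verification against a truth mask, split by day/night scenes.
def mask_accuracy(pred, truth, is_day):
    """Return (day_acc, night_acc, overall_acc) for binary clear/cloud masks."""
    def acc(pairs):
        pairs = list(pairs)
        return sum(p == t for p, t in pairs) / len(pairs) if pairs else 0.0
    day = [(p, t) for p, t, d in zip(pred, truth, is_day) if d]
    night = [(p, t) for p, t, d in zip(pred, truth, is_day) if not d]
    return acc(day), acc(night), acc(zip(pred, truth))

# Hypothetical pixel masks: 1 = cloud, 0 = clear.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
day   = [True, True, True, False, False, False]
print(mask_accuracy(pred, truth, day))
```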
NASA Technical Reports Server (NTRS)
Hussey, K. J.; Hall, J. R.; Mortensen, R. A.
1986-01-01
Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
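The key-frame interpolation step above, generating projection parameter sets for the animation frames between two key frames, can be sketched as simple linear interpolation over a parameter dictionary. The parameter names are illustrative; the original FORTRAN programs are not reproduced here:

```python
# Sketch of key-frame parameter interpolation for animation in-betweens.
def interpolate_params(key_a, key_b, n_between):
    """Linearly interpolate every parameter between two key-frame dicts,
    returning n_between intermediate frames (key frames excluded)."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)
        frames.append({k: key_a[k] + t * (key_b[k] - key_a[k]) for k in key_a})
    return frames

# Hypothetical projection parameters controlling the 3D perspective rendering.
key0 = {"azimuth": 0.0, "elevation": 30.0, "zoom": 1.0}
key1 = {"azimuth": 90.0, "elevation": 60.0, "zoom": 2.0}
print(interpolate_params(key0, key1, 2))
```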
Gong, Yuanzheng; Seibel, Eric J.
2017-01-01
Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capability, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. Building on the knowledge developed in the first year on how to provision and manage a federation of virtual machines through cloud management systems, in this second year we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and we started to build on-demand services on the established fabric, introducing the paradigm of Platform as a Service to assist with the execution of scientific workflows. We have enabled scientific workflows of stakeholders to run on multiple cloud resources at the scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.
A Robotic Platform for Corn Seedling Morphological Traits Characterization
Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu
2017-01-01
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
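The x-axis pixel density step described above can be pictured with a minimal sketch: after projecting the merged point cloud onto a vertical plane, the stem appears as a narrow, densely populated column along the x axis, so histogramming point counts per x-bin and keeping the bins near the peak isolates it. The bin width, density ratio, and sample points below are illustrative, not the platform's calibrated values:

```python
# Sketch of stem localization via x-axis point-density after 3D-to-2D projection.
def stem_x_range(points_2d, bin_width=1.0, density_ratio=0.5):
    """Return the x-bins whose point count exceeds density_ratio * peak count."""
    bins = {}
    for x, _y in points_2d:
        b = int(x // bin_width)
        bins[b] = bins.get(b, 0) + 1
    peak = max(bins.values())
    return sorted(b for b, n in bins.items() if n >= density_ratio * peak)

# A dense vertical stem column near x = 5 plus a few scattered leaf points.
points = [(5.2, float(y)) for y in range(20)] + \
         [(1.0, 3.0), (2.5, 4.0), (8.0, 5.0), (9.1, 6.0)]
print(stem_x_range(points))  # → [5]
```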
APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels
NASA Astrophysics Data System (ADS)
Klüser, L.; Killius, N.; Gesell, G.
2015-10-01
The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still build the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme. Nevertheless, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles. It is rather expressed as cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from being sure to reliably identify clear pixels to conditions of reliably identifying definitely cloudy pixels, depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-realtime use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach which also had been used for the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm) giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy it is based on. 
Furthermore, a couple of example results from NOAA-18 are presented.
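The tunable operating point that APOLLO_NG's probabilistic output enables can be sketched in a few lines: each pixel carries a cloud probability, and the user chooses the cutoff, low to reliably catch clouds, high to reliably keep only confident clouds, to suit the application. The probabilities and cutoffs below are invented for illustration, not APOLLO_NG values:

```python
# Sketch of thresholding a per-pixel cloud probability at a chosen operating point.
def cloud_mask(probabilities, operating_point):
    """Return a binary mask: True where cloud probability >= operating_point.
    A low operating point misses few clouds; a high one flags few false clouds."""
    return [p >= operating_point for p in probabilities]

probs = [0.05, 0.40, 0.65, 0.95]          # hypothetical per-pixel probabilities
print(cloud_mask(probs, 0.3))             # permissive: favors finding all cloud
print(cloud_mask(probs, 0.9))             # strict: favors confidently cloudy only
```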
NASA Astrophysics Data System (ADS)
NOH, Y. J.; Miller, S. D.; Heidinger, A. K.
2015-12-01
Many studies have demonstrated the utility of multispectral information from satellite passive radiometers, conventionally utilizing shortwave- and thermal-infrared bands, for detecting clouds and retrieving their properties globally. However, the satellite-derived cloud information comes mainly from cloud top or represents a vertically integrated property. This can produce a large bias in determining cloud phase characteristics, in particular for mixed-phase clouds, which are often observed to have supercooled liquid water at cloud top but a predominantly ice phase residing below. Current satellite retrieval algorithms may report these clouds simply as supercooled liquid without any further information regarding the presence of a sub-cloud-top ice phase. More accurate characterization of these clouds is very important for climate models and aviation applications. In this study, we present a physical basis and preliminary results for the algorithm development of supercooled liquid-topped mixed-phase cloud detection using satellite radiometer observations. The detection algorithm is based on differential absorption properties between liquid and ice particles in the shortwave-infrared bands. Solar reflectance data in narrow bands at 1.6 μm and 2.25 μm are used to optically probe below cloud top to distinguish supercooled liquid-topped clouds with and without an underlying mixed-phase component. Varying solar/sensor geometry and cloud optical properties are also considered. The spectral band combination utilized by the algorithm is currently available on the Suomi NPP Visible/Infrared Imaging Radiometer Suite (VIIRS), the Himawari-8 Advanced Himawari Imager (AHI), and the future GOES-R Advanced Baseline Imager (ABI). When tested on simulated cloud fields from the WRF model and synthetic ABI data, favorable results were shown with reasonable threat scores (0.6-0.8) and false alarm rates (0.1-0.2).
An ARM/NSA case study applied to VIIRS data also indicated promising potential of the algorithm.
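The differential-absorption idea can be reduced to a toy spectral test. Loudly hedged assumptions here: the sign of the effect (an ice layer below the liquid top depressing the deeper-probing 1.6 μm reflectance relative to 2.25 μm) and the ratio threshold are illustrative guesses for exposition, not the algorithm's physics tables or tuned values:

```python
# Toy sketch of a SWIR band-ratio test for sub-top ice; threshold is invented.
def mixed_phase_flag(refl_160, refl_225, ratio_threshold=1.2):
    """Flag a supercooled-liquid-topped pixel whose 1.6/2.25 µm reflectance
    ratio suggests an absorbing ice layer below cloud top (illustrative only)."""
    ratio = refl_160 / refl_225
    return ratio < ratio_threshold

print(mixed_phase_flag(0.30, 0.30))  # depressed 1.6 µm reflectance: flagged
print(mixed_phase_flag(0.45, 0.30))  # bright at 1.6 µm: liquid-only, not flagged
```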
4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas
2016-04-01
The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR holds large potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation of agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires setting triggers that can detect removal or moving of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which performs benchmarking of co-registration, 3D-georeferencing and also fully automatic detection of events (e.g. removal/moving of reflectors or scanner).
Secondly, we will show our empirical findings of an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and also limitations of fully automated, near real-time 4D LiDAR monitoring in geosciences.
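The trigger idea described above, detecting removal or movement of the tie reflectors between scans, can be sketched as a simple centroid comparison. This is a minimal illustration under assumed data structures (per-reflector point clusters keyed by name); the tolerance value is hypothetical, not the projects' operational setting.

```python
import math

def centroid(points):
    """Mean position of a reflector's point cluster (x, y, z)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def check_reflectors(ref_epoch, new_epoch, tol_m=0.05):
    """Compare per-reflector centroids between two scan epochs and flag
    any reflector that disappeared or whose apparent displacement
    exceeds the tolerance, indicating it may have been moved."""
    alerts = []
    for name, ref_pts in ref_epoch.items():
        if name not in new_epoch:
            alerts.append((name, "missing"))
            continue
        d = math.dist(centroid(ref_pts), centroid(new_epoch[name]))
        if d > tol_m:
            alerts.append((name, "moved %.3f m" % d))
    return alerts
```

Such a check would run after each daily scan, before co-registration, so that a degraded reference network is caught immediately.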
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trepte, Q.Z.; Minnis, P.; Heck, P.W.
2005-03-18
Cloud detection using satellite measurements presents a big challenge near the terminator, where the visible (VIS; 0.65 µm) channel becomes less reliable and the reflected solar component of the solar infrared 3.9-µm channel reaches very low signal-to-noise ratios. As a result, clouds are underestimated near the terminator and at night over land and ocean in previous Atmospheric Radiation Measurement (ARM) Program cloud retrievals using Geostationary Operational Environmental Satellite (GOES) imager data. Cloud detection near the terminator has always been a challenge. For example, comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer [AVHRR]) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60°N indicate significant amounts of missing clouds from AVHRR because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products and GLAS over the same regions also show the same difficulty in the MODIS cloud retrieval (Pavolonis and Heidinger 2005). Consistent detection of clouds at all times of day is needed to provide reliable cloud and radiation products for ARM and other research efforts involving the modeling of clouds and their interaction with the radiation budget. To minimize inconsistencies between daytime and nighttime retrievals, this paper develops an improved twilight and nighttime cloud mask using GOES-9, 10, and 12 imager data over the ARM sites and the continental United States (CONUS).
NASA Technical Reports Server (NTRS)
Trepte, Q. Z.; Minnis, P.; Heck, R. W.; Palikonda, R.
2005-01-01
Cloud detection using satellite measurements presents a big challenge near the terminator, where the visible (VIS; 0.65 µm) channel becomes less reliable and the reflected solar component of the solar infrared 3.9-µm channel reaches very low signal-to-noise ratios. As a result, clouds are underestimated near the terminator and at night over land and ocean in previous Atmospheric Radiation Measurement (ARM) Program cloud retrievals using Geostationary Operational Environmental Satellite (GOES) imager data. Cloud detection near the terminator has always been a challenge. For example, comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer (AVHRR)) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60°N indicate significant amounts of missing clouds from AVHRR because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products and GLAS over the same regions also show the same difficulty in the MODIS cloud retrieval (Pavolonis and Heidinger 2005). Consistent detection of clouds at all times of day is needed to provide reliable cloud and radiation products for ARM and other research efforts involving the modeling of clouds and their interaction with the radiation budget. To minimize inconsistencies between daytime and nighttime retrievals, this paper develops an improved twilight and nighttime cloud mask using GOES-9, 10, and 12 imager data over the ARM sites and the continental United States (CONUS).
A new cloud and aerosol layer detection method based on micropulse lidar measurements
NASA Astrophysics Data System (ADS)
Zhao, Chuanfeng; Wang, Yuzhao; Wang, Qianqian; Li, Zhanqing; Wang, Zhien; Liu, Dong
2014-06-01
This paper introduces a new algorithm to detect aerosols and clouds based on micropulse lidar measurements. A semidiscretization processing technique is first used to inhibit the impact of noise, which increases with distance. The value distribution equalization method, which reduces the magnitude of signal variations with distance, is then introduced. Combined with empirical threshold values, we determine whether the signal waves indicate clouds or aerosols. This method can separate clouds and aerosols with high accuracy, although the differentiation between aerosols and clouds is subject to more uncertainty depending on the thresholds selected. Compared with the existing Atmospheric Radiation Measurement program lidar-based cloud product, the new method appears more reliable and detects more clouds with high bases. The algorithm is applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu sites. At the SGP site, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring and shows bimodal vertical distributions with maximum occurrences at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. The dominant clouds are stratiform in winter and convective in summer. By contrast, the cloud frequency at the Taihu site shows no clear seasonal variation and the maximum occurrence is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at the SGP site. A seasonal analysis of cloud base occurrence frequency suggests that stratiform clouds dominate at the Taihu site.
NASA Astrophysics Data System (ADS)
Kovalskyy, V.; Roy, D. P.
2014-12-01
The successful February 2013 launch of the Landsat 8 satellite is continuing the 40+ year legacy of the Landsat mission. The payload includes the Operational Land Imager (OLI) that has a new 1370 nm band designed to monitor cirrus clouds and the Thermal Infrared Sensor (TIRS) that together provide 30m low, medium and high confidence cloud detections and 30m low and high confidence cirrus cloud detections. A year of Landsat 8 data over the Conterminous United States (CONUS), composed of 11,296 acquisitions, was analyzed comparing the spatial and temporal incidence of these cloud and cirrus states. This revealed (i) 36.5% of observations were detected with high confidence cloud with spatio-temporal patterns similar to those observed by previous Landsat 7 cloud analyses, (ii) 29.2% were high confidence cirrus, (iii) 20.9% were both high confidence cloud and high confidence cirrus, (iv) 8.3% were detected as high confidence cirrus but not as high confidence cloud. The results illustrate the value of the cirrus band for improved Landsat 8 terrestrial monitoring but imply that the historical CONUS Landsat archive has a similar 8% of undetected cirrus contaminated pixels. The implications for long term Landsat time series records, including the global Web Enabled Landsat Data (WELD) product record, are discussed.
NASA Technical Reports Server (NTRS)
Kalia, Subodh; Ganguly, Sangram; Li, Shuang; Nemani, Ramakrishna R.
2017-01-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud shadow mask from geostationary satellite data is critical. The key to the success of most existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We trained CloudCNN on a multi-GPU Nvidia Devbox cluster and deployed the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 full-disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.
Driver Vigilance in Automated Vehicles: Hazard Detection Failures Are a Matter of Time.
Greenlee, Eric T; DeLucia, Patricia R; Newton, David C
2018-06-01
The primary aim of the current study was to determine whether monitoring the roadway for hazards during automated driving results in a vigilance decrement. Although automated vehicles are relatively novel, the nature of human-automation interaction within them has the classic hallmarks of a vigilance task. Drivers must maintain attention for prolonged periods of time to detect and respond to rare and unpredictable events, for example, roadway hazards that automation may be ill equipped to detect. Given the similarity with traditional vigilance tasks, we predicted that drivers of a simulated automated vehicle would demonstrate a vigilance decrement in hazard detection performance. Participants "drove" a simulated automated vehicle for 40 minutes. During that time, their task was to monitor the roadway for roadway hazards. As predicted, hazard detection rate declined precipitously, and reaction times slowed as the drive progressed. Further, subjective ratings of workload and task-related stress indicated that sustained monitoring is demanding and distressing and it is a challenge to maintain task engagement. Monitoring the roadway for potential hazards during automated driving results in workload, stress, and performance decrements similar to those observed in traditional vigilance tasks. To the degree that vigilance is required of automated vehicle drivers, performance errors and associated safety risks are likely to occur as a function of time on task. Vigilance should be a focal safety concern in the development of vehicle automation.
Modeling the Diffuse Cloud-Top Optical Emissions from Ground and Cloud Flashes
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Koshak, William
2008-01-01
A number of studies have indicated that the diffuse cloud-top optical emissions from intra-cloud (IC) lightning are brighter than that from normal negative cloud-to-ground (CG) lightning, and hence would be easier to detect from a space-based sensor. The primary reason provided to substantiate this claim has been that the IC is at a higher altitude within the cloud and therefore is less obscured by the cloud multiple scattering medium. CGs at lower altitudes embedded deep within the cloud are more obscured, so CG detection is thought to be more difficult. However, other authors claim that because the CG source current (and hence luminosity) is typically substantially larger than IC currents, the greater CG source luminosity is large enough to overcome the effects of multiple scattering. These investigators suggest that the diffuse cloud top emissions from CGs are brighter than from ICs, and hence are easier to detect from space. Still other investigators claim that the detection efficiency of CGs and ICs is about the same because modern detector sensitivity is good enough to "see" either flash type no matter which produces a brighter cloud top emission. To better assess which of these opinions should be accepted, we introduce an extension of a Boltzmann lightning radiative transfer model previously developed. It considers characteristics of the cloud (geometry, dimensions, scattering properties) and specific lightning channel properties (length, geometry, location, current, optical wave front propagation speed/direction). As such, it represents the most detailed modeling effort to date. At least in the few cases studied thus far, it was found that IC flashes appear brighter at cloud top than the lower altitude negative ground flashes, but additional model runs are to be examined before finalizing our general conclusions.
Cloud Statistics and Discrimination in the Polar Regions
NASA Astrophysics Data System (ADS)
Chan, M.; Comiso, J. C.
2012-12-01
Despite their important role in the climate system, cloud cover and its statistics are poorly known, especially in the polar regions, where clouds are difficult to discriminate from snow-covered surfaces. The advent of the A-train, which includes the Aqua/MODIS, CALIPSO/CALIOP and CloudSat/CPR sensors, has provided an opportunity to improve our ability to accurately characterize the cloud cover. MODIS provides global coverage at relatively good temporal and spatial resolution, while CALIOP and CPR provide limited nadir sampling but accurate characterization of the vertical structure and phase of the cloud cover. Over the polar regions, cloud detection from a passive sensor like MODIS is challenging because of the presence of cold and highly reflective surfaces such as snow, sea ice, glaciers, and ice sheets, which have surface signatures similar to those of clouds. On the other hand, active sensors such as CALIOP and CPR are not only very sensitive to the presence of clouds but can also provide information about their microphysical characteristics. However, these nadir-looking sensors have sparse spatial coverage and their global data can have spatial gaps of up to 100 km. We developed a polar cloud detection system for MODIS that is trained using collocated data from CALIOP and CPR. In particular, we employ a machine learning system that reads the radiative profile observed by MODIS and determines whether the field of view is cloudy or clear. Results have shown that the improved cloud detection scheme performs better than typical cloud mask algorithms on a validation data set not used for training. A one-year data set was generated, and results indicate that daytime cloud detection accuracies improved from 80.1% to 92.6% (over sea ice) and 71.2% to 87.4% (over ice sheet) with CALIOP data used as the baseline. Significant improvements are also observed during nighttime, where cloud detection accuracies increase by 19.8% (over sea ice) and 11.6% (over ice sheet).
The immediate impact of the new algorithm is that it can minimize the large biases of MODIS-derived cloud amount over the polar regions and thus yield more realistic, high-quality global cloud statistics. In particular, our results show that cloud fraction in the Arctic is typically 81.2% during daytime and 84.0% during nighttime. This is significantly higher than the 71.8% and 58.5%, respectively, derived from the standard MODIS cloud product.
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas
2015-02-10
Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.
Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas
2010-01-01
This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
Lightning studies using LDAR and LLP data
NASA Technical Reports Server (NTRS)
Forbes, Gregory S.
1993-01-01
This study intercompared lightning data from LDAR and LLP systems in order to learn more about the spatial relationships between thunderstorm electrical discharges aloft and lightning strikes to the surface. The ultimate goal of the study is to provide information that can be used to improve the process of real-time detection and warning of lightning by weather forecasters who issue lightning advisories. The Lightning Detection and Ranging (LDAR) System provides data on electrical discharges from thunderstorms that includes cloud-ground flashes as well as lightning aloft (within cloud, cloud-to-cloud, and sometimes emanating from cloud to clear air outside or above cloud). The Lightning Location and Protection (LLP) system detects primarily ground strikes from lightning. Thunderstorms typically produce LDAR signals aloft prior to the first ground strike, so that knowledge of preferred positions of ground strikes relative to the LDAR data pattern from a thunderstorm could allow advance estimates of enhanced ground strike threat. Studies described in the report examine the position of LLP-detected ground strikes relative to the LDAR data pattern from the thunderstorms. The report also describes other potential approaches to the use of LDAR data in the detection and forecasting of lightning ground strikes.
A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements
NASA Technical Reports Server (NTRS)
Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.
2016-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
Looking Down Through the Clouds – Optical Attenuation through Real-Time Clouds
NASA Astrophysics Data System (ADS)
Burley, J.; Lazarewicz, A.; Dean, D.; Heath, N.
Detecting and identifying nuclear explosions in the atmosphere and on the surface of the Earth is critical for the Air Force Technical Applications Center (AFTAC) treaty monitoring mission. Optical signals, from surface or atmospheric nuclear explosions detected by satellite sensors, are attenuated by the atmosphere and clouds. Clouds present a particularly complex challenge as they cover up to seventy percent of the earth's surface. Moreover, their highly variable and diverse nature requires physics-based modeling. Determining the attenuation for each optical ray-path is uniquely dependent on the source geolocation, the specific optical transmission characteristics along that ray path, and sensor detection capabilities. This research details a collaborative AFTAC and AFIT effort to fuse worldwide weather data, from a variety of sources, to provide near-real-time profiles of atmospheric and cloud conditions and the resulting radiative transfer analysis for virtually any wavelength(s) of interest from source to satellite. AFIT has developed a means to model global clouds using the U.S. Air Force’s World Wide Merged Cloud Analysis (WWMCA) cloud data in a new toolset that enables radiance calculations through clouds from UV to RF wavelengths.
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. Adopting a statistical model to detect clouds instead of a subjective thresholding-based method is the main idea of this study. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on the physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction is based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images. Consequently, a co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing agricultural landscapes, snow areas, and islands are tested.
Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
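The pipeline above pairs ACCA/Fmask-style spectral features with a trained classifier. The sketch below shows the shape of that pairing; the feature formulas are representative of the genre (brightness, a normalized snow index, an NIR/red ratio) rather than the paper's exact feature set, and the linear decision function only stands in for a trained SVM's w·x + b.

```python
def cloud_features(blue, green, red, nir, swir):
    """A few per-pixel spectral features in the spirit of ACCA/Fmask
    screening (illustrative choices, not the paper's exact set):
    overall visible brightness, the normalized difference snow index
    used to separate snow from cloud, and an NIR/red ratio that is near
    one for bright clouds but higher for vegetation."""
    ndsi = (green - swir) / (green + swir) if green + swir else 0.0
    brightness = (blue + green + red) / 3.0
    ratio = nir / red if red else 0.0
    return [brightness, ndsi, ratio]

def linear_score(features, weights, bias):
    """Decision function of a trained linear classifier; an SVM decision
    function has this same w.x + b form. Positive scores mean 'cloud'.
    In practice the weights come from training, not hand-tuning."""
    return sum(w * f for w, f in zip(weights, features)) + bias
```

The point of the threshold-free approach is that the decision boundary in this feature space is learned from labeled pixels instead of being fixed per-feature thresholds.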
Rai, Rashmi; Sahoo, Gadadhar; Mehfuz, Shabana
2015-01-01
Today, most organizations rely on their age-old legacy applications to support their business-critical systems. However, there are several critical concerns, such as maintainability and scalability issues, associated with legacy systems. Against this background, cloud services offer a more agile and cost-effective platform to support business applications and IT infrastructure. The adoption of cloud services has been increasing recently, and so has academic research in cloud migration; however, there is a genuine need for secondary studies to further strengthen this research. The primary objective of this paper is to scientifically and systematically identify, categorize, and compare the existing research work in the area of legacy-to-cloud migration. The paper has also endeavored to consolidate the research on security issues, a prime factor hindering the adoption of cloud, by classifying the studies on secure cloud migration. An SLR (Systematic Literature Review) of thirty selected papers, published from 2009 to 2014, was conducted to properly understand the nuances of the security framework. To categorize the selected studies, the authors have proposed a conceptual model for cloud migration, which has resulted in a resource base of existing solutions for cloud migration. This study concludes that cloud migration research is at a seminal stage but is simultaneously evolving and maturing, with increasing participation from academics and industry alike. The paper also identifies the need for a secure migration model, which can fortify an organization's trust in cloud migration and facilitate the necessary tool support to automate the migration process.
Detection of long duration cloud contamination in hyper-temporal NDVI imagery
NASA Astrophysics Data System (ADS)
Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.
2012-04-01
NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud cover can significantly influence its precision in areas where persistent cloud prevails. Therefore, quantifying errors related to cloud contamination is essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb '03-Dec '09) was used after the necessary pre-processing using quality flags and an upper envelope filter (ASAVGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. After classification, the 97-cluster image was selected as the best classification with the help of divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration cloud affects the normal temporal progression of NDVI and causes anomalies. Of the 97 clusters, 32 were found to contain cloud contamination. Cloud contamination was more prominent in areas with high rainfall. This study can help prevent the propagation of errors caused by long-duration cloud contamination into regional land cover mapping and monitoring.
Powerful Hurricane Irma Seen in 3D by NASA's CloudSat
2017-09-08
NASA's CloudSat satellite flew over Hurricane Irma on Sept. 6, 2017 at 1:45 p.m. EDT (17:45 UTC) as the storm was approaching Puerto Rico in the Atlantic Ocean. Hurricane Irma contained estimated maximum sustained winds of 185 miles per hour (160 knots) with a minimum pressure of 918 millibars. CloudSat transected the eastern edge of Hurricane Irma's eyewall, revealing details of the storm's cloud structure beneath its thick canopy of cirrus clouds. The CloudSat Cloud Profiling Radar excels in detecting the organization and placement of cloud layers beneath a storm's cirrus canopy, which are not readily detected by other satellite sensors. The CloudSat overpass reveals the inner details beneath the cloud tops of this large system; intense areas of convection with moderate to heavy rainfall (deep red and pink colors), cloud-free areas (moats) in between the inner and outer cloud bands of Hurricane Irma and cloud top heights averaging around 9 to 10 miles (15 to 16 kilometers). Lower values of reflectivity (areas of green and blue) denote smaller-sized ice and water particle sizes typically located at the top of a storm system (in the anvil area). The Cloud Profiling Radar loses signal at around 3 miles (5 kilometers) in height (in the melting layer) due to water (ice) particles larger than 0.12 inches (3 millimeters) in diameter. Moderate to heavy rainfall occurs in these areas where signal weakening is detectable. Smaller cumulus and cumulonimbus cloud types are evident as CloudSat moves farther south, beneath the thick cirrus canopy. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21947
NASA Astrophysics Data System (ADS)
Miles, Katie; Willis, Ian; Benedek, Corinne; Williamson, Andrew; Tedesco, Marco
2017-04-01
Supraglacial lakes (SGLs) on the Greenland Ice Sheet (GrIS) are an important component of the ice sheet's mass balance and hydrology, with their drainage affecting ice dynamics. This study uses imagery from the recently launched Sentinel-1A Synthetic Aperture Radar (SAR) to investigate SGLs in West Greenland. SAR can image through cloud and in darkness, overcoming some of the limitations of commonly used optical sensors. A semi-automated algorithm is developed to detect surface lakes from Sentinel images during the 2015 summer. It generally detects water in all locations where a Landsat-8 NDWI classification (with a relatively high threshold value) detects water. A combined set of images from Landsat-8 and Sentinel-1 is used to track lake behaviour at a temporal resolution comparable to that of MODIS, but at a higher spatial resolution. A fully automated lake drainage detection algorithm is used to investigate both rapid and slow drainages for both small and large lakes through the summer. Our combined Landsat-Sentinel dataset, with a temporal resolution of three days, could track smaller lakes (mean 0.089 km2) than are resolvable in MODIS (minimum 0.125 km2). Small lake drainage events (lakes smaller than can be detected using MODIS) were found to occur at lower elevations ( 200 m) and slightly earlier in the melt season than larger events, as were slow lake drainage events compared to rapid events. The Sentinel imagery allows the analysis to be extended manually into the early winter to calculate the dates and elevations of lake freeze-through more precisely than is possible with optical imagery (mean 30 August, 1270 m mean elevation). Finally, the Sentinel imagery allows subsurface lakes (which are invisible to optical sensors) to be detected, and, for the first time, their dates of appearance and freeze-through to be calculated (mean 9 August and 7 October, respectively).
These subsurface lakes occur at higher elevations than the surface lakes detected in this study (1593 m mean elevation). Sentinel imagery therefore provides great potential for tracking melting, water movement and freezing within the firn zone of the GrIS.
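Two building blocks of the workflow above are easy to sketch: the NDWI used for the optical lake classification, and a rapid-drainage test on a lake's area time series. The drop fraction and time window below are illustrative assumptions, not the thresholds of the paper's automated algorithm.

```python
def ndwi(green, nir):
    """Normalized Difference Water Index from green and NIR reflectance;
    water pixels have high NDWI, so lakes are mapped by thresholding it."""
    return (green - nir) / (green + nir)

def rapid_drainage_events(areas_km2, dates, frac=0.8, max_days=4):
    """Flag observations where lake area drops by more than `frac`
    between acquisitions at most `max_days` apart, a simplified version
    of an automated rapid-drainage test. Thresholds are illustrative."""
    events = []
    for i in range(1, len(areas_km2)):
        dt = dates[i] - dates[i - 1]
        if (dt <= max_days and areas_km2[i - 1] > 0 and
                areas_km2[i] < (1 - frac) * areas_km2[i - 1]):
            events.append(dates[i])
    return events
```

A slow-drainage test would instead look for a sustained decline across several consecutive acquisitions rather than a single large drop.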
BioBlocks: Programming Protocols in Biology Made Easier.
Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso
2017-07-21
The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.
Detection and tracking of gas plumes in LWIR hyperspectral video sequence data
NASA Astrophysics Data System (ADS)
Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.
2013-05-01
Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of hyperspectral imagery over conventional RGB imagery in the gas plume detection problem is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few principal components, resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume, including K-means, spectral clustering, and the Ginzburg-Landau functional.
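The PCA-then-cluster pipeline described above can be sketched as below. The synthetic spectra, component count, and deterministic k-means initialisation are illustrative assumptions; the paper's Midway histogram equalization, spectral clustering, and Ginzburg-Landau comparisons are omitted.

```python
import numpy as np

def pca_project(pixels, n_components=2):
    """Project each pixel spectrum onto the leading principal
    components -- the 'spectral filter' step described above."""
    X = pixels - pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = PCs
    return X @ Vt[:n_components].T

def kmeans(X, k=2, iters=20):
    """Minimal Lloyd's k-means on the reduced pixels, with a simple
    deterministic initialisation for reproducibility."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Synthetic 10-band "hyperspectral" pixels: plume-like vs. background.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.8, 0.05, (50, 10)),   # plume spectra
                    rng.normal(0.2, 0.05, (50, 10))])  # background
labels = kmeans(pca_project(pixels), k=2)
```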
Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc
2013-01-01
The surveillance of Surgical Site Infections (SSI) contributes to risk management in French hospitals. Manual identification of infections is costly and time-consuming, and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies promises to improve this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, first, of enrichment and concept extraction from full-text reports using NOMINDEX and, second, of text similarity measurement using a vector space model. The text-based detection was compared to the conventional strategy based on self-declaration and to automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision of 92% and 40% respectively, and confirmed the value of reusing full-text medical reports to perform automated detection of SSI.
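The vector-space similarity step can be illustrated with a minimal TF-IDF sketch. The toy documents are invented, and the actual system builds its vectors from NOMINDEX concept extraction, which is not reproduced here.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF term-weight vectors for a small corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

# Invented report snippets: the first two share infection terms.
docs = ["wound infection after craniotomy",
        "infection of surgical site after craniotomy",
        "routine postoperative follow up"]
vecs = tfidf_vectors(docs)
sim_infected = cosine(vecs[0], vecs[1])
sim_routine = cosine(vecs[0], vecs[2])
```

A report is then flagged when its similarity to known SSI reports exceeds that to routine ones, which is the intuition behind the recall/precision trade-off quoted above.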
A Study of Global Cirrus Cloud Morphology with AIRS Cloud-clear Radiances (CCRs)
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Gong, Jie
2012-01-01
Version 6 (V6) AIRS cloud-clear radiances (CCRs) are used to derive the cloud-induced radiance (Tcir = Tb - CCR) at infrared frequencies whose weighting functions peak in the middle troposphere. The significantly improved V6 CCR product allows a more accurate estimation of the expected clear-sky radiance, as if clouds were absent. Where strong cloud scattering is present, the CCR becomes unreliable, which is reflected in its estimated uncertainty, and interpolation is employed to replace such CCR values. We find that Tcir derived from this CCR method is much better than that from other methods and detects more clouds in the upper and lower troposphere as well as in the polar regions, where cloud detection is particularly challenging. The cloud morphology derived from the V6 test month, as well as some artifacts, will be shown.
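The cloud-induced-radiance computation with replacement of unreliable CCR values can be sketched as follows. The 2 K uncertainty cutoff and the along-scan linear interpolation are illustrative assumptions, not the AIRS V6 procedure itself.

```python
import numpy as np

def cloud_induced_radiance(tb, ccr, ccr_uncertainty, max_uncertainty=2.0):
    """Compute Tcir = Tb - CCR along a scan, replacing CCR values whose
    estimated uncertainty is too large (e.g. under strong cloud
    scattering) by linear interpolation from neighbouring good values."""
    ccr = np.asarray(ccr, dtype=float).copy()
    bad = np.asarray(ccr_uncertainty, dtype=float) > max_uncertainty
    good = np.flatnonzero(~bad)
    ccr[bad] = np.interp(np.flatnonzero(bad), good, ccr[good])
    return np.asarray(tb, dtype=float) - ccr

# One unreliable CCR sample (uncertainty 5 K) gets interpolated
# from its neighbours before Tcir is formed.
tcir = cloud_induced_radiance(tb=[250.0, 240.0, 255.0],
                              ccr=[260.0, 300.0, 261.0],
                              ccr_uncertainty=[0.5, 5.0, 0.5])
```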
Very high cloud detection in more than two decades of HIRS data
NASA Astrophysics Data System (ADS)
Kolat, Utkan; Menzel, W. Paul; Olson, Erik; Frey, Richard
2013-04-01
This paper reports on the use of High-resolution Infrared Radiation Sounder (HIRS) measurements to infer the presence of upper tropospheric and lower stratospheric (UT/LS) clouds. UT/LS cloud detection is based on the fact that, when viewing an opaque UT/LS cloud that fills the sensor field of view, positive lapse rates above the tropopause cause a more absorbing CO2 or H2O-sensitive spectral band to measure a brightness temperature warmer than that of a less absorbing or nearly transparent infrared window spectral band. The HIRS sensor has flown on 16 polar-orbiting satellites from TIROS-N through NOAA-19 and Metop-A and -B, forming the only 30 year record that includes H2O and CO2-sensitive spectral bands enabling the detection of these UT/LS clouds. Comparison with collocated Cloud-Aerosol Lidar with Orthogonal Polarization data reveals that 97% of the HIRS UT/LS cloud determinations are within 2.5 km of the tropopause (defined as the coldest level in the National Centers for Environmental Prediction Global Data Assimilation System); more clouds are found above the tropopause than below. From NOAA-14 data spanning 1995 through 2005, we find indications of UT/LS clouds in 0.7% of the observations from 60N to 60S using CO2 absorption bands; however, in the region of the Inter-Tropical Convergence Zone (ITCZ), this increases to 1.7%. During El Niño years, UT/LS clouds shift eastward out of their normal location in the western Pacific region. Monthly trends from 1987 through 2011 using data from NOAA-10 onwards show decreases in UT/LS cloud detection in the region of the ITCZ from 1987 until 1996, increases until 2001, and decreases thereafter.
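The detection principle above reduces to a brightness-temperature comparison: over an opaque cloud at or above the tropopause, the positive stratospheric lapse rate makes the more absorbing band read warmer than the window band. A minimal sketch, with an illustrative 0.5 K margin rather than the published HIRS threshold:

```python
def utls_cloud_flag(bt_absorbing, bt_window, margin=0.5):
    """Flag a possible opaque UT/LS cloud filling the field of view.

    The more absorbing CO2/H2O band peaks higher in the atmosphere, so
    it reads WARMER than the nearly transparent window band only when
    the radiating surface is at or above the tropopause, where
    temperature increases with height. The 0.5 K margin is illustrative.
    """
    return bt_absorbing - bt_window > margin

# Opaque cloud near the tropopause: absorbing band slightly warmer.
above = utls_cloud_flag(bt_absorbing=212.0, bt_window=210.0)
# Clear sky: the window band sees the warm surface instead.
clear = utls_cloud_flag(bt_absorbing=230.0, bt_window=285.0)
```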
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has consequences for how operators later handle such failures. A greater potential for operator errors may be expected when an automatic system fails to diagnose a fault than when it fails to detect one.
External Influences on Modeled and Observed Cloud Trends
NASA Technical Reports Server (NTRS)
Marvel, Kate; Zelinka, Mark; Klein, Stephen A.; Bonfils, Celine; Caldwell, Peter; Doutriaux, Charles; Santer, Benjamin D.; Taylor, Karl E.
2015-01-01
Understanding the cloud response to external forcing is a major challenge for climate science. This crucial goal is complicated by intermodel differences in simulating present and future cloud cover and by observational uncertainty. This is the first formal detection and attribution study of cloud changes over the satellite era. Presented herein are CMIP5 (Coupled Model Intercomparison Project - Phase 5) model-derived fingerprints of externally forced changes to three cloud properties: the latitudes at which the zonally averaged total cloud fraction (CLT) is maximized or minimized, the zonal average CLT at these latitudes, and the height of high clouds at these latitudes. By considering simultaneous changes in all three properties, the authors define a coherent multivariate fingerprint of cloud response to external forcing and use models from phase 5 of CMIP (CMIP5) to calculate the average time to detect these changes. It is found that given perfect satellite cloud observations beginning in 1983, the models indicate that a detectable multivariate signal should have already emerged. A search is then made for signals of external forcing in two observational datasets: ISCCP (International Satellite Cloud Climatology Project) and PATMOS-x (Advanced Very High Resolution Radiometer (AVHRR) Pathfinder Atmospheres - Extended). The datasets are both found to show a poleward migration of the zonal CLT pattern that is incompatible with forced CMIP5 models. Nevertheless, a detectable multivariate signal is predicted by models over the PATMOS-x time period and is indeed present in the dataset. Despite persistent observational uncertainties, these results present a strong case for continued efforts to improve these existing satellite observations, in addition to planning for new missions.
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the K-means method. The first class includes cloud, smoke and snow; the second includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method is tested with MODIS data acquired at different times and under different underlying surface conditions. Visual inspection of the results shows that the algorithm can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, providing a good foundation for a subsequent fire detection step.
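The two-stage logic (K-means split, then multi-spectral thresholds on the cloud/smoke/snow class) can be sketched as below. The band choices and threshold values are hypothetical placeholders; the abstract does not publish the ones actually used.

```python
import numpy as np

def refine_cloud_mask(first_class, r_vis, bt_11um, bt_37um,
                      vis_min=0.3, bt11_max=273.0, btd_min=8.0):
    """Second-stage multi-spectral thresholds applied to the K-means
    class containing cloud, smoke and snow. Clouds are bright in the
    visible and cold at 11 um; a large 3.7 um - 11 um brightness-
    temperature difference helps reject snow and smoke. All thresholds
    here are illustrative placeholders."""
    btd = np.asarray(bt_37um) - np.asarray(bt_11um)
    return (np.asarray(first_class)
            & (np.asarray(r_vis) > vis_min)
            & (np.asarray(bt_11um) < bt11_max)
            & (btd > btd_min))

# Three pixels: a cloud, a warm smoke plume, and a pixel K-means
# already assigned to the surface class.
mask = refine_cloud_mask(first_class=[True, True, False],
                         r_vis=[0.5, 0.4, 0.6],
                         bt_11um=[250.0, 280.0, 250.0],
                         bt_37um=[265.0, 290.0, 265.0])
```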
Casting Light and Shadows on a Saharan Dust Storm
NASA Technical Reports Server (NTRS)
2003-01-01
On March 2, 2003, near-surface winds carried a large amount of Saharan dust aloft and transported the material westward over the Atlantic Ocean. These observations from the Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite depict an area near the Cape Verde Islands (situated about 700 kilometers off of Africa's western coast) and provide images of the dust plume along with measurements of its height and motion. Tracking the three-dimensional extent and motion of air masses containing dust or other types of aerosols provides data that can be used to verify and improve computer simulations of particulate transport over large distances, with application to enhancing our understanding of the effects of such particles on meteorology, ocean biological productivity, and human health.MISR images the Earth by measuring the spatial patterns of reflected sunlight. In the upper panel of the still image pair, the observations are displayed as a natural-color snapshot from MISR's vertical-viewing (nadir) camera. High-altitude cirrus clouds cast shadows on the underlying ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated stereoscopic processing of MISR's multi-angle imagery show the cirrus clouds (yellow areas) to be situated about 12 kilometers above sea level. The distinctive spatial patterns of these clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. For most of the dust layer, which is spatially much more homogeneous, the stereoscopic approach was unable to retrieve elevation data. 
However, the edges of shadows cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of the dust layer's height, and indicate that the top of the layer is only about 2.5 kilometers above sea level. Motion of the dust and clouds is directly observable with the assistance of the multi-angle 'fly-over' animation (below). The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the 70-degree backward image. Much of the south-to-north shift in the position of the clouds is due to geometric parallax between the nine view angles (rather than true motion), whereas the west-to-east motion is due to actual motion of the clouds over the seven minutes during which all nine cameras observed the scene. MISR's automated data processing retrieved a primarily westerly (eastward) motion of these clouds with speeds of 30-40 meters per second. Note that there is much less geometric parallax for the cloud shadows, owing to the relatively low altitude of the dust layer upon which the shadows are cast (the amount of parallax is proportional to elevation, and a feature at the surface would have no geometric parallax at all); however, the westerly motion of the shadows matches the actual motion of the clouds. The automated processing was not able to resolve a velocity for the dust plume, but by manually tracking dust features within the plume images that comprise the animation sequence, we can derive an easterly (westward) speed of about 16 meters per second. These analyses and visualizations of the MISR data demonstrate not only that the cirrus clouds and dust are separated significantly in elevation, but that they exist in completely different wind regimes, with the clouds moving toward the east and the dust moving toward the west. 
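The statement that parallax is proportional to elevation can be made concrete: for a static feature seen from two along-track view angles, the ground-projected disparity is d = h (tan a1 - tan a2), so height follows directly. A minimal sketch; the 12.42 km disparity is an invented number chosen to land near the ~12 km cirrus altitude, and real MISR retrievals must additionally separate true cloud motion from parallax.

```python
import math

def feature_height_km(disparity_km, angle1_deg, angle2_deg):
    """Height from along-track stereo parallax between two view
    angles, assuming a static feature:
        disparity = h * (tan(a1) - tan(a2))
    A feature at the surface (h = 0) shows no parallax at all."""
    return disparity_km / (math.tan(math.radians(angle1_deg))
                           - math.tan(math.radians(angle2_deg)))

# A cirrus deck displaced 12.42 km between the 46-degree forward
# camera and nadir sits near 12 km altitude.
height = feature_height_km(12.42, 46.0, 0.0)
```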
The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17040. The panels cover an area of about 312 kilometers x 242 kilometers, and use data from blocks 74 to 77 within World Reference System-2 path 207. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Goatman, Keith; Charnley, Amanda; Webster, Laura; Nussey, Stephen
2011-01-01
To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography. Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode, macular-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and the original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone. Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1%, depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All the episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%). Automated grading can safely reduce the workload of manual grading using two-field mydriatic photography in a routine screening service.
Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds
NASA Astrophysics Data System (ADS)
Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.
2008-05-01
CALIPSO and CloudSat, flying in the A-Train, provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort to merge CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on cloud top height and optical thickness, as used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not consider differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed, as will the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products.
Context-aware distributed cloud computing using CloudScheduler
NASA Astrophysics Data System (ADS)
Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.
2017-10-01
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest-running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest instance of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
NASA Technical Reports Server (NTRS)
Viudez-Mora, Antonio; Kato, Seiji
2015-01-01
This work evaluates the multilayer cloud (MCF) algorithm based on CO2-slicing techniques against CALIPSO-CloudSat (CLCS) measurements. The evaluation showed that the MCF algorithm underestimates the presence of multilayered clouds compared with CLCS and is restricted to cloud emissivities below 0.8 and cloud optical depths no larger than 0.3.
A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds
NASA Astrophysics Data System (ADS)
Salvaggio, Katie N.
Geographically accurate scene models have enormous potential beyond simple visualization in regard to automated scene generation. In recent years, thanks to ever-increasing computational efficiency, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud, which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, in areas where multiple views were not obtained during collection, where constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger, more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and a point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. 
Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
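The occupied/free/unsampled classification can be sketched with a simple dense-sampling ray traversal. This is a minimal sketch under stated assumptions: the grid, camera position, and sampling density are illustrative, and an exact voxel-walking traversal (DDA) would replace the sampling loop in practice.

```python
import numpy as np

OCCUPIED, FREE, UNSAMPLED = 2, 1, 0

def classify_voxels(grid_shape, voxel_size, camera, points, samples=200):
    """Classify voxels by line of sight: a voxel holding a
    reconstructed point is occupied; a voxel crossed by the ray from
    the camera to a point must be free space; anything else is
    unsampled and therefore a candidate void."""
    state = np.full(grid_shape, UNSAMPLED, dtype=int)
    for p in points:
        # Mark free space along the camera-to-point ray ...
        for t in np.linspace(0.0, 1.0, samples):
            idx = tuple(((camera + t * (p - camera)) // voxel_size).astype(int))
            if all(0 <= i < s for i, s in zip(idx, grid_shape)):
                state[idx] = max(state[idx], FREE)
        # ... then mark the point's own voxel as occupied.
        idx = tuple((p // voxel_size).astype(int))
        if all(0 <= i < s for i, s in zip(idx, grid_shape)):
            state[idx] = OCCUPIED
    return state

# One camera and one point on the far corner of a 4x4x4 unit-voxel
# grid: the diagonal voxels become free, the corner occupied, and
# everything off the ray stays unsampled (candidate void).
state = classify_voxels((4, 4, 4), 1.0,
                        camera=np.array([0.5, 0.5, 0.5]),
                        points=[np.array([3.5, 3.5, 3.5])])
```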
Machine Learning Algorithms for Automated Satellite Snow and Sea Ice Detection
NASA Astrophysics Data System (ADS)
Bonev, George
The continuous mapping of snow and ice cover, particularly in the Arctic and at the poles, is critical to understanding Earth and atmospheric science. Much of the world's sea ice and snow covers the most inhospitable places, making measurements from satellite-based remote sensors essential. Despite the wealth of data from these instruments, many challenges remain. For instance, remote sensing instruments reside on board different satellites and observe the earth at different portions of the electromagnetic spectrum with different spatial footprints. Integrating and fusing this information to make estimates of the surface is a subject of active research. In response to these challenges, this dissertation presents two algorithms that utilize methods from statistics and machine learning, with the goal of improving on the quality and accuracy of current snow and sea ice detection products. The first algorithm implements snow detection using optical/infrared instrument data. The novelty in this approach is that the classifier is trained using ground station measurements of snow depth that are collocated with the reflectance observed at the satellite. Several classification methods are compared using this training data to identify the one yielding the highest accuracy and optimal space/time complexity. The algorithm is then evaluated against the current operational NASA snow product and is found to produce comparable and in some cases superior accuracy. The second algorithm presents a fully automated approach to sea ice detection that integrates data obtained from passive microwave and optical/infrared satellite instruments. For a particular region of interest the algorithm generates sea ice maps of each individual satellite overpass and then aggregates them to a daily composite level, maximizing the amount of high resolution information available. 
The algorithm is evaluated both at the individual satellite overpass level and at the daily composite level. Results show that at the single-overpass level for clear-sky regions, the developed multi-sensor algorithm performs with accuracy similar to that of the optical/infrared products, with the advantage of also being able to classify partially cloud-obscured regions with the help of passive microwave data. At the daily composite level, results show that the algorithm's performance with respect to total ice extent is in line with that of other daily products, with the novelty of being fully automated and having higher resolution.
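The per-pixel aggregation from overpass maps to a daily composite can be sketched as follows. The label codes and confidence scores are invented for illustration (e.g. clear-sky optical retrievals could carry higher confidence than passive-microwave ones); the dissertation's actual aggregation rules are not reproduced here.

```python
import numpy as np

def daily_composite(labels, confidences):
    """Aggregate per-overpass classification maps into one daily map.

    labels/confidences are lists of equally shaped 2-D arrays, one per
    overpass; each pixel keeps the label from the most confident
    overpass of the day (confidence 0 = cloud-obscured / no coverage).
    """
    L = np.stack([np.asarray(a) for a in labels])
    C = np.stack([np.asarray(a) for a in confidences])
    best = C.argmax(axis=0)                 # winning overpass per pixel
    rows, cols = np.indices(L.shape[1:])
    return L[best, rows, cols]

# Two 2x2 overpasses (1 = ice, 0 = water) with differing confidence;
# each pixel takes the label from its most confident observation.
composite = daily_composite(
    labels=[[[1, 0], [1, 1]], [[0, 1], [0, 1]]],
    confidences=[[[2, 0], [1, 2]], [[1, 2], [2, 1]]])
```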
Oosterwijk, J C; Knepflé, C F; Mesker, W E; Vrolijk, H; Sloos, W C; Pattenier, H; Ravkin, I; van Ommen, G J; Kanhai, H H; Tanke, H J
1998-01-01
This article explores the feasibility of using automated microscopy and image analysis to detect rare fetal nucleated red blood cells (NRBCs) circulating in maternal blood. The rationales for enrichment and for automated image analysis for "rare-event" detection are reviewed. We also describe the application of automated image analysis to 42 maternal blood samples, using a protocol consisting of one-step enrichment followed by immunocytochemical staining for fetal hemoglobin (HbF) and FISH for X- and Y-chromosomal sequences. Automated image analysis consisted of multimode microscopy and subsequent visual evaluation of image memories containing the selected objects. The FISH results were compared with the results of conventional karyotyping of the chorionic villi. Using manual screening, 43% of the slides were found to be positive (>=1 NRBC), with a mean number of 11 NRBCs (range 1-40). With automated microscopy, 52% were positive, with on average 17 NRBCs (range 1-111). There was good correlation between manual and automated screening, but the NRBC yield from automated image analysis was superior to that from manual screening (P=.0443), particularly when the NRBC count was >15. Seven (64%) of 11 XY fetuses were correctly diagnosed by FISH analysis of automatically detected cells, and all discrepancies were restricted to the lower cell-count range. We believe that automated microscopy and image analysis reduce the screening workload, are more sensitive than manual evaluation, and can be used to detect rare HbF-containing NRBCs in maternal blood. PMID:9837832
Classification of large-scale fundus image data sets: a cloud-computing framework.
Roychowdhury, Sohini
2016-08-01
Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance borderline classification performance in automated screening systems.
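The feature-ranking step can be illustrated with a simple correlation-based ranker. This is a sketch under stated assumptions: the synthetic data, the Pearson-correlation criterion, and the top_k choice are all illustrative, and the paper's actual ranking strategies and Azure ML classifiers are not reproduced.

```python
import numpy as np

def rank_features(X, y, top_k=40):
    """Rank columns of X by absolute Pearson correlation with the
    class label y and return the indices of the top_k features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:top_k]

# Synthetic data: only column 2 carries signal about the label,
# so it should be ranked first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 2] > 0).astype(float)
top = rank_features(X, y, top_k=1)
```

Only the selected columns would then be fed to the downstream classifier, which is how the method trades a small amount of accuracy for a large reduction in computation time.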
Automated Detection of HONcode Website Conformity Compared to Manual Detection: An Evaluation.
Boyer, Célia; Dolamic, Ljiljana
2015-06-02
To earn HONcode certification, a website must conform to the 8 principles of the HONcode of Conduct. In the current manual process of certification, a HONcode expert assesses the candidate website using precise guidelines for each principle. In the scope of the European project KHRESMOI, the Health on the Net (HON) Foundation has developed an automated system to assist in detecting a website's HONcode conformity. Automated assistance in conducting HONcode reviews can expedite the current time-consuming tasks of HONcode certification and ongoing surveillance. Additionally, an automated tool used as a plugin to a general search engine might help to detect health websites that respect HONcode principles but have not yet been certified. The goal of this study was to determine whether the automated system is capable of performing as well as human experts for the task of identifying HONcode principles on health websites. Using manual evaluation by HONcode senior experts as a baseline, this study compared the capability of the automated HONcode detection system to that of the HONcode senior experts. A set of 27 health-related websites were manually assessed for compliance to each of the 8 HONcode principles by senior HONcode experts. The same set of websites were processed by the automated system for HONcode compliance detection based on supervised machine learning. The results obtained by these two methods were then compared. For the privacy criterion, the automated system obtained the same results as the human expert for 17 of 27 sites (14 true positives and 3 true negatives) without noise (0 false positives). The remaining 10 false negative instances for the privacy criterion represented tolerable behavior because it is important that all automatically detected principle conformities are accurate (ie, specificity [100%] is preferred over sensitivity [58%] for the privacy criterion). 
In addition, the automated system had precision of at least 75%, with a recall of more than 50% for contact details (100% precision, 69% recall), authority (85% precision, 52% recall), and reference (75% precision, 56% recall). The results also revealed issues for some criteria such as date. Changing the "document" definition (ie, using the sentence instead of whole document as a unit of classification) within the automated system resolved some but not all of them. Study results indicate concordance between automated and expert manual compliance detection for authority, privacy, reference, and contact details. Results also indicate that using the same general parameters for automated detection of each criterion produces suboptimal results. Future work to configure optimal system parameters for each HONcode principle would improve results. The potential utility of integrating automated detection of HONcode conformity into future search engines is also discussed.
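The privacy-criterion figures reported above follow directly from the stated confusion counts (14 true positives, 3 true negatives, 0 false positives, 10 false negatives over 27 sites). A minimal sketch of the arithmetic:

```python
def metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics of the kind reported in the study."""
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    sensitivity = tp / (tp + fn)   # recall
    specificity = tn / (tn + fp)
    return precision, sensitivity, specificity

# Privacy criterion counts for the 27 sites:
p, sens, spec = metrics(tp=14, fp=0, tn=3, fn=10)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=58%, specificity=100%
```

This reproduces the abstract's trade-off: with zero false positives, specificity is 100% while sensitivity is only 58%.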
Automated Detection of HONcode Website Conformity Compared to Manual Detection: An Evaluation
2015-01-01
Background To earn HONcode certification, a website must conform to the 8 principles of the HONcode of Conduct. In the current manual process of certification, a HONcode expert assesses the candidate website using precise guidelines for each principle. In the scope of the European project KHRESMOI, the Health on the Net (HON) Foundation has developed an automated system to assist in detecting a website's HONcode conformity. Automated assistance in conducting HONcode reviews can expedite the current time-consuming tasks of HONcode certification and ongoing surveillance. Additionally, an automated tool used as a plugin to a general search engine might help to detect health websites that respect HONcode principles but have not yet been certified. Objective The goal of this study was to determine whether the automated system is capable of performing as well as human experts in identifying HONcode principles on health websites. Methods Using manual evaluation by HONcode senior experts as a baseline, this study compared the capability of the automated HONcode detection system to that of the HONcode senior experts. A set of 27 health-related websites was manually assessed for compliance with each of the 8 HONcode principles by senior HONcode experts. The same set of websites was processed by the automated system for HONcode compliance detection based on supervised machine learning. The results obtained by these two methods were then compared. Results For the privacy criterion, the automated system obtained the same results as the human expert for 17 of 27 sites (14 true positives and 3 true negatives) without noise (0 false positives). The remaining 10 false negative instances for the privacy criterion represented tolerable behavior because it is important that all automatically detected principle conformities are accurate (ie, specificity [100%] is preferred over sensitivity [58%] for the privacy criterion).
In addition, the automated system had precision of at least 75%, with a recall of more than 50% for contact details (100% precision, 69% recall), authority (85% precision, 52% recall), and reference (75% precision, 56% recall). The results also revealed issues for some criteria such as date. Changing the “document” definition (ie, using the sentence instead of whole document as a unit of classification) within the automated system resolved some but not all of them. Conclusions Study results indicate concordance between automated and expert manual compliance detection for authority, privacy, reference, and contact details. Results also indicate that using the same general parameters for automated detection of each criterion produces suboptimal results. Future work to configure optimal system parameters for each HONcode principle would improve results. The potential utility of integrating automated detection of HONcode conformity into future search engines is also discussed. PMID:26036669
NASA Astrophysics Data System (ADS)
Patel, M. N.; Young, K.; Halling-Brown, M. D.
2018-03-01
The demand for medical images for research is ever increasing owing to the rapid rise in novel machine learning approaches for early detection and diagnosis. The OPTIMAM Medical Image Database (OMI-DB)1,2 was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and expert-determined ground truths. Since the inception of the database in early 2011, the volume of images and associated data collected has dramatically increased owing to automation of the collection pipeline and the inclusion of new sites. Currently, these data are stored at each respective collection site and synced periodically to a central store. This leads to a large data footprint at each site, requiring large physical onsite storage, which is expensive. Here, we propose an update to the OMI-DB collection system, whereby all the data are automatically transferred to the cloud on collection. This change in the data collection paradigm reduces the reliance on physical servers at each site; allows greater scope for future expansion; removes the need for dedicated backups; and improves security. Moreover, as the number of applications seeking access to the data increases rapidly with the maturity of the dataset, cloud technology facilitates faster sharing of data and better auditing of data access. Such updates, although they may sound trivial, require substantial modification to the existing pipeline to ensure data integrity and security compliance. Here, we describe the extensions to the OMI-DB collection pipeline and discuss the relative merits of the new system.
Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces
NASA Astrophysics Data System (ADS)
Theiler, P. W.; Schindler, K.
2012-07-01
Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with the existing algorithm for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.
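The virtual-tie-point idea reduces to a small linear system: three non-parallel planes n_i · x = d_i meet in exactly one point. A minimal sketch (the plane parameters and the degeneracy threshold are illustrative, not taken from the paper):

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Intersect three planes n_i . x = d_i to obtain a virtual tie point.
    Returns None when the planes are near-parallel (ill-conditioned system)."""
    N = np.asarray(normals, dtype=float)   # 3x3 matrix, one unit normal per row
    d = np.asarray(offsets, dtype=float)
    if abs(np.linalg.det(N)) < 1e-6:       # degenerate plane configuration
        return None
    return np.linalg.solve(N, d)

# A floor and two walls (mutually orthogonal planes) meet at a corner point:
p = plane_intersection([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [2.0, 3.0, 1.5])
print(p)  # [2.  3.  1.5]
```

In practice the plane parameters would come from RANSAC detection followed by least-squares fitting over the supporting points, which is what makes the resulting tie points so well defined.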
Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Suárez-Albela, Manuel; Díaz-Bouza, Manuel A
2018-06-17
Pipes are one of the key elements in the construction of ships, which usually contain between 15,000 and 40,000 of them. This huge number, as well as the variety of processes that may be performed on a pipe, require rigorous identification, quality assessment and traceability. Traditionally, such tasks have been carried out by using manual procedures and following documentation on paper, which slows down the production processes and reduces the output of a pipe workshop. This article presents a system that allows for identifying and tracking the pipes of a ship through their construction cycle. For such a purpose, a fog computing architecture is proposed to extend cloud computing to the edge of the shipyard network. The system has been developed jointly by Navantia, one of the largest shipbuilders in the world, and the University of A Coruña (Spain), through a project that makes use of some of the latest Industry 4.0 technologies. Specifically, a Cyber-Physical System (CPS) is described, which uses active Radio Frequency Identification (RFID) tags to track pipes and detect relevant events. Furthermore, the CPS has been integrated and tested in conjunction with Siemens' Manufacturing Execution System (MES) (Simatic IT). The experiments performed on the CPS show that, in the selected real-world scenarios, fog gateways respond faster than the tested cloud server, with such gateways also able to successfully process more samples under high-load situations. In addition, under regular loads, fog gateways react between five and 481 times faster than the alternative cloud approach.
NASA Astrophysics Data System (ADS)
Shaw, J. A.; Nugent, P. W.
2016-12-01
Ground-based longwave-infrared (LWIR) cloud imaging can provide continuous cloud measurements in the Arctic. This is of particular importance during the Arctic winter, when visible-wavelength cloud imaging systems cannot operate. This method uses a thermal infrared camera to observe clouds and produce measurements of cloud amount and cloud optical depth. The Montana State University Optical Remote Sensor Laboratory deployed an infrared cloud imager (ICI) at the Atmospheric Radiation Measurement North Slope of Alaska site at Barrow, AK from July 2012 through July 2014. This study was used both to understand the long-term operation of an ICI in the Arctic and to study the consistency of the ICI data products in relation to co-located active and passive sensors. The ICI was found to have a high correlation (> 0.92) with collocated cloud instruments and to produce an unbiased data product. However, the ICI also detects thin clouds that are not detected by most operational cloud sensors. Comparisons with high-sensitivity actively sensed cloud products confirm the existence of these thin clouds. Infrared cloud imaging systems can serve a critical role in developing our understanding of cloud cover in the Arctic by providing a continuous annual measurement of clouds at sites of interest.
Large and Small Magellanic Clouds age-metallicity relationships
NASA Astrophysics Data System (ADS)
Perren, G. I.; Piatti, A. E.; Vázquez, R. A.
2017-10-01
We present a new determination of the age-metallicity relation for both Magellanic Clouds, estimated through the homogeneous analysis of 239 observed star clusters. All clusters in our set were observed with the filters of the Washington photometric system. The Automated Stellar cluster Analysis package (ASteCA) was employed to derive each cluster's fundamental parameters, in particular its age and metallicity, through an unassisted process. We find that our age-metallicity relations (AMRs) cannot be fully matched to any of the estimations found in twelve previous works, and are better explained by a combination of several of them in different age intervals.
NASA Wrangler: Automated Cloud-Based Data Assembly in the RECOVER Wildfire Decision Support System
NASA Technical Reports Server (NTRS)
Schnase, John; Carroll, Mark; Gill, Roger; Wooten, Margaret; Weber, Keith; Blair, Kindra; May, Jeffrey; Toombs, William
2017-01-01
NASA Wrangler is a loosely-coupled, event-driven, highly parallel data aggregation service designed to take advantage of the elastic resource capabilities of cloud computing. Wrangler automatically collects Earth observational data, climate model outputs, derived remote sensing data products, and historic biophysical data for pre-, active-, and post-wildfire decision making. It is a core service of the RECOVER decision support system, which is providing rapid-response GIS analytic capabilities to state and local government agencies. Wrangler reduces to minutes the time needed to assemble and deliver crucial wildfire-related data.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to the CPU-only detection algorithm.
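At its core, the beat-classification step described above is a nearest-neighbour majority vote over beat feature vectors. A serial NumPy sketch (the feature vectors and class labels are invented for illustration; the paper's CUDA version parallelizes the distance computation across GPU threads):

```python
import numpy as np

def knn_classify(train_feats, train_labels, beat, k=3):
    """Classify one heartbeat feature vector by majority vote among its
    k nearest training beats under Euclidean distance."""
    d = np.linalg.norm(train_feats - beat, axis=1)   # distance to every training beat
    nearest = train_labels[np.argsort(d)[:k]]        # labels of the k closest beats
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]                   # majority label

# Toy 2-D features: two "normal" beats and two premature ventricular beats.
train = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
labels = np.array(["normal", "normal", "PVC", "PVC"])
print(knn_classify(train, labels, np.array([0.05, 0.05])))  # normal
```

Because each test beat's distances are independent, this loop-free formulation maps naturally onto a per-thread GPU kernel, which is the source of the reported speedup.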
Green Bank Telescope Detection of HI Clouds in the Fermi Bubble Wind
NASA Astrophysics Data System (ADS)
Lockman, Felix; Di Teodoro, Enrico M.; McClure-Griffiths, Naomi M.
2018-01-01
We used the Robert C. Byrd Green Bank Telescope to map HI 21cm emission in two large regions around the Galactic Center in a search for HI clouds that might be entrained in the nuclear wind that created the Fermi bubbles. In a ~160 square degree region at |b|>4 deg. and |long|<10 deg we detect 106 HI clouds that have large non-circular velocities consistent with their acceleration by the nuclear wind. Rapidly moving clouds are found as far as 1.5 kpc from the center; there are no detectable asymmetries in the cloud populations above and below the Galactic Center. The cloud kinematics is modeled as a population with an outflow velocity of 330 km/s that fills a cone with an opening angle ~140 degrees. The total mass in the clouds is ~10^6 solar masses and we estimate cloud lifetimes to be between 2 and 8 Myr, implying a cold gas mass-loss rate of about 0.1 solar masses per year into the nuclear wind. The Green Bank Telescope is a facility of the National Science Foundation, operated under a cooperative agreement by Associated Universities, Inc.
3D cloud detection and tracking system for solar forecast using multiple sky imagers
Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...
2015-06-23
We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
Using sky radiances measured by ground based AERONET Sun-Radiometers for cirrus cloud detection
NASA Astrophysics Data System (ADS)
Sinyuk, A.; Holben, B. N.; Eck, T. F.; Slutsker, I.; Lewis, J. R.
2013-12-01
Screening of cirrus clouds using observations of optical depth (OD) only has proven to be a difficult task, mostly because some clouds have temporally and spatially stable OD. On the other hand, the sky radiance measurements, which in the AERONET protocol are taken throughout the day, may contain additional cloud information. In this work the potential of using sky radiances for cirrus cloud detection is investigated. The detection is based on differences in the angular shape of sky radiances due to cirrus clouds and aerosol (see Figure). The range of scattering angles from 3 to 6 degrees was selected for two primary reasons: high sensitivity to the presence of cirrus clouds, and close proximity to the Sun. The angular shape of sky radiances was parametrized by its curvature, defined as a combination of the first and second derivatives with respect to scattering angle. We demonstrate that the slope of the logarithm of curvature versus the logarithm of scattering angle in this selected range is sensitive to cirrus cloud presence. We also demonstrate that restricting the values of the slope below some threshold value can be used for cirrus cloud screening. The threshold value of the slope was estimated using collocated measurements from AERONET and MPLNET lidars.
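The curvature-slope test can be sketched in a few lines. Since the abstract does not give the exact combination of derivatives used, the standard plane-curve curvature is assumed here; the 3-6 degree window follows the text:

```python
import numpy as np

def curvature_slope(angles_deg, radiance):
    """Slope of log(curvature) vs log(scattering angle) over 3-6 degrees.
    The curvature combining first and second derivatives is assumed to be
    the standard plane-curve form; the operational definition may differ."""
    r1 = np.gradient(radiance, angles_deg)        # first derivative of radiance
    r2 = np.gradient(r1, angles_deg)              # second derivative
    curv = np.abs(r2) / (1.0 + r1 ** 2) ** 1.5    # assumed curvature form
    # least-squares slope in log-log space within the selected angular window
    m = (angles_deg >= 3) & (angles_deg <= 6) & (curv > 0)
    return np.polyfit(np.log(angles_deg[m]), np.log(curv[m]), 1)[0]

# Synthetic power-law radiance: a smooth, aerosol-like angular shape.
angles = np.linspace(3.0, 6.0, 200)
print(curvature_slope(angles, angles ** -2.0))
```

A cirrus-screening rule would then compare this slope against an empirically tuned threshold, flagging observations whose slope exceeds it.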
Understanding reliance on automation: effects of error type, error distribution, age and experience
Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka
2015-01-01
An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142
Understanding reliance on automation: effects of error type, error distribution, age and experience.
Sanchez, Julian; Rogers, Wendy A; Fisk, Arthur D; Rovira, Ericka
2014-03-01
An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.
Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) Cost-Benefit Analysis
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2008-01-01
Lightning Launch Commit Criteria (LLCC) are designed to prevent space launch vehicles from flight through environments conducive to natural or triggered lightning and are used for all U.S. government and commercial launches at government and civilian ranges. They are maintained by a committee known as the NASA/USAF Lightning Advisory Panel (LAP). The previous LLCC for anvil clouds, meant to avoid triggered lightning, had been shown to be overly restrictive. Some of these rules have had such high safety margins that they prohibited flight under conditions that are now thought to be safe 90% of the time, leading to costly launch delays and scrubs. The LLCC for anvil clouds were upgraded in the summer of 2005 to incorporate results from the Airborne Field Mill (ABFM) experiment at the Eastern Range (ER). Numerous combinations of parameters were considered to develop the best correlation of operational weather observations to in-cloud electric fields capable of rocket-triggered lightning in anvil clouds. The Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) was the best metric found. Dr. Harry Koons of Aerospace Corporation conducted a risk analysis of the VAHIRR product. The results indicated that the LLCC based on the VAHIRR product would pose a negligible risk of flying through hazardous electric fields. Based on these findings, the Kennedy Space Center Weather Office is considering seeking funding for development of an automated VAHIRR algorithm for the new ER 45th Weather Squadron (45 WS) RadTec 431250 weather radar and Weather Surveillance Radar-1988 Doppler (WSR-88D) radars. Before developing an automated algorithm, the Applied Meteorology Unit (AMU) was tasked to determine the frequency with which VAHIRR would have allowed a launch to safely proceed during weather conditions otherwise deemed "red" by the Launch Weather Officer.
To do this, the AMU manually calculated VAHIRR values based on candidate cases from past launches with known anvil cloud LLCC violations. An automated algorithm may be developed if the analyses from past launches show VAHIRR would have provided a significant cost benefit by allowing a launch to proceed. The 45 WS at the ER and 30th Weather Squadron (30 WS) at the Western Range provided the AMU with launch weather summaries from past launches that were impacted by LLCC. The 45 WS provided summaries from 14 launch attempts and the 30 WS from 5. The launch attempts occurred between December 2001 and June 2007. These summaries helped the AMU determine when the LLCC were "red" due to anvil cloud. The AMU collected WSR-88D radar reflectivity, cloud-to-ground lightning strikes, soundings and satellite imagery. The AMU used step-by-step instructions for calculating VAHIRR manually as provided by the 45 WS. These instructions were used for all of the candidate cases when anvil cloud caused an LLCC violation identified in the launch weather summaries. The AMU evaluated several software programs capable of visualizing radar data so that VAHIRR could be calculated and chose GR2Analyst from Gibson Ridge Software, LLC. Data availability and lack of detail from some launch weather summaries permitted analysis of six launch attempts from the ER and none from the WR. The AMU did not take into account whether or not other weather LCC violations were occurring at the same time as the anvil cloud LLCC, since the goal of this task was to determine how often VAHIRR provided relief to the anvil cloud LLCC at any time during several previous launch attempts. Therefore, in the statistics presented in this report, it is possible that even though VAHIRR provided relief to the anvil cloud LLCC, other weather LCC could have been violated, thus not permitting the launch to proceed.
The results of this cost-benefit analysis indicated VAHIRR provided relief from the anvil cloud LLCC between about 15% and 18% of the time for varying 5-minute time periods, based on summaries from six launch attempts, and would have allowed launches to proceed that were otherwise "NO GO" due to the anvil cloud LLCC if the T-0 time occurred during the anvil cloud LLCC violations.
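As a rough illustration of the kind of quantity the AMU computed manually, a simplified VAHIRR-style calculation (volume-averaged reflectivity multiplied by average cloud thickness over a radar grid) might look like the following. This is an interpretive sketch under stated assumptions, not the operational 45 WS definition:

```python
import numpy as np

def vahirr_sketch(refl_dbz, cell_height_km):
    """Simplified VAHIRR-style metric over a 3-D reflectivity grid.
    refl_dbz: array of shape (nz, ny, nx); NaN marks below-threshold cells.
    cell_height_km: vertical extent of one grid cell in km.
    Returns (volume-averaged dBZ) * (average cloud thickness in km),
    an illustrative reading of the metric, not the certified algorithm."""
    cloudy = ~np.isnan(refl_dbz)
    if not cloudy.any():
        return 0.0
    avg_z = refl_dbz[cloudy].mean()                    # volume-averaged reflectivity
    thickness = cloudy.sum(axis=0) * cell_height_km    # cloud depth in each column
    avg_thick = thickness[thickness > 0].mean()        # mean depth of cloudy columns
    return avg_z * avg_thick                           # dBZ * km

# One cloudy column, 2 km deep, uniform 10 dBZ, in a 4-level grid:
refl = np.full((4, 2, 2), np.nan)
refl[0:2, 0, 0] = 10.0
print(vahirr_sketch(refl, cell_height_km=1.0))  # 20.0
```

The point of the example is only the structure of the computation: averaging reflectivity over a cloud volume and weighting it by cloud depth, which is why thin or weak anvil remnants score low and can "green" an otherwise red rule.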
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Zhe; McFarlane, Sally A.; Schumacher, Courtney
2014-05-16
To improve understanding of the convective processes key to Madden-Julian Oscillation (MJO) initiation, the Dynamics of the MJO (DYNAMO) and Atmospheric Radiation Measurement MJO Investigation Experiment (AMIE) collected four months of observations from three radars, the S-band Polarization Radar (S-Pol), the C-band Shared Mobile Atmospheric Research & Teaching Radar (SMART-R), and the Ka-band Zenith Radar (KAZR), on Addu Atoll in the tropical Indian Ocean. This study compares the measurements from S-Pol and SMART-R to those from the more sensitive KAZR in order to characterize the hydrometeor detection capabilities of the two scanning precipitation radars. Frequency comparisons for precipitating convective clouds and non-precipitating high clouds agree much better than those for non-precipitating low clouds for both scanning radars, due to ground clutter issues. On average, SMART-R underestimates convective and high cloud tops by 0.3 to 1.1 km, while S-Pol underestimates cloud tops by less than 0.4 km for these cloud types. S-Pol shows excellent dynamic range in detecting various types of clouds and therefore its data are well suited for characterizing the evolution of the 3D cloud structures, complementing the profiling KAZR measurements. For detecting non-precipitating low clouds and thin cirrus clouds, KAZR remains the most reliable instrument. However, KAZR is attenuated in heavy precipitation and underestimates cloud top height due to rainfall attenuation 4.3% of the time during DYNAMO/AMIE. An empirical method to correct the KAZR cloud top heights is described, and a merged radar dataset is produced to provide improved cloud boundary estimates, microphysics and radiative heating retrievals.
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-10-01
A threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the Meteosat SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures that cannot be detected by the low-resolution channels. We find that it is advantageous to apply thresholds relative to clear-sky reflectance composites, and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding and the suitability of the HRV channel for cloud detection are investigated. The case studies show different situations to demonstrate the behavior for various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test data set, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction. The HRV cloud mask targets small-scale convective sub-pixel clouds that are missed by the EUMETSAT cloud mask. The major limitation of the HRV cloud mask is the minimum cloud optical thickness (COT) that can be detected. This threshold COT was found to be about 0.8 over ocean and 2 over land, and is strongly related to the albedo of the underlying surface.
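The relative-threshold test described above can be sketched in a few lines; the offset value and reflectances here are illustrative, not EUMETSAT's tuned thresholds:

```python
import numpy as np

def hrv_cloud_mask(hrv_refl, clear_sky_composite, offset=0.1):
    """Pixel is flagged cloudy when its HRV reflectance exceeds the
    clear-sky composite by a regionally tuned offset. The offset value
    is a placeholder for illustration."""
    return hrv_refl > clear_sky_composite + offset

# A 2x2 scene over a uniform 0.10-reflectance clear-sky composite:
scene = np.array([[0.10, 0.45],
                  [0.12, 0.80]])
clear = np.full((2, 2), 0.10)
print(hrv_cloud_mask(scene, clear).tolist())  # [[False, True], [False, True]]
```

Thresholding against a per-pixel clear-sky composite rather than a global constant is what lets the mask absorb surface-albedo variation, at the cost of missing clouds thinner than the minimum detectable COT.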
Detection and Retrieval of Multi-Layered Cloud Properties Using Satellite Data
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jian-Ping; Nguyen, Louis; Khaiyer, Mandana M.
2005-01-01
Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.
Detection and retrieval of multi-layered cloud properties using satellite data
NASA Astrophysics Data System (ADS)
Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jianping; Nguyen, Louis; Khaiyer, Mandana M.
2005-10-01
Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.
Load-Differential Features for Automated Detection of Fatigue Cracks Using Guided Waves (Preprint)
2011-11-01
AFRL-RX-WP-TP-2011-4363 (contract FA8650-09-C-5206). Tensile loads open fatigue cracks and thus enhance their detectability using ultrasonic methods. Here we introduce a class of load-differential methods for the automated detection of fatigue cracks using guided waves.
Measurements of 12C ions beam fragmentation at large angle with an Emulsion Cloud Chamber
NASA Astrophysics Data System (ADS)
Alexandrov, A.; De Lellis, G.; Di Crescenzo, A.; Lauria, A.; Montesi, M. C.; Pastore, A.; Patera, V.; Sarti, A.; Tioukov, V.
2017-08-01
Hadron radiotherapy is a powerful technique for the treatment of deep-seated tumours. The physical dose distribution of hadron beams is characterized by a small dose delivered in the entrance channel and a large dose in the Bragg peak area. Fragmentation of the incident particles and struck nuclei occurs along the hadron path. Knowledge of the fragment energies and angular distributions is crucial for the validation of the models used in treatment planning systems. We report on large angle fragmentation measurements of a 400 MeV/n 12C beam impinging on a composite target at the GSI laboratory in Germany. The detector was made of 300 micron thick nuclear emulsion films, with sub-micrometric spatial resolution and large angle track detection capability, interleaved with passive material. Thanks to newly developed techniques in the automated scanning of emulsions it was possible to extend the angular range of detected particles. This resulted in the first measurement of the angular and momentum spectrum for fragments emitted in the range from 34° to 81°.
Automated Detection of Thermo-Erosion in High Latitude Ecosystems
NASA Astrophysics Data System (ADS)
Lara, M. J.; Chipman, M. L.; Hu, F.
2017-12-01
Detecting permafrost disturbance is of critical importance as the severity of climate change and the associated increase in wildfire frequency and magnitude impact regional to global carbon dynamics. However, it has not been possible to evaluate spatiotemporal patterns of permafrost degradation over large regions of the Arctic, due to the limited spatial and temporal coverage of high-resolution optical, radar, lidar, or hyperspectral remote sensing products. Here we present the first automated multi-temporal analysis for detecting disturbance in response to permafrost thaw, using meso-scale high-frequency remote sensing products (i.e. the entire Landsat image archive). This approach was developed, tested, and applied in the Noatak National Preserve (26,500 km2) in northwestern Alaska. We identified thermo-erosion (TE) by capturing the indirect spectral signal associated with episodic sediment plumes in adjacent waterbodies following TE disturbance. We isolated this turbidity signal within lakes during summer (mid-summer & late-summer) and annual time-period image composites (1986-2016), using the cloud-based geospatial parallel processing platform, Google Earth Engine™ API. We validated the TE detection algorithm using seven consecutive years of sub-meter high resolution imagery (2009-2015) covering 798 (~33%) of the 2456 total lakes in the Noatak lowlands. Our approach had "good agreement" with sediment pulses and landscape deformation in response to permafrost thaw (overall accuracy and kappa coefficient of 85% and 0.61). We find active TE impacting 10.4% of all lakes, but activity was inter-annually variable, with the highest and lowest TE years represented by 1986 (~41.1%) and 2002 (~0.7%), respectively. We estimate that thaw slumps, lake erosion, lake drainage, and gully formation account for 23.3%, 61.8%, 12.5%, and 1.3% of all active TE across the Noatak National Preserve.
Preliminary analysis suggests that TE may be subject to a hysteresis effect following extreme climatic conditions or wildfire. This work demonstrates the utility of meso-scale high-frequency remote sensing products for advancing high-latitude permafrost research.
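The overall accuracy (85%) and kappa coefficient (0.61) quoted above are standard agreement measures computed from a confusion matrix of detections against validation labels. A sketch of that computation; the counts below are hypothetical, not the study's actual validation tallies:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference labels, columns = detected labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical lake-level validation counts (TE vs. no-TE).
cm = [[40, 10],
      [ 5, 45]]
acc, kappa = accuracy_and_kappa(cm)  # -> 0.85 accuracy, 0.70 kappa
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy for imbalanced detection problems like this one.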
NASA Astrophysics Data System (ADS)
de Laat, Adrianus; Defer, Eric; Delanoë, Julien; Dezitter, Fabien; Gounou, Amanda; Grandin, Alice; Guignard, Anthony; Fokke Meirink, Jan; Moisselin, Jean-Marc; Parol, Frédéric
2017-04-01
We present an evaluation of the ability of passive broadband geostationary satellite measurements to detect high ice water content (IWC > 1 g m-3) as part of the European High Altitude Ice Crystals (HAIC) project for detection of upper-atmospheric high IWC, which can be a hazard for aviation. We developed a high IWC mask based on measurements of cloud properties using the Cloud Physical Properties (CPP) algorithm applied to the geostationary Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Evaluation of the high IWC mask with satellite measurements of active remote sensors of cloud properties (CLOUDSAT/CALIPSO combined in the DARDAR (raDAR-liDAR) product) reveals that the high IWC mask is capable of detecting high IWC values > 1 g m-3 in the DARDAR profiles with a probability of detection of 60-80 %. The best CPP predictors of high IWC were the condensed water path, cloud optical thickness, cloud phase, and cloud top height. The evaluation of the high IWC mask against DARDAR provided indications that the MSG-CPP high IWC mask is more sensitive to cloud ice or cloud water in the upper part of the cloud, which is relevant for aviation purposes. Biases in the CPP results were also identified, in particular a solar zenith angle (SZA) dependence that reduces the performance of the high IWC mask for SZAs > 60°. Verification statistics show that for the detection of high IWC a trade-off has to be made between better detection of high IWC scenes and more false detections, i.e., scenes identified by the high IWC mask that do not contain IWC > 1 g m-3. However, the large majority of these detections still contain IWC values between 0.1 and 1 g m-3. 
Comparison of the high IWC mask against results from the Rapidly Developing Thunderstorm (RDT) algorithm applied to the same geostationary SEVIRI data showed that there are similarities and differences with the high IWC mask: the RDT algorithm is very capable of detecting young/new convective cells and areas, whereas the high IWC mask appears to be better capable of detecting more mature and ageing convection as well as cirrus remnants. The lack of detailed understanding of what causes aviation hazards related to high IWC, as well as the lack of clearly defined user requirements, hampers further tuning of the high IWC mask. Future evaluation of the high IWC mask against field campaign data, as well as obtaining user feedback and user requirements from the aviation industry, should provide more information on the performance of the MSG-CPP high IWC mask and contribute to improving the practical use of the high IWC mask.
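The probability of detection (60-80%) and the false detections discussed above form a standard verification pair: POD and the false alarm ratio. A sketch with toy binary masks (the arrays and the 1 g m-3 framing in the comments are illustrative only):

```python
import numpy as np

def verification_stats(detected, observed):
    """Probability of detection (POD) and false alarm ratio (FAR)
    for a binary detection mask against binary truth."""
    detected = np.asarray(detected, bool)
    observed = np.asarray(observed, bool)
    hits = np.sum(detected & observed)
    misses = np.sum(~detected & observed)
    false_alarms = np.sum(detected & ~observed)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return float(pod), float(far)

# Toy scene: the mask flags 5 pixels; 4 truly exceed the high-IWC
# threshold (1 g/m^3), one does not, and one true case is missed.
mask  = np.array([1, 1, 1, 1, 1, 0, 0, 0])
truth = np.array([1, 1, 1, 1, 0, 1, 0, 0])
pod, far = verification_stats(mask, truth)  # -> 0.8, 0.2
```

Tuning a mask threshold moves along exactly this trade-off: loosening it raises POD but also raises FAR, which is the compromise the abstract describes.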
Global Measurements of Optically Thin Ice Clouds Using CALIOP
NASA Technical Reports Server (NTRS)
Ryan, R.; Avery, M.; Tackett, J.
2017-01-01
Optically thin ice clouds have been shown to have a net warming effect on the globe, but because passive instruments are not sensitive to optically thin clouds, the occurrence frequency of this class of clouds is greatly underestimated in historical passive-sensor cloud climatologies. One major strength of CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization), onboard the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) spacecraft, is its ability to detect these thin clouds, thus filling an important missing piece in the historical data record. This poster examines the full mission of CALIPSO Level 2 data, focusing on those CALIOP retrievals identified as thin ice clouds according to the definition shown to the right. Using this definition, thin ice clouds are identified and counted globally and vertically for each season. By examining the spatial and seasonal distributions of these thin clouds we hope to gain a better understanding of these thin ice clouds and how their global distribution has changed over the mission. This poster showcases when and where CALIOP detects thin ice clouds and examines a case study of the eastern Pacific and the effects seen from the El Niño-Southern Oscillation (ENSO).
The response of the Seasat and Magsat infrared horizon scanners to cold clouds
NASA Technical Reports Server (NTRS)
Bilanow, S.; Phenneger, M.
1980-01-01
Cold clouds over the Earth are shown to be the principal cause of pitch and roll measurement noise in flight data from the infrared horizon scanners onboard Seasat and Magsat. The observed effects of clouds on the fixed threshold horizon detection logic of the Magsat scanner and on the variable threshold detection logic of the Seasat scanner are discussed. National Oceanic and Atmospheric Administration (NOAA) Earth photographs marked with the scanner ground trace clearly confirm the relationship between measurement errors and Earth clouds. A one-to-one correspondence can be seen between excursions in the pitch and roll data and cloud crossings. The characteristics of the cloud-induced noise are discussed, and the response of the satellite control systems to the cloud errors is described. Changes to the horizon scanner designs that would reduce the effects of clouds are noted.
CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.
Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian
2017-04-27
The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data up- and download, pipeline configuration and monitoring, and access to Sybil are managed through CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. Representative datasets (e.g. 40 draft and complete Escherichia coli genomes) are processed in <36 h on a local desktop or at a cost of <$20 on EC2. CloVR-Comparative allows anybody with Internet access to run comparative genomics projects, while eliminating the need for on-site computational resources and expertise.
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
Cloud Impacts on Pavement Temperature in Energy Balance Models
NASA Astrophysics Data System (ADS)
Walker, C. L.
2013-12-01
Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed relating shortwave radiation variability with road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.
The first observed cloud echoes and microphysical parameter retrievals by China's 94-GHz cloud radar
NASA Astrophysics Data System (ADS)
Wu, Juxiu; Wei, Ming; Hang, Xin; Zhou, Jie; Zhang, Peichang; Li, Nan
2014-06-01
By using the cloud echoes first successfully observed by China's indigenous 94-GHz SKY cloud radar, the macrostructure and microphysical properties of drizzling stratocumulus clouds in Anhui Province on 8 June 2013 are analyzed, and the detection capability of this cloud radar is discussed. The results are as follows. (1) The cloud radar is able to observe the time-varying macroscopic and microphysical parameters of clouds, and it can reveal the microscopic structure and small-scale changes of clouds. (2) The velocity spectral width of cloud droplets is small, but the spectral width of the cloud containing both cloud droplets and drizzle is large. When the spectral width is more than 0.4 m s-1, the radar reflectivity factor is larger (over -10 dBZ). (3) The radar's sensitivity is comparatively high because the minimum radar reflectivity factor is about -35 dBZ in this experiment, which exceeds the threshold for detecting the linear depolarized ratio (LDR) of stratocumulus (commonly -11 to -14 dBZ; decreases with increasing turbulence). (4) After distinguishing cloud droplets from drizzle, cloud liquid water content and particle effective radius are retrieved. The liquid water content of drizzle is lower than that of cloud droplets at the same radar reflectivity factor.
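The spectral-width criterion in point (2) amounts to a simple two-threshold rule. A sketch using the thresholds quoted above (0.4 m s-1 spectral width, -10 dBZ reflectivity); the function name and return labels are invented, and a real classifier would of course use more than two features:

```python
def classify_radar_bin(spectral_width, reflectivity_dbz):
    """Heuristic echo classification: wide Doppler spectra together
    with stronger reflectivity suggest drizzle mixed with cloud
    droplets; narrow, weak echoes suggest cloud droplets alone."""
    if spectral_width > 0.4 and reflectivity_dbz > -10.0:
        return "cloud + drizzle"
    return "cloud droplets"

label = classify_radar_bin(0.6, -5.0)  # wide spectrum, strong echo
```

Applied gate by gate down a radar profile, such a rule is the kind of droplet/drizzle separation that precedes the liquid water content retrieval in step (4).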
Detection of nitric oxide in the dark cloud L134N
NASA Technical Reports Server (NTRS)
Mcgonagle, D.; Irvine, W. M.; Minh, Y. C.; Ziurys, L. M.
1990-01-01
The first detection of interstellar nitric oxide (NO) in a cold dark cloud, L134N, is reported. Nitric oxide was observed by means of its two 2Π1/2, J = 3/2 - 1/2 rotational transitions at 150.2 and 150.5 GHz, which occur because of Lambda-doubling. The inferred column density for L134N is about 5 x 10^14 cm^-2 toward the SO peak in that cloud. This value corresponds to a fractional abundance relative to molecular hydrogen of about 6 x 10^-8 and is in good agreement with predictions of quiescent cloud ion-molecule chemistry. NO was not detected toward the dark cloud TMC-1 at an upper limit of 3 x 10^-8 or less.
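The fractional abundance quoted above is just the ratio of the NO column density to the H2 column density. The H2 value below is back-derived from the abstract's own numbers (5 x 10^14 / 6 x 10^-8), not an independently reported measurement:

```python
# Fractional abundance relative to H2 is a ratio of column densities.
N_NO = 5.0e14   # cm^-2, NO column density toward the SO peak in L134N
N_H2 = 8.3e21   # cm^-2, assumed H2 column density (implied by the
                # abstract's abundance of ~6e-8; illustrative only)

x_NO = N_NO / N_H2   # fractional abundance, ~6e-8
```

The same arithmetic applied to the TMC-1 non-detection gives the quoted upper limit of 3 x 10^-8 or less.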
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing previous studies on the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospitals. Simulation validity was confirmed against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data volume of more than 230K sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow involves typically different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually require different applications. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI performs as a manager of configurations and algorithms, taking advantage of the command line modes of existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References Girardeau-Montaut, D. 2015. CloudCompare documentation accessed at http://cloudcompare.org/ Wu, C. 2015. 
VisualSFM documentation accessed at http://ccwu.me/vsfm/doc.html#.
Detection and monitoring of H2O and CO2 ice clouds on Mars
Bell, J.F.; Calvin, W.M.; Ockert-Bell, M. E.; Crisp, D.; Pollack, James B.; Spencer, J.
1996-01-01
We have developed an observational scheme for the detection and discrimination of Mars atmospheric H2O and CO2 clouds using ground-based instruments in the near infrared. We report the results of our cloud detection and characterization study using Mars near-IR images obtained during the 1990 and 1993 oppositions. We focused on specific wavelengths that have the potential, based on previous laboratory studies of H2O and CO2 ices, of yielding the greatest degree of cloud detectability and compositional discriminability. We have detected and mapped absorption features at some of these wavelengths in both the northern and southern polar regions of Mars. Compositional information on the nature of these absorption features was derived from comparisons with laboratory ice spectra and with a simplified radiative transfer model of a CO2 ice cloud overlying a bright surface. Our results indicate that both H2O and CO2 ices can be detected and distinguished in the polar hood clouds. The region near 3.00 μm is most useful for the detection of water ice clouds because there is a strong H2O ice absorption at this wavelength but only a weak CO2 ice band. The region near 3.33 μm is most useful for the detection of CO2 ice clouds because there is a strong, relatively narrow CO2 ice band at this wavelength but only broad "continuum" H2O ice absorption. Weaker features near 2.30 μm could arise from CO2 ice at coarse grain sizes, or surface/dust minerals. Narrow features near 2.00 μm, which could potentially be very diagnostic of CO2 ice clouds, suffer from contamination by Mars atmospheric CO2 absorptions and are difficult to interpret because of the rather poor knowledge of surface elevation at high latitudes.
These results indicate that future ground-based, Earth-orbital, and spacecraft studies over a more extended span of the seasonal cycle should yield substantial information on the style and timing of volatile transport on Mars, as well as a more detailed understanding of the role of CO2 condensation in the polar heat budget. Copyright 1996 by the American Geophysical Union.
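The discrimination logic above (deep absorption near 3.00 μm for H2O ice versus a narrow band near 3.33 μm for CO2 ice) can be sketched as a simple band-depth test. The reflectance values, continuum level, and threshold below are illustrative, not values from the study:

```python
def band_depth(continuum, band):
    """Relative absorption depth: 0 = no band, 1 = total absorption."""
    return 1.0 - band / continuum

def classify_ice_cloud(r_3_00, r_3_33, continuum, threshold=0.2):
    """Toy discriminator after the abstract's logic: a deep 3.00-um
    absorption flags H2O ice, a deep 3.33-um band flags CO2 ice.
    Inputs are band reflectances and a nearby continuum reflectance."""
    labels = []
    if band_depth(continuum, r_3_00) > threshold:
        labels.append("H2O ice")
    if band_depth(continuum, r_3_33) > threshold:
        labels.append("CO2 ice")
    return labels or ["no ice detected"]

# Deep 3.00-um band, shallow 3.33-um band -> water ice cloud.
labels = classify_ice_cloud(r_3_00=0.10, r_3_33=0.28, continuum=0.30)
```

A real analysis would compare measured band depths against laboratory ice spectra rather than a single fixed threshold, as the abstract describes.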
Near-Real-Time Detection and Monitoring of Intense Pyroconvection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Peterson, D. A.; Fromm, M. D.; Hyer, E. J.; Surratt, M. L.; Solbrig, J. E.; Campbell, J. R.
2016-12-01
Intense fire-triggered thunderstorms, known as pyrocumulonimbus (or pyroCb), can alter fire behavior, influence smoke plume trajectories, and hinder fire suppression efforts. PyroCb are also known for injecting a significant quantity of aerosol mass into the upper-troposphere and lower-stratosphere (UTLS). Near-real-time (NRT) detection and monitoring of pyroCb is highly desirable for a variety of forecasting and research applications. The Naval Research Laboratory (NRL) recently developed the first automated NRT pyroCb detection algorithm for geostationary satellite sensors. The algorithm uses multispectral infrared observations to isolate deep convective clouds with the distinct microphysical signal of pyroCb. Application of this algorithm to 88 intense wildfires observed during the 2013 fire season in western North America resulted in detection of individual intense events, pyroCb embedded within traditional convection, and multiple, short-lived pulses of activity. Comparisons with a community inventory indicate that this algorithm captures the majority of pyroCb. The primary limitation of the current system is that pyroCb anvils can be small relative to satellite pixel size, especially in regions with large viewing angles. The algorithm is also sensitive to some false positives from traditional convection that either ingests smoke or exhibits extreme updraft velocities. This algorithm has been automated using the GeoIPS processing system developed at NRL, which produces a variety of imagery products and statistical output for rapid analysis of potential pyroCb events. NRT application of this algorithm has been extended to the majority of regions worldwide known to have a high frequency of pyroCb occurrence. This involves a constellation comprised of GOES-East, GOES-West, and Himawari-8. Imagery is posted immediately to an NRL-maintained web page. Alerts are generated by the system and disseminated via email.
This detection system also has potential to serve as a data source for other NRT environmental monitoring systems. While the current geostationary constellation has several important limitations, the next-generation of geostationary sensors will offer significant advantages for achieving the goal of global NRT pyroCb detection.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited to the processing of large volumes of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient numbers of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
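The naive Bayesian variant mentioned above combines per-feature likelihoods, estimated beforehand as histograms from classified "truth" scenes, under an independence assumption. A minimal sketch; the feature name, histogram values, and prior below are all invented for illustration:

```python
import numpy as np

def naive_bayes_posterior(features, hist_cloud, hist_clear, p_cloud=0.5):
    """Posterior cloud probability assuming independent features.
    hist_cloud/hist_clear map feature name -> (bin_edges, per-bin pdf),
    estimated in advance from manually classified scenes."""
    log_lik_cloud = 0.0
    log_lik_clear = 0.0
    for name, value in features.items():
        edges, pdf_c = hist_cloud[name]
        _, pdf_n = hist_clear[name]
        i = min(np.searchsorted(edges, value) - 1, len(pdf_c) - 1)
        log_lik_cloud += np.log(pdf_c[i] + 1e-12)  # avoid log(0)
        log_lik_clear += np.log(pdf_n[i] + 1e-12)
    num = p_cloud * np.exp(log_lik_cloud)
    den = num + (1 - p_cloud) * np.exp(log_lik_clear)
    return float(num / den)

# Hypothetical single-feature example: a reflectance histogram where
# high values are far more likely under "cloud" than "clear".
edges = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
hist_cloud = {"r087": (edges, np.array([0.05, 0.10, 0.20, 0.30, 0.35]))}
hist_clear = {"r087": (edges, np.array([0.50, 0.30, 0.12, 0.05, 0.03]))}

p = naive_bayes_posterior({"r087": 0.7}, hist_cloud, hist_clear)
```

The classical (non-naive) variant discussed in the abstract replaces the per-feature product with lookups in a single multidimensional histogram, which is where the resolution and Gaussian-smoothing sensitivity study comes in.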
Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory
NASA Astrophysics Data System (ADS)
Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro
2016-04-01
Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of such data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D positions of traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
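Reprojecting a detected 3D sign position onto the synced 2D images is, at its core, a pinhole-camera projection. A sketch with a hypothetical calibration; the actual camera model and extrinsics of the LYNX system are not given in the abstract:

```python
import numpy as np

def project_point(p_world, R, t, K):
    """Project a 3D point (world frame) to pixel coordinates with a
    pinhole model: rotation R, translation t, intrinsic matrix K."""
    p_cam = R @ p_world + t          # world -> camera frame
    uvw = K @ p_cam                  # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]          # perspective divide

# Hypothetical calibration: identity pose, 1000-px focal length,
# principal point at (640, 480).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

sign_center = np.array([1.0, 0.5, 10.0])  # a sign 10 m ahead
uv = project_point(sign_center, R, t, K)  # -> pixel (740, 530)
```

Once each detected sign lands at a pixel location, a 2D region around it can be cropped and passed to the machine learning classifier for semantic recognition, as the abstract describes.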
Comparison of Cloud Cover restituted by POLDER and MODIS
NASA Astrophysics Data System (ADS)
Zeng, S.; Parol, F.; Riedi, J.; Cornet, C.; Thieuxleux, F.
2009-04-01
PARASOL and AQUA are two sun-synchronous satellites in the A-Train constellation that observe the Earth within a few minutes of each other. Aboard these two platforms, POLDER and MODIS provide coincident observations of the cloud cover with very different characteristics. This gives us a good opportunity to study cloud systems and to evaluate the strengths and weaknesses of each dataset in order to provide an accurate representation of global cloud cover properties. This description is of utmost importance for quantifying and understanding the effect of clouds on the global radiation budget of the earth-atmosphere system and their influence on climate change. We have developed a joint dataset containing both POLDER and MODIS level 2 cloud products, collocated and reprojected on a common sinusoidal grid, to make the data comparison feasible and reliable. Our foremost work focuses on the comparison of both the spatial distribution and temporal variation of the global cloud cover. This simple yet critical cloud parameter needs to be clearly understood before further comparison of the other cloud parameters. From our study, we demonstrate that on average the two sensors both detect clouds fairly well. They provide similar spatial distributions and temporal variations: both sensors see high values of cloud amount associated with deep convection in the ITCZ, over Indonesia, and in the west-central Pacific warm pool region; they also provide similarly high cloud cover associated with mid-latitude storm tracks, the Indian monsoon, and the stratocumulus along the west coasts of continents; on the other hand, the small cloud amounts typically present over subtropical oceans and deserts in subsidence areas are well identified by both POLDER and MODIS. Each sensor has its advantages and drawbacks for the detection of particular cloud types.
With its higher spatial resolution, MODIS can better detect fractional clouds, which partly explains a positive bias of order 10% between the POLDER cloud amount and the so-called MODIS "combined" cloud amount at all latitudes and viewing angles. It is nevertheless worth noting that a negative bias of about 10% is obtained between the POLDER cloud amount and the MODIS "day-mean" cloud amount. The main differences between the two MODIS cloud amount values are known to be due to the filtering of remaining aerosols and cloud edges. The high spatial resolution of MODIS, together with the fact that the "combined" cloud amount filters cloud edges, also explains the regions of high positive bias over the subtropical oceans of the southern hemisphere and over east Africa in summer. Thanks to its several channels in the thermal infrared, MODIS probably detects thin cirrus much better, especially over land, causing a general negative bias for ice clouds. The multi-spectral capability of MODIS also allows a better detection of low clouds over snow or ice; hence the (POLDER-MODIS) cloud amount difference is often negative over Greenland, Antarctica, and the mid-to-high-latitude continents in spring and autumn, in association with snow cover. The multi-spectral capability of MODIS likewise makes it possible to discriminate biomass-burning aerosols from fractional clouds over the continents; a positive bias thus appears in central Africa in summer and autumn, associated with major biomass-burning events. Over the transition regions between desert and non-desert, the large negative (POLDER-MODIS) cloud amount bias may be partly due to MODIS pixels falsely labeling the desert as cloudy, since the MODIS algorithm uses a static desert mask. This is clearly visible south of the Sahara in spring and summer, where we find a negative bias of order -0.1.
Moreover, thanks to its multi-angular capability, POLDER can discriminate the sun-glint region, minimizing the dependence of cloud amount on viewing angle, and its polarization measurements ease the detection of high clouds over dark surfaces.
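The bias statistics discussed above amount to differencing collocated cloud-amount grids; a minimal numpy sketch, with synthetic grids standing in for the POLDER and MODIS level 2 products on the common sinusoidal grid:

```python
import numpy as np

# Hypothetical collocated cloud-amount grids (fractions 0-1); the synthetic
# "polder" field is built ~0.1 below "modis" to mimic the kind of mean bias
# discussed in the abstract. These are not real retrievals.
rng = np.random.default_rng(0)
modis = np.clip(rng.normal(0.65, 0.2, size=(180, 360)), 0.0, 1.0)
polder = np.clip(modis - 0.1 + rng.normal(0.0, 0.02, size=modis.shape), 0.0, 1.0)

def cloud_amount_bias(a, b, valid=None):
    """Mean cloud-amount difference a - b over valid (collocated) pixels."""
    diff = a - b
    if valid is not None:
        diff = diff[valid]
    return float(np.mean(diff))

bias = cloud_amount_bias(polder, modis)   # negative: POLDER below MODIS here
```

In practice the same differencing would be repeated per latitude band, season, or viewing-angle bin to produce the regional bias maps described above.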
NASA Astrophysics Data System (ADS)
Van Beusekom, Ashley E.; González, Grizelle; Scholl, Martha A.
2017-06-01
The degree to which cloud immersion provides water in addition to rainfall, suppresses transpiration, and sustains tropical montane cloud forests (TMCFs) during rainless periods is not well understood. Climate and land use changes represent a threat to these forests if cloud base altitude rises as a result of regional warming or deforestation. To establish a baseline for quantifying future changes in cloud base, we installed a ceilometer at 100 m altitude in the forest upwind of the TMCF that occupies an altitude range from ˜600 m to the peaks at 1100 m in the Luquillo Mountains of eastern Puerto Rico. Airport Automated Surface Observing System (ASOS) ceilometer data, radiosonde data, and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite data were obtained to investigate seasonal cloud base dynamics, the altitude of the trade-wind inversion (TWI), and typical cloud thickness for the surrounding Caribbean region. Cloud base is rarely quantified near mountains, so these results represent a first look at seasonal and diurnal cloud base dynamics for the TMCF. From May 2013 to August 2016, cloud base was lowest during the midsummer dry season, and cloud bases were lower than the mountaintops as often in the winter dry season as in the wet seasons. The lowest cloud bases most frequently occurred above 600 m, between 740 and 964 m. The Luquillo forest low cloud base altitudes were ˜200-600 m higher than those at six other sites in the Caribbean, highlighting the importance of site selection when measuring topographic influence on cloud height. Proximity to the oceanic cloud system, where shallow cumulus clouds are seasonally invariant in altitude and cover, along with local trade-wind orographic lifting and cloud formation, may explain the dry season low clouds.
The results indicate that climate change threats to low-elevation TMCFs are not limited to the dry season; changes in synoptic-scale weather patterns that increase frequency of drought periods during the wet seasons (periods of higher cloud base) may also impact ecosystem health.
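The central quantity in a ceilometer analysis like this is the fraction of cloudy observations whose base falls below the mountaintops; a minimal sketch with synthetic values (not the Luquillo measurements):

```python
import numpy as np

# Illustrative ceilometer record: lowest cloud-base altitude (m) per
# observation; np.nan marks cloud-free returns. Values are synthetic.
cloud_base = np.array([520.0, 740.0, 880.0, np.nan, 964.0, 1500.0, 430.0, np.nan])

def immersion_frequency(base_m, ridge_m=1100.0):
    """Fraction of cloudy observations with cloud base below the peaks,
    i.e. times when some part of the forest may be cloud-immersed."""
    cloudy = base_m[~np.isnan(base_m)]
    return float(np.mean(cloudy < ridge_m))

freq = immersion_frequency(cloud_base)
```

Binning the same statistic by month or hour would give the seasonal and diurnal cloud-base climatology the study describes.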
NASA Technical Reports Server (NTRS)
Spinhirne, J. D.; Palm, S. P.; Hlavka, D. L.; Hart, W. D.; Mahesh, A.
2004-01-01
The Geoscience Laser Altimeter System (GLAS) began full on-orbit operations in September 2003. A main application of the two-wavelength GLAS lidar is highly accurate detection and profiling of global cloud cover. Initial analysis indicates that cloud and aerosol layers are consistently detected on a global basis down to cross sections of 10^-6 per meter. Images of the lidar data dramatically and accurately show the vertical structure of cloud and aerosol to the limit of signal attenuation. The GLAS lidar has made the most accurate measurement of global cloud coverage and height to date. In addition to the calibrated lidar signal, GLAS data products include multi-level boundaries and the optical depth of all transmissive layers. Processing includes a multi-variable separation of cloud and aerosol layers. An initial application of the data is to compare monthly cloud means from several months of GLAS observations in 2003 to existing cloud climatologies from other satellite measurements. In some cases direct comparison to passive cloud retrievals is possible. A limitation of the lidar measurements is nadir-only sampling. However, monthly means exhibit reasonably good global statistics, and coverage results outside the polar regions compare well with other measurements but show significant differences in height distribution. For the polar regions, where passive cloud retrievals are problematic and where orbit track density is greatest, the GLAS results are a particular advance in cloud cover information. Direct comparison to MODIS retrievals shows better than 90% agreement in cloud detection for daytime, but less than 60% at night. Height retrievals are in much less agreement. GLAS is a part of the NASA EOS project and data products are thus openly available to the science community (see http://glo.gsfc.nasa.gov).
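A detection-agreement figure like the 90%/60% day/night numbers above reduces to comparing two binary cloud masks along the ground track; a toy sketch with synthetic masks:

```python
import numpy as np

# Illustrative binary cloud masks along the nadir ground track: 1 = cloud
# detected. "lidar" stands in for GLAS profiles, "passive" for a MODIS-style
# retrieval; the values are synthetic.
lidar   = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
passive = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 1])

def detection_agreement(a, b):
    """Percentage of samples where the two masks give the same answer."""
    return 100.0 * float(np.mean(a == b))

agreement = detection_agreement(lidar, passive)
```

Splitting the track into day and night segments before computing the statistic would reproduce the kind of day/night contrast reported above.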
Globally scalable generation of high-resolution land cover from multispectral imagery
NASA Astrophysics Data System (ADS)
Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.
2017-05-01
We present an automated method of generating high-resolution (˜2 m) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
Gamma/Hadron Separation for the HAWC Observatory
NASA Astrophysics Data System (ADS)
Gerhardt, Michael J.
The High-Altitude Water Cherenkov (HAWC) Observatory is a gamma-ray observatory sensitive to gamma rays from 100 GeV to 100 TeV with an instantaneous field of view of ˜2 sr. It is located on the Sierra Negra plateau in Mexico at an elevation of 4,100 m and began full operation in March 2015. The purpose of the detector is to study relativistic particles produced by interstellar and intergalactic objects such as pulsars, supernova remnants, molecular clouds, and black holes. To achieve optimal angular resolution, energy reconstruction, and cosmic ray background suppression for the extensive air showers detected by HAWC, good timing and charge calibration are crucial, as is optimization of quality cuts on background suppression variables. Additions to the HAWC timing calibration, in particular automating the calibration quality checks, and a new method for background suppression using a multivariate analysis are presented in this thesis.
Adaptive remote sensing technology for feature recognition and tracking
NASA Technical Reports Server (NTRS)
Wilson, R. G.; Sivertson, W. E., Jr.; Bullock, G. F.
1979-01-01
A technology development plan designed to reduce the data load and data-management problems associated with global study and monitoring missions is described, with heavy emphasis on developing mission capabilities that eliminate the collection of unnecessary data. Improved data selectivity can be achieved through sensor automation correlated with the real-time needs of data users. The first phase of the plan includes the Feature Identification and Location Experiment (FILE), which is scheduled for the 1980 Shuttle flight. The FILE experiment is described with attention to technology needs, the development plan, feature recognition and classification, and cloud-snow detection/discrimination. Pointing, tracking, and navigation received particular consideration; the technology plan is viewed as an alternative to real-time acquisition approaches that rely on extensive onboard format and inventory processing and on global-satellite-system navigation data.
Calibrating the HISA temperature: Measuring the temperature of the Riegel-Crutcher cloud
NASA Astrophysics Data System (ADS)
Dénes, H.; McClure-Griffiths, N. M.; Dickey, J. M.; Dawson, J. R.; Murray, C. E.
2018-06-01
H I self-absorption (HISA) clouds are clumps of cold neutral hydrogen (H I) visible in front of warm background gas, which makes them ideal places to study the properties of the cold atomic component of the interstellar medium (ISM). The Riegel-Crutcher (R-C) cloud is the most striking HISA feature in the Galaxy. It is one of the closest HISA clouds to us and is located in the direction of the Galactic Centre, which provides a bright background. High-resolution interferometric measurements have revealed the filamentary structure of this cloud; however, it is difficult to accurately determine the temperature and density of the gas without optical depth measurements. In this paper we present new H I absorption observations with the Australia Telescope Compact Array (ATCA) against 46 continuum sources behind the Riegel-Crutcher cloud to directly measure the optical depth of the cloud. We decompose the complex H I absorption spectra into Gaussian components using an automated machine learning algorithm. We find 300 Gaussian components, of which 67 are associated with the R-C cloud (0 < vLSR < 10 km s-1, FWHM < 10 km s-1). Combining the new H I absorption data with H I emission data from previous surveys, we calculate the spin temperature and find it to be between 20 and 80 K. Our measurements uncover a temperature gradient across the cloud, with spin temperatures decreasing towards positive Galactic latitudes. We also find three new OH absorption lines associated with the cloud, which support the presence of molecular gas.
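A minimal stand-in for the decomposition-plus-spin-temperature step: estimate one Gaussian component of a synthetic optical-depth spectrum (here via simple moments rather than the paper's machine-learning algorithm), then apply T_s = T_B / (1 - e^(-tau)); T_B and all spectrum parameters are illustrative:

```python
import numpy as np

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Synthetic optical-depth spectrum for a single absorption component (real
# spectra contain many blended components).
v = np.linspace(-20.0, 30.0, 1001)          # LSR velocity, km/s
tau = gaussian(v, 1.2, 5.0, 2.0)            # amp, centre, width (illustrative)

# Moment-based parameter estimates for the single component.
w = tau / tau.sum()
v0 = float(np.sum(w * v))                   # centroid velocity
sigma = float(np.sqrt(np.sum(w * (v - v0) ** 2)))
amp = float(tau.max())                      # peak optical depth

# Spin temperature from a matched emission brightness temperature T_B:
#   T_s = T_B / (1 - exp(-tau))  evaluated at the line centre.
T_B = 40.0                                  # K, illustrative emission value
T_s = T_B / (1.0 - np.exp(-amp))
```

Repeating this per component and per sightline yields the spin-temperature map from which the latitude gradient above is read off.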
Towards Automated Analysis of Urban Infrastructure after Natural Disasters using Remote Sensing
NASA Astrophysics Data System (ADS)
Axel, Colin
Natural disasters, such as earthquakes and hurricanes, are an unpreventable component of the complex and changing environment we live in. Continued research and advancement in disaster mitigation through prediction of and preparation for impacts have undoubtedly saved many lives and prevented significant amounts of damage, but it is inevitable that some events will cause destruction and loss of life due to their sheer magnitude and proximity to built-up areas. Consequently, development of effective and efficient disaster response methodologies is a research topic of great interest. A successful emergency response is dependent on a comprehensive understanding of the scenario at hand. It is crucial to assess the state of the infrastructure and transportation network, so that resources can be allocated efficiently. Obstructions to the roadways are one of the biggest inhibitors to effective emergency response. To this end, airborne and satellite remote sensing platforms have been used extensively to collect overhead imagery and other types of data in the event of a natural disaster. The ability of these platforms to rapidly probe large areas is ideal in a situation where a timely response could result in saving lives. Typically, imagery is delivered to emergency management officials who then visually inspect it to determine where roads are obstructed and buildings have collapsed. Manual interpretation of imagery is a slow process and is limited by the quality of the imagery and what the human eye can perceive. In order to overcome the time and resource limitations of manual interpretation, this dissertation investigated the feasibility of performing fully automated post-disaster analysis of roadways and buildings using airborne remote sensing data. First, a novel algorithm for detecting roadway debris piles from airborne light detection and ranging (lidar) point clouds and estimating their volumes is presented.
Next, a method for detecting roadway flooding in aerial imagery and estimating the depth of the water using digital elevation models (DEMs) is introduced. Finally, a technique for assessing building damage from airborne lidar point clouds is presented. All three methods are demonstrated using remotely sensed data that were collected in the wake of recent natural disasters. The research presented in this dissertation builds a case for the use of automatic, algorithmic analysis of road networks and buildings after a disaster. By reducing the latency between the disaster and the delivery of damage maps needed to make executive decisions about resource allocation and performing search and rescue missions, significant loss reductions could be achieved.
Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.
2018-01-01
The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and find that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides both opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
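The false-positive correction idea can be sketched in a few lines: simulate detections with known true- and false-positive rates (all rates here are illustrative; in practice they would be estimated, e.g. from a post-validated subset of detections), then correct the naive occupancy estimate. This is a simplified stand-in, not the paper's hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical automated-recorder survey: each site is truly occupied with
# probability psi; the classifier detects the species with probability p11
# at occupied sites and produces false positives with probability p10
# everywhere. Values are illustrative.
n_sites, psi, p11, p10 = 50_000, 0.4, 0.915, 0.15
occupied = rng.random(n_sites) < psi
detected = np.where(occupied, rng.random(n_sites) < p11,
                              rng.random(n_sites) < p10)

naive = float(detected.mean())              # biased high by false positives
# With p11 and p10 known, occupancy follows from
#   P(det) = psi * p11 + (1 - psi) * p10.
psi_hat = (naive - p10) / (p11 - p10)
```

The naive estimate overshoots the true occupancy of 0.4, while the corrected estimate recovers it; the paper's contribution is estimating the error rates jointly from detection abundance rather than assuming them known.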
An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision
2018-01-01
ARL-TR-8269 ● JAN 2018 ● US Army Research Laboratory
NASA Astrophysics Data System (ADS)
Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.
2013-12-01
Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models that make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure these parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and subjected to adverse weather conditions. Besides their importance in environmental systems, clouds are also an essential component of airplane safety when visual flight rules (VFR) are enforced, as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on cloud base height, cloud cover, and atmospheric visibility that ensure the safety of pilots and planes. Although instruments to measure these parameters are available on the market, their relatively high cost keeps them out of many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that photograph the sky and allow the cloud height to be measured from the parallax effect. The new development is a geometry that allows the simultaneous measurement of cloud base height, wind speed at cloud base height, and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky.
The prototype includes the latest hardware developments, which keep its cost low even with the increased functionality. New control software was also developed to ensure that the two cameras are triggered simultaneously; this is a major requirement affecting the final uncertainty of the measurements, because of the constant movement of clouds in the sky. Since accurate orientation of the cameras can be very demanding in field deployments, an automated calibration procedure was developed that removes the need for accurate alignment. It consists of photographing the stars, which exhibit no parallax because of the distances involved, and deducing the inherent misalignments of the two cameras; the known misalignments are then used to correct the cloud photos. These developments are described in detail, along with an uncertainty analysis of the measurement setup. Measurements of cloud base height and atmospheric visibility are presented and compared with measurements from other in-situ instruments. This work was supported by FCT project PTDC/CTE-ATM/115833/2009 and Program COMPETE FCOMP-01-0124-FEDER-014508
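The two measurement principles named above can be sketched in their simplest forms: a pinhole-camera parallax height (the prototype's actual geometry is more complex, as the authors note) and a Lambert-Beer/Koschmieder visibility estimate from object contrast; all numbers are illustrative:

```python
import math

def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Idealized parallax: two zenith-pointing cameras a known baseline
    apart see the same cloud feature displaced by d pixels, so
    h = B * f / d with f the focal length in pixels."""
    return baseline_m * focal_px / disparity_px

def visibility_km(contrast, distance_km, c0=1.0, threshold=0.02):
    """Lambert-Beer contrast decay C = C0 * exp(-beta * x) gives the
    extinction beta from a dark object's apparent contrast; visibility is
    the range at which contrast falls to the (Koschmieder) 2% threshold."""
    beta = -math.log(contrast / c0) / distance_km
    return -math.log(threshold) / beta

h = cloud_base_height(baseline_m=100.0, focal_px=1500.0, disparity_px=120.0)
vis = visibility_km(contrast=0.5, distance_km=2.0)
```

The prototype's automated star calibration effectively corrects the disparity measurement before a relation of this kind is applied.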
Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.
Stockton, David B; Santamaria, Fidel
2017-10-01
We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
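The "local specialized database of raw data plus extracted features" can be sketched with Python's built-in sqlite3; the schema, feature names, file path, and specimen values below are hypothetical, and in the real workflow the features come from ABI's feature-extraction code rather than being typed in literally:

```python
import sqlite3

# Minimal local feature store: one row per cell, keyed by specimen id,
# linking the raw recording path to a few extracted features.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cell_features (
                    specimen_id INTEGER PRIMARY KEY,
                    raw_path TEXT,
                    rest_potential_mv REAL,
                    spike_threshold_mv REAL)""")
conn.execute("INSERT INTO cell_features VALUES (?, ?, ?, ?)",
             (123456, "raw/123456.nwb", -71.2, -38.5))  # hypothetical values
conn.commit()

# A modeling workflow can then query features without re-parsing raw data.
row = conn.execute("SELECT spike_threshold_mv FROM cell_features "
                   "WHERE specimen_id = ?", (123456,)).fetchone()
```

Keeping the features in a queryable store is what lets the automated workflow select cells by property before launching model runs.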
NASA Astrophysics Data System (ADS)
Schulz, Hans Martin; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg
2016-03-01
The mountain cloud forest of Taiwan can be delimited from other forest types using a map of the ground fog frequency. In order to create such a frequency map from remotely sensed data, an algorithm able to detect ground fog is necessary. Common techniques for ground fog detection based on weather satellite data cannot be applied to fog occurrences in Taiwan as they rely on several assumptions regarding cloud properties. Therefore a new statistical method for the detection of ground fog in mountainous terrain from MODIS Collection 051 data is presented. Due to the sharpening of input data using MODIS bands 1 and 2, the method provides fog masks in a resolution of 250 m per pixel. The new technique is based on negative correlations between optical thickness and terrain height that can be observed if a cloud that is relatively plane-parallel is truncated by the terrain. A validation of the new technique using camera data has shown that the quality of fog detection is comparable to that of another modern fog detection scheme developed and validated for the temperate zones. The method is particularly applicable to optically thinner water clouds. Beyond a cloud optical thickness of ≈ 40, classification errors significantly increase.
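The core statistical test, a negative correlation between cloud optical thickness and terrain height where a plane-parallel cloud is truncated by the terrain, can be sketched as follows; the correlation threshold and the data are illustrative, not the paper's calibrated values:

```python
import numpy as np

# Pixels in one terrain window: heights (m) and cloud optical thickness.
terrain = np.array([300., 450., 600., 750., 900., 1050.])
cot_fog = np.array([38., 30., 24., 17., 11., 5.])        # cloud cut by terrain
cot_elevated = np.array([22., 25., 21., 24., 23., 22.])  # cloud above terrain

def is_ground_fog(cot, height, r_threshold=-0.8):
    """Flag ground fog when optical thickness decreases strongly with
    terrain height within the window."""
    r = np.corrcoef(cot, height)[0, 1]
    return bool(r < r_threshold)

fog_flag = is_ground_fog(cot_fog, terrain)        # strong negative correlation
clear_flag = is_ground_fog(cot_elevated, terrain) # no height dependence
```

In the truncated-cloud case the correlation is near -1 and the window is flagged as fog; for an elevated cloud the optical thickness is uncorrelated with the terrain and the test declines.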
Si, Lei; Wang, Zhongbin; Yang, Yinwei
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of the cloud model and the T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of solutions, and the flowchart of the proposed approach is designed. Furthermore, simulation examples are carried out, and comparison results indicate that the proposed method is feasible and efficient and outperforms the others. Finally, an industrial application example from a coal mining face demonstrates the effectiveness of the proposed system. PMID:25506358
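The T-S CIN itself is not reproduced here, but the two IPSO ingredients the abstract names, automated parameter adjustment and velocity resetting, can be sketched in a generic PSO minimizing a test function; all constants are conventional choices, not the paper's tuned values:

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):
    """Simple test objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x * x))

def ipso(f, dim=5, n=30, iters=200, c1=2.0, c2=2.0, w0=0.9, w1=0.4):
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    stall = np.zeros(n, dtype=int)
    g = pbest[np.argmin(pval)].copy()
    for t in range(iters):
        w = w0 - (w0 - w1) * t / iters          # parameter automation:
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))  # inertia decays
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        stall = np.where(improved, 0, stall + 1)
        reset = stall > 10                       # velocity resetting for
        v[reset] = rng.uniform(-1, 1, (int(reset.sum()), dim))  # stagnant
        stall[reset] = 0                         # particles
        g = pbest[np.argmin(pval)].copy()
    return g, float(np.min(pval))

best, best_val = ipso(sphere)
```

The decaying inertia weight shifts the swarm from global exploration to fine-tuning over the run, while velocity resetting re-energizes particles whose personal best has stopped improving.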
Local effects of partly-cloudy skies on solar and emitted radiation
NASA Technical Reports Server (NTRS)
Whitney, D. A.; Venable, D. D.
1982-01-01
A computer-automated data acquisition system for atmospheric emittance and for global solar, downwelled diffuse solar, and direct solar irradiances is discussed. Hourly-integrated global solar and atmospheric emitted radiances were measured continuously from February 1981, and hourly-integrated diffuse solar and direct solar irradiances were measured continuously from October 1981. One-minute integrated data are available for each of these components from February 1982. Results of the correlation of global insolation with fractional cloud cover are presented for the first year's data set. A February data set, composed of one-minute integrated global insolation and direct solar irradiance, cloud cover fractions, meteorological data from nearby weather stations, and GOES East satellite radiometric data, was collected to test the theoretical model of satellite radiometric data correlation and to develop the cloud dependence for the local measurement site.
Using Himawari-8, estimation of SO2 cloud altitude at Aso volcano eruption, on October 8, 2016
NASA Astrophysics Data System (ADS)
Ishii, Kensuke; Hayashi, Yuta; Shimbori, Toshiki
2018-02-01
It is vital to detect volcanic plumes as soon as possible for volcanic hazard mitigation, including aviation safety and the safety of residents. Himawari-8, the Japan Meteorological Agency's (JMA's) geostationary meteorological satellite, has high spatial resolution and sixteen observation bands, including the 8.6 μm band used to detect sulfur dioxide (SO2). Ash RGB composite images (RED: brightness temperature (BT) difference between 12.4 and 10.4 μm; GREEN: BT difference between 10.4 and 8.6 μm; BLUE: 10.4 μm) can therefore discriminate SO2 clouds and volcanic ash clouds from meteorological clouds. Since Himawari-8 also has high temporal resolution, real-time monitoring of ash and SO2 clouds is of great use. A phreatomagmatic eruption of Aso volcano in Kyushu, Japan, occurred at 01:46 JST on October 8, 2016. For this eruption, the Ash RGB detected the SO2 cloud from Aso volcano immediately after the eruption and tracked it for 12 h afterwards. The Ash RGB images, available every 2.5 min, clearly detected the SO2 cloud that conventional images such as infrared and split-window could not detect sufficiently. Furthermore, we estimated the height of the SO2 cloud by comparing the Ash RGB images with simulations of the JMA Global Atmospheric Transport Model run with a variety of height parameters. From this comparison, the bottom and top heights of the SO2 cloud emitted by the eruption were estimated as 7 km and 13-14 km, respectively. Assuming a plume height of 13-14 km and an eruption duration of 160-220 s (as estimated by seismic observation), the total emitted mass of volcanic ash was estimated as 6.1-11.8 × 10^8 kg, relatively consistent with the 6.0-6.5 × 10^8 kg from a field survey.
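The Ash RGB recipe quoted above maps two BT differences and one BT into red, green, and blue channels; a sketch with illustrative scaling ranges (operational enhancement ranges differ):

```python
import numpy as np

def ash_rgb(bt124, bt104, bt086,
            r_range=(-4.0, 2.0), g_range=(-4.0, 5.0), b_range=(208.0, 243.0)):
    """Ash RGB per the abstract: R = BT(12.4)-BT(10.4),
    G = BT(10.4)-BT(8.6), B = BT(10.4); each channel scaled into 0-1.
    The range tuples are illustrative choices."""
    def scale(x, lo, hi):
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    r = scale(bt124 - bt104, *r_range)
    g = scale(bt104 - bt086, *g_range)   # SO2 absorbs at 8.6 um -> green signal
    b = scale(bt104, *b_range)
    return np.stack([r, g, b], axis=-1)

# One synthetic pixel of an SO2 cloud: 8.6-um BT depressed by SO2 absorption.
pix = ash_rgb(np.array([250.0]), np.array([251.0]), np.array([247.0]))
```

The depressed 8.6 μm brightness temperature drives the green channel, which is what makes SO2 clouds stand out from ash and meteorological clouds in the composite.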
A Cloud-Based System for Automatic Hazard Monitoring from Sentinel-1 SAR Data
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Arko, S. A.; Hogenson, K.; McAlpin, D. B.; Whitley, M. A.
2017-12-01
Despite the all-weather capabilities of Synthetic Aperture Radar (SAR), and its high performance in change detection, the application of SAR for operational hazard monitoring was limited in the past. This has largely been due to high data costs, slow product delivery, and limited temporal sampling associated with legacy SAR systems. Only since the launch of ESA's Sentinel-1 sensors have routinely acquired and free-of-charge SAR data become available, allowing—for the first time—for a meaningful contribution of SAR to disaster monitoring. In this paper, we present recent technical advances of the Sentinel-1-based SAR processing system SARVIEWS, which was originally built to generate hazard products for volcano monitoring centers. We outline the main functionalities of SARVIEWS including its automatic database interface to Sentinel-1 holdings of the Alaska Satellite Facility (ASF), and its set of automatic processing techniques. Subsequently, we present recent system improvements that were added to SARVIEWS and allowed for a vast expansion of its hazard services; specifically: (1) In early 2017, the SARVIEWS system was migrated into the Amazon Cloud, providing access to cloud capabilities such as elastic scaling of compute resources and cloud-based storage; (2) we co-located SARVIEWS with ASF's cloud-based Sentinel-1 archive, enabling the efficient and cost effective processing of large data volumes; (3) we integrated SARVIEWS with ASF's HyP3 system (http://hyp3.asf.alaska.edu/), providing functionality such as subscription creation via API or map interface as well as automatic email notification; (4) we automated the production chains for seismic and volcanic hazards by integrating SARVIEWS with the USGS earthquake notification service (ENS) and the USGS eruption alert system. 
Email notifications from both services are parsed and subscriptions are automatically created when certain event criteria are met; (5) finally, SARVIEWS-generated hazard products are now being made available to the public via the SARVIEWS hazard portal. These improvements have led to the expansion of SARVIEWS toward a broader set of hazard situations, now including volcanoes, earthquakes, and severe weather. We provide details on newly developed techniques and show examples of disasters for which SARVIEWS was invoked.
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan
2013-06-27
Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. 
Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
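Improvement (2), splitting large sequence files for better downstream load balancing, can be sketched in a few lines. This is an illustrative stand-in, not Rainbow's actual code; the `split_fastq` helper and its chunk-naming scheme are hypothetical, and the sketch relies only on FASTQ's fixed four-lines-per-record layout:

```python
from itertools import islice

def split_fastq(path, records_per_chunk, chunk_prefix="chunk"):
    """Split a FASTQ file (4 lines per record) into fixed-size chunks so
    that downstream alignment jobs receive balanced workloads."""
    chunk_paths = []
    with open(path) as src:
        idx = 0
        while True:
            # Pull the next records_per_chunk * 4 lines, i.e. whole records.
            lines = list(islice(src, records_per_chunk * 4))
            if not lines:
                break
            out_path = f"{chunk_prefix}_{idx:04d}.fastq"
            with open(out_path, "w") as dst:
                dst.writelines(lines)
            chunk_paths.append(out_path)
            idx += 1
    return chunk_paths
```

Because chunk boundaries fall on record boundaries, each chunk is itself a valid FASTQ file that can be aligned independently on a separate EC2 instance.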
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing
2013-01-01
PMID:23802613
Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai
2017-09-30
To ensure that an automated external defibrillator (AED) is safe before use, its performance parameters must be checked regularly; this paper presents the research and design of a system for detecting AED performance parameters. Based on a study of the characteristics of those parameters, the system combines the stability and high speed of the STM32 microcontroller with PWM modulation control to generate a variety of normal and abnormal ECG signals through digital sampling. The hardware and software designs were completed and a prototype was built. The system can accurately measure an AED's discharge energy, synchronized defibrillation time, charging time, and other key performance parameters.
THOR: Cloud Thickness from Off beam Lidar Returns
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; McGill, Matthew; Kolasinski, John; Varnai, Tamas; Yetzer, Ken
2004-01-01
Conventional wisdom holds that lidar pulses do not significantly penetrate clouds of optical thickness exceeding about tau = 2, and that no returns are detectable from more than a shallow skin depth. Yet optically thicker clouds of tau much greater than 2 reflect a larger fraction of visible photons and account for much of Earth's global average albedo. As cloud layer thickness grows, an increasing fraction of reflected photons are scattered multiple times within the cloud and return from a diffuse concentric halo that grows around the incident pulse, increasing in horizontal area with layer physical thickness. The reflected halo is largely undetected by the narrow field-of-view (FoV) receivers commonly used in lidar applications. THOR (Thickness from Off-beam Returns) is an airborne wide-angle detection system with multiple FoVs, capable of observing the diffuse halo and detecting the wide-angle signal from which the physical thickness of optically thick clouds can be retrieved. In this paper we describe the THOR system, demonstrate that the halo signal is stronger for thicker clouds, and validate physical thickness retrievals for clouds having tau > 20, from NASA P-3B flights over the Department of Energy/Atmospheric Radiation Measurement/Southern Great Plains site, using lidar, radar, and other ancillary ground-based data.
Atmospheric Science Data Center
2013-04-15
article title: Casting Light and Shadows on a Saharan Dust Storm ... ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated ... cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of ...
An Automated Cloud Observation System (ACOS).
1980-12-17
Scientific, interim report. The legible fragments of this scanned report concern the percentage of agreement realized by the fifteen methods of grouping ceilometer data.
NASA Astrophysics Data System (ADS)
Lee, Sanghee; Hwang, Seung-On; Kim, Jhoon; Ahn, Myoung-Hwan
2018-03-01
Clouds are an important component of the atmosphere affecting both climate and weather, yet their contributions can be very difficult to determine. Ceilometer measurements provide high-resolution information on atmospheric conditions such as cloud base height (CBH) and the vertical frequency of cloud occurrence (CVF). This study presents the first comprehensive analysis of CBH and CVF derived from Vaisala CL51 ceilometers at two urban stations in Seoul, Korea, over the three-year period from January 2014 to December 2016. The average frequency of cloud occurrence detected by the ceilometers is 54.3%. The CL51 captures CBH better than the CL31 ceilometer at a nearby meteorological station because it detects high clouds more accurately. Frequency distributions of CBH up to 13,000 m at 500-m intervals show that 55% of aggregated CBHs lie below 2 km, and a bimodal frequency distribution is observed for three-layer CBHs. The monthly variation of CVF reveals that lower clouds are concentrated in summer and winter, while higher clouds are detected more often in spring and autumn. Monthly distributions of cloud occurrence and precipitation depend on season, and their relationship is not easy to define because precipitation is more variable than cloud occurrence. Nevertheless, the fluctuation of cloud occurrence frequency in summer follows the trend of precipitation, whereas winter clouds are relatively frequent but not accompanied by precipitation. In addition, the recent decrease in summer precipitation can mostly be explained by a decrease in cloud occurrence, and occasional anomalous precipitation is closely related to the corresponding cloud occurrence.
The diurnal and daily variations of CBH and CVF from ceilometer observations, together with microwave radiometer measurements for two typical cloudiness cases, are also examined. This finer-temporal-scale analysis shows that combining ground-based observations can help in analyzing cloud behavior.
Augmenting Space Technology Program Management with Secure Cloud & Mobile Services
NASA Technical Reports Server (NTRS)
Hodson, Robert F.; Munk, Christopher; Helble, Adelle; Press, Martin T.; George, Cory; Johnson, David
2017-01-01
The National Aeronautics and Space Administration (NASA) Game Changing Development (GCD) program manages technology projects across all NASA centers and reports to NASA headquarters regularly on progress. Program stakeholders expect an up-to-date, accurate status and often have questions about the program's portfolio that require a timely response. Historically, reporting, data collection, and analysis were manual processes that were inefficient and prone to error. To address these issues, GCD set out to develop a new business automation solution. In doing so, the program wanted to leverage the latest information technology platforms and decided to combine traditional systems with new cloud-based web services and gaming technology for a novel, interactive user environment. The team also set out to develop a mobile solution for anytime information access. This paper discusses a solution to these challenging goals and how the GCD team succeeded in developing and deploying such a system. The architecture and approach taken have proven effective and robust, and can serve as a model for others looking to develop secure, interactive mobile business solutions for government or enterprise business automation.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM), and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping; it is especially suitable for rapid response and precise modelling in disaster emergencies.
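The image-topology idea, pruning the set of image pairs to be matched using flight-control (GPS) positions so that feature matching does not run over all O(n^2) combinations, can be sketched as follows. The `candidate_pairs` helper and the planar-distance criterion are illustrative assumptions, not the paper's exact topology analysis:

```python
import math

def candidate_pairs(positions, max_dist):
    """Build an image topology map from flight-control data: only images
    captured within max_dist of each other are scheduled for feature
    matching, pruning the exhaustive pair list a naive SfM pipeline uses."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            # Images far apart on the flight path cannot overlap, so skip them.
            if math.hypot(x2 - x1, y2 - y1) <= max_dist:
                pairs.append((i, j))
    return pairs
```

Only the surviving pairs are then passed to the (much more expensive) feature-matching stage.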
Development of Time-Series Human Settlement Mapping System Using Historical Landsat Archive
NASA Astrophysics Data System (ADS)
Miyazaki, H.; Nagai, M.; Shibasaki, R.
2016-06-01
A methodology for automated human settlement mapping is needed to make use of historical satellite data archives for urgent, global-scale issues of urban growth, such as disaster risk management, public health, food security, and urban management. With global datasets at 10-100 m spatial resolution already achieved by initiatives using ASTER, Landsat, and TerraSAR-X, the next goal is time-series data that can contribute to studies of urban development in the context of socioeconomics, disaster risk management, public health, transport, and other development issues. We developed an automated algorithm to detect human settlement by classifying built-up and non-built-up areas in time-series Landsat images. A machine-learning algorithm, Learning with Local and Global Consistency (LLGC), was applied with improvements for remote sensing data. The algorithm can use MCD12Q1, a MODIS-based global land cover map with 500-m resolution, as training data, so that no manual process is required to prepare training data. In addition, we designed a method to composite multiple LLGC results into a single output to reduce uncertainty. Each LLGC result has a confidence value ranging from 0.0 to 1.0 representing the probability of being built-up. The median confidence over a period around a target time is expected to be a robust indicator of built-up or non-built-up areas, resistant to uncertainties in satellite data quality such as cloud and haze contamination. Four Landsat scenes with cloud contamination of less than 20% were chosen from the archive for each target year: 1990, 2000, 2005, and 2010. We implemented the algorithms on the Data Integration and Analysis System (DIAS) at the University of Tokyo and processed 5200 Landsat scenes covering cities with more than one million people worldwide.
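The median-compositing step can be illustrated with a short sketch. The `composite_confidence` helper and the 0.5 decision threshold are assumptions for illustration, not the system's actual code:

```python
import numpy as np

def composite_confidence(conf_maps, threshold=0.5):
    """Combine several per-pixel built-up confidence maps (each in [0, 1])
    into one robust map by taking the pixelwise median, then threshold it
    into a built-up / non-built-up mask. The median resists outlier scenes
    affected by cloud or haze contamination."""
    stack = np.stack(conf_maps)           # shape: (n_scenes, rows, cols)
    median_conf = np.median(stack, axis=0)
    built_up = median_conf >= threshold
    return median_conf, built_up
```

A single cloud-contaminated scene with spuriously low (or high) confidence then has no effect on the composite, as long as the majority of scenes are clean.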
From data to information and knowledge for geospatial applications
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.; Yoon, T.
2006-12-01
An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contain information in an explicit form that an application can use directly. The processing and analysis of data constitute a real bottleneck; therefore, automating the processes of gaining useful information and knowledge from raw data is of paramount interest. This presentation is concerned with the transition from data to information and knowledge. By data we refer to the sensor output, and we note that data very rarely provide direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) has no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as they pertain to a wide range of geoscience applications. After introducing the paradigm, we describe the major steps to be undertaken in extracting information and knowledge from sensory input data. Features play an important role in this process; thus we focus on extracting features and their perceptual organization into higher-order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is establishing a common reference frame. We elaborate on the concept of sensor-invariant features, which allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation in reconstructing topographic surfaces.
Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can be readily used by an analyst for applications such as change detection.
Addressing scale dependence in roughness and morphometric statistics derived from point cloud data.
NASA Astrophysics Data System (ADS)
Buscombe, D.; Wheaton, J. M.; Hensleigh, J.; Grams, P. E.; Welcker, C. W.; Anderson, K.; Kaplinski, M. A.
2015-12-01
The heights of natural surfaces can be measured with such spatial density that almost the entire spectrum of physical roughness scales can be characterized, down to the morphological form and grain scales. With the ability to measure 'microtopography' comes a demand for analytical and computational tools for spatially explicit statistical characterization of surface roughness. The detrended standard deviation of surface heights is a popular means of creating continuous roughness maps from point cloud data, using moving windows and reporting window-centered statistics of variation from a trend surface. If 'roughness' is the statistical variation in the distribution of relief of a surface, then 'texture' is the frequency of change and spatial arrangement of roughness. The variance in surface height as a function of frequency obeys a power law. In consequence, roughness depends on the window size through which it is examined, which has a number of potential disadvantages: 1) the choice of window size becomes crucial and obstructs comparisons between datasets; 2) if windows are large relative to multiple roughness scales, it is harder to discriminate between those scales; 3) if roughness is not scaled by the texture length scale, information on the spacing and clustering of roughness 'elements' can be lost; and 4) such practice is not amenable to models describing the scattering of light and sound from rough natural surfaces. We discuss the relationship between roughness and texture. Some useful parameters that scale vertical roughness to characteristic horizontal length scales are suggested, with examples from bathymetric point clouds obtained using multibeam sonar on two contrasting riverbeds, namely those of the Colorado River in Grand Canyon and the Snake River in Hells Canyon.
Aside from automated texture characterization, texture segmentation, and roughness and grain size calculation, such work might also be useful for feature detection and classification from point clouds.
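The window-centered detrended standard deviation described above can be sketched for a gridded surface. The choice of a least-squares plane as the trend surface and the handling of window edges are illustrative assumptions:

```python
import numpy as np

def detrended_roughness(z, window=5):
    """Window-centered detrended standard deviation of a gridded surface z.
    For each moving window, a least-squares plane is removed and the
    standard deviation of the residuals is reported at the window centre."""
    half = window // 2
    rows, cols = z.shape
    rough = np.full_like(z, np.nan, dtype=float)
    # Design matrix for a plane a*x + b*y + c over the window coordinates.
    y, x = np.mgrid[0:window, 0:window]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(window * window)])
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            patch = z[i - half:i + half + 1, j - half:j + half + 1].ravel()
            coeffs, *_ = np.linalg.lstsq(A, patch, rcond=None)
            rough[i, j] = np.std(patch - A @ coeffs)
    return rough
```

On a perfectly planar surface the residuals vanish, which is exactly the window-size dependence the abstract warns about: any relief smoother than the window is absorbed into the trend rather than counted as roughness.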
Multilayered Clouds Identification and Retrieval for CERES Using MODIS
NASA Technical Reports Server (NTRS)
Sun-Mack, Sunny; Minnis, Patrick; Chen, Yan; Yi, Yuhong; Huang, Jianping; Lin, Bin; Fan, Alice; Gibson, Sharon; Chang, Fu-Lung
2006-01-01
Traditionally, analyses of satellite data have been limited to interpreting the radiances in terms of single-layer clouds. Generally, this results in significant errors in the retrieved properties for multilayered cloud systems. Two techniques for detecting overlapped clouds and retrieving their properties from satellite data are explored here to help address the need for better quantification of cloud vertical structure. The first technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other uses microwave radiometer (MWR) data. The use of BTD, the 11-12 micrometer brightness temperature difference, in conjunction with tau, the retrieved visible optical depth, was suggested by Kawamoto et al. (2001) and used by Pavolonis et al. (2004) as a means to detect multilayered clouds. Combining visible (VIS; 0.65 micrometer) and infrared (IR) retrievals of cloud properties with microwave (MW) retrievals of cloud water temperature Tw and liquid water path LWP from satellite microwave imagers appears to be a fruitful approach for detecting and retrieving overlapped clouds (Lin et al., 1998; Ho et al., 2003; Huang et al., 2005). The BTD method is limited to optically thin cirrus over low clouds, while the MWR method is limited to ocean areas. With the availability of VIS and IR data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and MW data from the Advanced Microwave Scanning Radiometer EOS (AMSR-E), both on Aqua, it is now possible to examine both approaches simultaneously. This paper explores the use of the BTD method as applied to MODIS and AMSR-E data taken from the Aqua satellite over non-polar ocean surfaces.
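A minimal sketch of the BTD idea follows, with placeholder thresholds rather than the operational CERES tests:

```python
def flag_overlap(t11, t12, tau, btd_min=1.0, tau_min=5.0):
    """Illustrative multilayer test: a large 11-12 micrometer brightness
    temperature difference (characteristic of thin cirrus) occurring
    together with a large retrieved visible optical depth (characteristic
    of a low water cloud) suggests cirrus overlying a lower layer.
    Both thresholds are placeholders, not the operational values."""
    btd = t11 - t12
    return (btd > btd_min) & (tau > tau_min)
```

The elementwise `&` lets the same function run on scalars or on whole numpy image arrays of brightness temperatures and optical depths.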
Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C
2018-05-04
Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner.
PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. ©Jette Henderson, Junyuan Ke, Joyce C Ho, Joydeep Ghosh, Byron C Wallace. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.05.2018.
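The co-occurrence analysis can be illustrated naively. PIVET's real matcher uses database indexing and an Aho-Corasick-inspired multi-pattern algorithm, so the `cooccurrence_evidence` helper, the case-folding, and the two-term criterion here are simplifying assumptions:

```python
def cooccurrence_evidence(phenotype_terms, corpus):
    """Count, per document, how many of a phenotype's terms appear, and
    report the fraction of documents containing at least two terms as a
    crude co-occurrence evidence score. A naive stand-in for PIVET's
    indexed Aho-Corasick-style matcher."""
    hits = 0
    for doc in corpus:
        text = doc.lower()
        found = sum(1 for term in phenotype_terms if term.lower() in text)
        if found >= 2:
            hits += 1
    return hits / len(corpus)
```

The point of the Aho-Corasick optimization is that all terms are matched in a single pass over each document instead of one scan per term, which is what makes the analysis an order of magnitude faster at corpus scale.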
Summers, Thomas; Johnson, Viviana V; Stephan, John P; Johnson, Gloria J; Leonard, George
2009-08-01
Massive transfusion of D- trauma patients in the combat setting involves the use of D+ red blood cells (RBCs) or whole blood along with suboptimal pretransfusion test result documentation. This presents challenges to the transfusion services of tertiary care military hospitals that ultimately receive these casualties, because initial D typing results may reflect only the transfused RBCs. After patients are stabilized, mixed-field reaction results on D typing indicate the patient's true inherited D phenotype. This case series illustrates the utility of automated gel column agglutination in detecting mixed-field reactions in these patients. The transfusion service test results, including the automated gel column agglutination D typing results, of four massively transfused D- patients who received D+ RBCs are presented. To test the sensitivity of the automated gel column agglutination method in detecting mixed-field agglutination reactions, a comparative analysis of three automated technologies using predetermined mixtures of D+ and D- RBCs is also presented. The automated gel column agglutination method detected mixed-field agglutination in D typing in all four patients and in the three prepared control specimens. The automated microwell tube method identified one of the three prepared control specimens as indeterminate, which was subsequently manually confirmed as a mixed-field reaction. The automated solid-phase method was unable to detect any mixed fields. The automated gel column agglutination method provides a sensitive means of detecting mixed-field agglutination reactions in the determination of the true inherited D phenotype of combat casualties transfused massive amounts of D+ RBCs.
The EOS CERES Global Cloud Mask
NASA Technical Reports Server (NTRS)
Berendes, T. A.; Welch, R. M.; Trepte, Q.; Schaaf, C.; Baum, B. A.
1996-01-01
To detect long-term climate trends, it is essential to produce long-term, consistent data sets from a variety of different satellite platforms. With current global cloud climatology data sets, such as the International Satellite Cloud Climatology Project (ISCCP) or CLAVR (Clouds from Advanced Very High Resolution Radiometer), one of the first processing steps is to determine whether an imager pixel is obstructed between the satellite and the surface, i.e., to determine a cloud 'mask.' A cloud mask is essential to studies monitoring changes over ocean, land, or snow-covered surfaces. As part of the Earth Observing System (EOS) program, a series of platforms will be flown beginning in 1997 with the Tropical Rainfall Measurement Mission (TRMM), followed by the EOS-AM and EOS-PM platforms in subsequent years. The cloud imager on TRMM is the Visible/Infrared Sensor (VIRS), while the Moderate Resolution Imaging Spectroradiometer (MODIS) is the imager on the EOS platforms. To be useful for long-term studies, a cloud masking algorithm should produce consistent results across existing (AVHRR) data and future VIRS and MODIS data. The present work outlines both existing and proposed approaches to detecting cloud using multispectral narrowband radiance data. Clouds are generally characterized by higher albedos and lower temperatures than the underlying surface. However, there are numerous conditions under which this characterization is inappropriate, most notably over snow and ice. Of the cloud types, cirrus, stratocumulus, and cumulus are the most difficult to detect. Other problems arise when analyzing data from sun-glint areas over oceans or lakes, over deserts, or over regions containing numerous fires and smoke. The cloud mask effort builds upon the operational experience of several groups, which is discussed.
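The brighter/colder characterization above can be turned into a toy two-test mask. Both thresholds and the clear-sky temperature input are illustrative assumptions, not the CERES or ISCCP tests:

```python
def cloud_mask(reflectance, bt11, clear_sky_bt, daytime,
               refl_thresh=0.3, bt_depression=8.0):
    """Toy cloud mask in the spirit described above: by day, a pixel much
    brighter than the background is called cloudy; day or night, a pixel
    much colder than the expected clear-sky 11-micrometer brightness
    temperature is called cloudy. Thresholds are illustrative only."""
    cold = (clear_sky_bt - bt11) > bt_depression
    bright = daytime and (reflectance > refl_thresh)
    return bool(bright or cold)
```

The abstract's central point is visible even in this toy: at night only the `cold` test remains, and the fixed thresholds fail over bright, cold surfaces such as snow and ice unless they are varied by region, season, and solar angle.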
Cloud cover detection combining high dynamic range sky images and ceilometer measurements
NASA Astrophysics Data System (ADS)
Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.
2017-11-01
This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun condition (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as a reference. CPC retrievals agree with those derived from the reference pyranometer in 85% of cases, and this agreement appears independent of aerosol size and optical depth. The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of about 1 okta. A major advantage of the CPC algorithm over the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high aerosol loads. Cloud cover obtained from the ceilometer alone gives results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, under quick and strong changes in cloud cover, the ceilometer-derived cloud cover fits the real cloud cover less well.
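The RBR baseline against which CPC is compared can be sketched directly. The 0.6 threshold is a common illustrative choice, not necessarily the value used in the paper:

```python
import numpy as np

def rbr_cloud_mask(red, blue, threshold=0.6):
    """Red-Blue Ratio test: clear sky scatters blue light strongly, so a
    high red/blue ratio marks a cloudy pixel. The threshold here is an
    illustrative choice."""
    ratio = red / np.maximum(blue, 1e-6)  # guard against division by zero
    return ratio > threshold
```

This per-pixel ratio test is exactly what makes RBR sensitive to aerosols: coarse particles redden the clear sky, pushing the ratio over the threshold and inflating the cloud cover, which is the failure mode CPC's symmetry criterion avoids.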
Automated System for Early Breast Cancer Detection in Mammograms
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.
1993-01-01
The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.
Ship detection from high-resolution imagery based on land masking and cloud filtering
NASA Astrophysics Data System (ADS)
Jin, Tianming; Zhang, Junping
2015-12-01
High-resolution satellite images currently play an important role in target detection applications. This article focuses on ship detection in high-resolution panchromatic images. Taking advantage of geographic information such as the coastline vector data provided by the NOAA Medium Resolution Coastline program, the land region, a main source of noise in the ship detection process, is masked out. The algorithm then deals with cloud noise, which appears frequently in ocean satellite images and is another source of false alarms. Based on an analysis of the cloud noise's features in the frequency domain, we introduce a windowed noise filter to remove it. With the help of morphological processing algorithms adapted to target detection, we are able to extract ship targets in fine shape. In addition, we display the extracted information, such as the length and width of ship targets, in a user-friendly way, i.e., as a KML file interpreted by Google Earth.
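The frequency-domain intuition, clouds as smooth, low-spatial-frequency noise versus small, high-frequency ship targets, can be illustrated with a hard cutoff filter. The paper's windowed filter is more elaborate, so `suppress_low_frequency` and its `cutoff` are assumptions for illustration:

```python
import numpy as np

def suppress_low_frequency(image, cutoff=4):
    """Zero a small window around the DC component of the 2D spectrum:
    smoothly varying cloud cover lives at low spatial frequencies, while
    small ship targets contribute broadband high-frequency energy, so this
    suppresses clouds while largely preserving point-like targets."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    spectrum[cy - cutoff:cy + cutoff + 1, cx - cutoff:cx + cutoff + 1] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

A constant (fully overcast) scene is removed entirely, while a single-pixel bright target retains most of its amplitude, which is what makes the subsequent morphological extraction workable.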
NASA Astrophysics Data System (ADS)
Hammitzsch, Martin; Spazier, Johannes; Reißland, Sven
2016-04-01
The TRIDEC Cloud is a platform that merges several complementary cloud-based services for instant tsunami propagation calculations and automated background computation with graphics processing units (GPUs), for web mapping of hazard-specific geospatial data, and for serving the functionality needed to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The platform offers a modern web-based graphical user interface, so operators in warning centres and stakeholders of other involved parties (e.g. CPAs, ministries) need only a standard web browser to access a full-fledged early warning and information system with unique interactive features such as Cloud Messages and Shared Maps. Furthermore, the TRIDEC Cloud can be accessed in different modes, e.g. the monitoring mode, which provides the functionality required to act in a real event, and the exercise-and-training mode, which enables training and exercises with virtual scenarios re-played by a scenario player. The software system architecture and open interfaces facilitate global coverage, so the system is applicable to any region in the world, and they allow the integration of different sensor systems as well as other hazard types and use cases beyond tsunami early warning. Current advances of the TRIDEC Cloud platform will be summarized in this presentation.
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various levels of JPEG compression, and its ability to predict the presence of diabetic retinopathy from the detected microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, which may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
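The ROC methodology reduces to estimating the probability that an image with retinopathy scores above one without; the area under the ROC curve can be computed directly via the Mann-Whitney statistic:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive (diseased) case scores
    higher than a randomly chosen negative one, with ties counted half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Computing this AUC once per JPEG quality level gives a single comparable number per compression setting, which is how the degradation with increasing compression can be quantified.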
Designing of smart home automation system based on Raspberry Pi
NASA Astrophysics Data System (ADS)
Saini, Ravi Prakash; Singh, Bhanu Pratap; Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn
2016-03-01
Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages and are well suited to academic research. This paper proposes an implementation of a Raspberry Pi based home automation system with an Android phone access interface. The power consumption profile across the connected load is measured accurately through programming. Users can access a graph of total power consumption over time from anywhere via their Dropbox account. An Android application has been developed to channel the monitoring and controlling of home appliances remotely. The application controls the corresponding pins of the Raspberry Pi, turning any desired appliance "on" or "off" at the press of a key. Systems can range from simple room-lighting control to smart microcontroller-based hybrid systems incorporating several additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through a cloud-based data storage protocol, reliability, energy efficiency, etc.
[The application of wavelet analysis of remote detection of pollution clouds].
Zhang, J; Jiang, F
2001-08-01
The discrete wavelet transform (DWT) is used to analyse the spectra of pollution clouds in complicated environments and to extract their small-scale features. The DWT is a time-frequency analysis technique that detects subtle changes in the target spectrum. The results show that the DWT is an effective method for extracting features of the target cloud and improving the reliability of the monitoring alarm system.
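As a minimal illustration of the idea (not the authors' implementation), a single level of the Haar DWT separates a smooth spectral background from small-scale features; the synthetic spectrum and absorption dip below are invented for demonstration:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split a 1-D
    signal into a smooth approximation and a high-frequency detail band."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-pass: smooth background
    detail = (even - odd) / np.sqrt(2.0)   # high-pass: small-scale features
    return approx, detail

# Synthetic "spectrum": smooth background plus a narrow absorption dip
# standing in for a weak pollution-cloud signature.
x = np.linspace(0.0, 1.0, 256)
spectrum = np.exp(-x) - 0.5 * np.exp(-((x - 0.5) ** 2) / 1e-4)
approx, detail = haar_dwt(spectrum)

# The detail band peaks near the dip, flagging the small-scale feature
# (dip at sample ~128, so detail index ~64 at half resolution).
peak = int(np.argmax(np.abs(detail)))
```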
MPLNET V3 Cloud and Planetary Boundary Layer Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.
2016-01-01
The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described, and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD, spanning diurnal and seasonal variations, is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement over the previous version.
Object Detection using the Kinect
2012-03-01
This report describes object detection (e.g., backpacks) using the Kinect camera and point cloud data from the Kinect's structured-light stereo system, including filtering of the notoriously noisy dense point clouds derived from stereo; reasonable results are obtained with a single prototype.
C+ detection of warm dark gas in diffuse clouds
NASA Astrophysics Data System (ADS)
Langer, W. D.; Velusamy, T.; Pineda, J. L.; Goldsmith, P. F.; Li, D.; Yorke, H. W.
2010-10-01
We present the first results of the Herschel open time key program, Galactic Observations of Terahertz C+ (GOT C+) survey of the [CII] 2P3/2-2P1/2 fine-structure line at 1.9 THz (158 μm) using the HIFI instrument on Herschel. We detected 146 interstellar clouds along sixteen lines-of-sight towards the inner Galaxy. We also acquired HI and CO isotopologue data along each line-of-sight for analysis of the physical conditions in these clouds. Here we analyze 29 diffuse clouds (AV < 1.3 mag) in this sample characterized by having [CII] and HI emission, but no detectable CO. We find that [CII] emission is generally stronger than expected for diffuse atomic clouds, and in a number of sources is much stronger than anticipated based on their HI column density. We show that excess [CII] emission in these clouds is best explained by the presence of a significant diffuse warm H2, dark gas, component. This first [CII] 158 μm detection of warm dark gas demonstrates the value of this tracer for mapping this gas throughout the Milky Way and in galaxies. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
NASA Astrophysics Data System (ADS)
Henneberger, J.; Fugal, J. P.; Stetzer, O.; Lohmann, U.
2013-05-01
Measurements of the microphysical properties of mixed-phase clouds with high spatial resolution are important to understand the processes inside these clouds. This work describes the design and characterization of the newly developed ground-based field instrument HOLIMO II (HOLographic Imager for Microscopic Objects II). HOLIMO II uses digital in-line holography to image cloud particles in situ in a well-defined sample volume. By an automated algorithm, two-dimensional images of single cloud particles between 6 and 250 μm in diameter are obtained and the size spectrum, the concentration and water content of clouds are calculated. By testing the sizing algorithm with monosized beads a systematic overestimation near the resolution limit was found, which has been used to correct the measurements. Field measurements from the high altitude research station Jungfraujoch, Switzerland, are presented. The measured number size distributions are in good agreement with parallel measurements by a fog monitor (FM-100, DMT, Boulder USA). The field data show that HOLIMO II is capable of measuring the number size distribution with high spatial resolution and determining ice crystal shape, thus providing a method of quantifying variations in microphysical properties. A case study over a period of 8 h has been analyzed, exploring the transition from a liquid to a mixed-phase cloud, which is the longest observation of a cloud with a holographic device. During the measurement period, the cloud does not completely glaciate, contradicting earlier assumptions of the dominance of the Wegener-Bergeron-Findeisen (WBF) process.
Automatically detecting Himalayan Glacial Lake Outburst Floods in LANDSAT time series
NASA Astrophysics Data System (ADS)
Veh, Georg; Korup, Oliver; Roessner, Sigrid; Walz, Ariane
2017-04-01
More than 5,000 meltwater lakes currently exist in the Himalayas, and some of them have grown rapidly in recent decades due to glacial retreat. This trend might raise the risk of Glacial Lake Outburst Floods (GLOFs), which have caused catastrophic damage and several hundred fatalities in historical times. Yet the growing number and size of Himalayan glacial lakes have no detectable counterpart in increasing GLOF frequency. Only 35 events since the 1950s have been documented in detail, mostly in the Himalayas of Eastern Nepal and Bhutan. Observations are sparse in the far eastern, and entirely missing in the northwestern, parts of the mountain belt. The GLOF record is prone to a censoring bias, such that mainly larger floods or flood impacts have been registered. Thus, establishing a more complete record and learning from past GLOFs is essential for hazard assessment and regional planning. To detect previously unreported GLOFs in the Himalayas, we developed an automated processing chain for generating GLOF-related surface-cover time series from LANDSAT data. We downloaded more than 5,000 available LANDSAT TM, ETM+ and OLI images from 1987 to present. We trained a supervised machine-learning classifier with >4,000 randomly selected image pixels and topographic variables derived from digital topographic data (SRTM and ALOS DEMs), defining water, sediment, shadow, clouds, and ice as the five main classes. We hypothesize that GLOFs significantly decrease glacial lake area while simultaneously increasing sediment cover in the channel network downstream; we therefore excluded shadows, clouds, and lake ice from the analysis. We derived surface cover maps from the fitted model for each satellite image and compiled a pixelwise time-series stack. Customized rule sets were applied to systematically remove misclassifications and to check for a sediment fan in the flow path downstream of the former lake pixels.
We verified our mapping approach on thirteen GLOFs documented in the study period. First evaluations suggest that our processing chain is capable of detecting the majority of the GLOFs independently, paving the way for a first complete inventory of Himalayan GLOFs derived from satellite images. Within the limits set by data quality, we expect to at least double the size of the existing GLOF database in the Himalayas for the study period. We discuss several challenges affecting our automated classification approach, such as the sensor resolution, the magnitude of change necessary for GLOF detection, and the role of ice cover on glacial lakes. The generated surface cover maps are a powerful resource for further applications in geomorphological research like monitoring the variability of supraglacial ponds or sediment dynamics in mountain valleys. Making use of the consistently growing and freely available LANDSAT archive, our workflow can be adapted and extended to various analyses in order to understand and quantify landscape dynamics in the Himalayas.
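The hypothesized GLOF signature, a sharp drop in lake area with a simultaneous rise in downstream sediment cover, could be screened in a per-date class-count time series roughly as follows (the function name, thresholds, and pixel counts are illustrative assumptions, not the authors' rule set):

```python
import numpy as np

def flag_glof(lake_px, sediment_px, lake_drop=0.5, sed_gain=2.0):
    """Flag dates where lake area falls sharply between consecutive
    scenes while downstream sediment cover grows, the combined
    signature hypothesized for a GLOF. Inputs are per-date pixel
    counts for the lake and its downstream channel network."""
    lake = np.asarray(lake_px, dtype=float)
    sed = np.asarray(sediment_px, dtype=float)
    lake_ratio = lake[1:] / np.maximum(lake[:-1], 1.0)
    sed_ratio = sed[1:] / np.maximum(sed[:-1], 1.0)
    # +1 converts ratio index to the index of the later (post-event) scene.
    return np.flatnonzero((lake_ratio < lake_drop) & (sed_ratio > sed_gain)) + 1

# Hypothetical time series: the lake drains between scenes 2 and 3
# while a sediment fan appears in the flow path downstream.
lake = [400, 410, 405, 90, 95, 100]
sediment = [20, 25, 22, 80, 75, 70]
events = flag_glof(lake, sediment)
print(events)  # → [3]
```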
OT1_mputman_1: ASCII: All Sky observations of Galactic CII
NASA Astrophysics Data System (ADS)
Putman, M.
2010-07-01
The Milky Way and other galaxies require a significant source of ongoing star formation fuel to explain their star formation histories. A ubiquitous population of discrete, cold clouds has recently been discovered at the disk-halo interface of our Galaxy that could potentially provide this fuel. We propose to observe a small sample of these disk-halo clouds with HIFI to determine whether the level of [CII] emission detected suggests they represent the cooling of warm clouds at the interface between the star-forming disk and halo. Such cooling clouds are predicted by simulations of warm clouds moving into the disk-halo interface region. We target 5 clouds in this proposal for which we have high resolution HI maps and can observe the densest core of each cloud. The results of our observations will also be used to interpret the surprisingly high detections of [CII] for low HI column density clouds in the Galactic Plane by the GOT C+ Key Program, by extending the clouds probed to high-latitude environments.
Kerlikowske, Karla; Scott, Christopher G; Mahmoudzadeh, Amir P; Ma, Lin; Winham, Stacey; Jensen, Matthew R; Wu, Fang Fang; Malkov, Serghei; Pankratz, V Shane; Cummings, Steven R; Shepherd, John A; Brandt, Kathleen R; Miglioretti, Diana L; Vachon, Celine M
2018-06-05
In 30 states, women who have had screening mammography are informed of their breast density on the basis of Breast Imaging Reporting and Data System (BI-RADS) density categories estimated subjectively by radiologists. Variation in these clinical categories across and within radiologists has led to discussion about whether automated BI-RADS density should be reported instead. To determine whether breast cancer risk and detection are similar for automated and clinical BI-RADS density measures. Case-control. San Francisco Mammography Registry and Mayo Clinic. 1609 women with screen-detected cancer, 351 women with interval invasive cancer, and 4409 matched control participants. Automated and clinical BI-RADS density assessed on digital mammography at 2 time points from September 2006 to October 2014, interval and screen-detected breast cancer risk, and mammography sensitivity. Of women whose breast density was categorized by automated BI-RADS more than 6 months to 5 years before diagnosis, those with extremely dense breasts had a 5.65-fold higher interval cancer risk (95% CI, 3.33 to 9.60) and a 1.43-fold higher screen-detected risk (CI, 1.14 to 1.79) than those with scattered fibroglandular densities. Associations of interval and screen-detected cancer with clinical BI-RADS density were similar to those with automated BI-RADS density, regardless of whether density was measured more than 6 months to less than 2 years or 2 to 5 years before diagnosis. Automated and clinical BI-RADS density measures had similar discriminatory accuracy, which was higher for interval than screen-detected cancer (c-statistics: 0.70 vs. 0.62 [P < 0.001] and 0.72 vs. 0.62 [P < 0.001], respectively). Mammography sensitivity was similar for automated and clinical BI-RADS categories: fatty, 93% versus 92%; scattered fibroglandular densities, 90% versus 90%; heterogeneously dense, 82% versus 78%; and extremely dense, 63% versus 64%, respectively. 
Neither automated nor clinical BI-RADS density was assessed on tomosynthesis, an emerging breast screening method. Automated and clinical BI-RADS density similarly predict interval and screen-detected cancer risk, suggesting that either measure may be used to inform women of their breast density. National Cancer Institute.
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a platform service based on cloud computing that facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent, and the optimization of task-level scheduling in cloud workflow systems is a hot topic. Because the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize cost. However, both are prone to premature convergence during optimization and therefore cannot effectively reduce cost. To address this, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost value; it helps the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the two representative counterparts.
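A minimal sketch of the two CPSO ingredients described above, a logistic-map chaotic sequence for initialization and a cost-dependent adaptive inertia weight, applied to a toy cost function (the parameter values and update form are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def cpso_minimize(cost, dim, n_particles=20, iters=200, seed=1):
    """Minimal chaotic PSO sketch: positions are initialized from a
    logistic-map chaotic sequence (high randomness, good coverage),
    and each particle's inertia weight adapts with its relative cost
    to balance global and local exploration."""
    rng = np.random.default_rng(seed)
    # Logistic map x_{k+1} = 4 x_k (1 - x_k) generates the chaotic sequence.
    chaos = np.empty(n_particles * dim)
    x = 0.7
    for k in range(chaos.size):
        x = 4.0 * x * (1.0 - x)
        chaos[k] = x
    pos = chaos.reshape(n_particles, dim) * 10.0 - 5.0  # map to [-5, 5]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    for _ in range(iters):
        g = pbest[np.argmin(pbest_cost)]
        # Adaptive inertia: particles with high relative cost explore more.
        rel = (pbest_cost - pbest_cost.min()) / (np.ptp(pbest_cost) + 1e-12)
        w = (0.4 + 0.3 * rel)[:, None]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved] = pos[improved]
        pbest_cost[improved] = costs[improved]
    i = np.argmin(pbest_cost)
    return pbest[i], pbest_cost[i]

# Toy "cost" standing in for workflow execution cost (sphere function).
best, best_cost = cpso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
```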
Fully Automated Sunspot Detection and Classification Using SDO HMI Imagery in MATLAB
2014-03-27
Master's thesis presented to the Faculty, Department of Engineering Physics, Graduate School of Engineering and Management, Air Force Institute of Technology, by Gordon M. Spahr, BS, Second Lieutenant, USAF (AFIT-ENP-14-M-34). Distribution unlimited.
Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products
NASA Technical Reports Server (NTRS)
Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny
2015-01-01
To enhance cloud property retrievals, LaRC and CIRA each developed algorithms that combine properties obtained from the passive, active, and imaging sensors of the A-Train satellite constellation. When global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) product shows a larger low-level cloud fraction over the tropical ocean, while the CIRA-produced Radar-Lidar (RL) product shows a larger mid-level cloud fraction at high latitudes. The difference in low-level cloud fraction is due to different methods of filtering lidar-detected cloud layers, while the difference in mid-level clouds arises from the different priorities given to cloud boundaries from lidar and radar.
Phase-partitioning in mixed-phase clouds - An approach to characterize the entire vertical column
NASA Astrophysics Data System (ADS)
Kalesse, H.; Luke, E. P.; Seifert, P.
2017-12-01
The characterization of the entire vertical profile of phase-partitioning in mixed-phase clouds is a challenge which can be addressed by synergistic profiling measurements with ground-based polarization lidars and cloud radars. While lidars are sensitive to small particles and can thus detect supercooled liquid (SCL) layers, cloud radar returns are dominated by larger particles (like ice crystals). The maximum lidar observation height is determined by complete signal attenuation at a penetrated optical depth of about three. In contrast, cloud radars are able to penetrate multiple liquid layers and can thus be used to expand the identification of cloud phase to the entire vertical column beyond the lidar extinction height, if morphological features in the radar Doppler spectrum can be related to the existence of SCL. Relevant spectral signatures such as bimodalities and spectral skewness can be related to cloud phase by training a neural network appropriately in a supervised learning scheme, with lidar measurements functioning as supervisor. The neural network output (prediction of SCL location) derived using cloud radar Doppler spectra can be evaluated with several parameters such as liquid water path (LWP) detected by microwave radiometer (MWR) and (liquid) cloud base detected by ceilometer or Raman lidar. The technique has been previously tested on data from Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) instruments in Barrow, Alaska and is in this study utilized for observations from the Leipzig Aerosol and Cloud Remote Observations System (LACROS) during the Analysis of the Composition of Clouds with Extended Polarization Techniques (ACCEPT) field experiment in Cabauw, Netherlands in Fall 2014. Comparisons to supercooled-liquid layers as classified by CLOUDNET are provided.
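One of the spectral signatures mentioned, skewness, can be computed directly from a Doppler spectrum; the sketch below (with an invented two-mode spectrum) shows how a weak secondary liquid mode shifts the power-weighted skewness away from zero:

```python
import numpy as np

def spectral_skewness(velocity, power):
    """Power-weighted skewness of a radar Doppler spectrum. A value
    far from zero indicates an asymmetric (possibly bimodal) spectrum,
    one of the signatures used to infer supercooled liquid."""
    v = np.asarray(velocity, dtype=float)
    p = np.asarray(power, dtype=float)
    p = p / p.sum()
    mean = np.sum(p * v)
    var = np.sum(p * (v - mean) ** 2)
    return np.sum(p * (v - mean) ** 3) / var ** 1.5

# Synthetic spectrum: a broad ice mode plus a weak, slow liquid mode.
v = np.linspace(-4.0, 2.0, 121)                  # Doppler velocity bins (m/s)
ice = np.exp(-((v + 1.5) ** 2) / 0.5)            # falling ice crystals
liquid = 0.3 * np.exp(-((v - 0.2) ** 2) / 0.02)  # slow droplet mode
skew_ice = spectral_skewness(v, ice)             # near zero: symmetric
skew_mixed = spectral_skewness(v, ice + liquid)  # shifted by the liquid mode
```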
Optical property retrievals of subvisual cirrus clouds from OSIRIS limb-scatter measurements
NASA Astrophysics Data System (ADS)
Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.
2012-08-01
We present a technique for retrieving the optical properties of subvisual cirrus clouds detected by OSIRIS, a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Optical properties from an in-situ database are used to simulate scattering by cloud particles. With the appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is shown, and it is demonstrated that the retrieved extinction profile accurately models the measured in-cloud radiances from OSIRIS. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, the work described in this manuscript provides a very useful method for building a long-term global record of the properties of these clouds.
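MART itself is a standard iterative scheme; a minimal sketch on a toy limb-sounding geometry (the weight matrix and profile are invented, and this is not the paper's SASKTRAN-coupled implementation) looks like:

```python
import numpy as np

def mart(A, y, iters=200, relax=1.0):
    """Multiplicative algebraic reconstruction technique (MART) for
    y = A x with non-negative entries: each measurement multiplicatively
    corrects the unknowns it touches, keeping the solution positive
    (appropriate for extinction profiles)."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    x = np.ones(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred > 0:
                # Exponent scales the correction by each unknown's weight.
                x *= (y[i] / pred) ** (relax * A[i] / A[i].max())
    return x

# Toy limb geometry: each "line of sight" integrates extinction over
# the layers at and above its tangent layer (triangular weights).
A = np.triu(np.ones((4, 4)))
x_true = np.array([0.1, 0.4, 0.9, 0.3])  # invented extinction profile
y = A @ x_true                           # simulated measurements
x_ret = mart(A, y)                       # converges to x_true
```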
Qu, Wei-ping; Liu, Wen-qing; Liu, Jian-guo; Lu, Yi-huai; Zhu, Jun; Qin, Min; Liu, Cheng
2006-11-01
In satellite remote sensing, clouds act as interference and degrade data retrieval, so discerning cloud fields with high fidelity is a prerequisite for subsequent research. This paper presents a new method, rooted in the atmospheric radiation characteristics of cloud layers, in which the single-band brightness variance ratio is used to measure the relative intensity of cloud clutter and thereby delineate cloud fields rapidly and accurately. Formulae are given for the brightness variance ratio of satellite images, the image reflectance variance ratio, and the brightness temperature variance ratio of thermal infrared images, enabling cloud elimination to produce data free from cloud interference. Based on the varying penetration capability of different spectral bands, cloud penetration is evaluated objectively together with the factors that influence it. Finally, a multi-band data fusion task is completed using infrared imagery penetrating cirrus nothus. The reconstructed image data are of good quality and accurately reproduce the visible-band data covered by cloud fields; statistics confirm the consistency of waveband correlation with the image data after fusion.
NASA Technical Reports Server (NTRS)
Uttal, Taneil; Frisch, Shelby; Wang, Xuan-Ji; Key, Jeff; Schweiger, Axel; Sun-Mack, Sunny; Minnis, Patrick
2005-01-01
A one year comparison is made of mean monthly values of cloud fraction and cloud optical depth over Barrow, Alaska (71 deg 19.378 min North, 156 deg 36.934 min West) between 35 GHz radar-based retrievals, the TOVS Pathfinder Path-P product, the AVHRR APP-X product, and a MODIS based cloud retrieval product from the CERES-Team. The data sets represent largely disparate spatial and temporal scales; however, in this paper the focus is to provide a preliminary analysis of how the mean monthly values derived from these different data sets compare, and to determine how they can best be used separately, and in combination, to provide reliable estimates of long-term trends in changing cloud properties. The radar and satellite data sets described here incorporate Arctic-specific modifications that account for cloud detection challenges specific to the Arctic environment. The year 2000 was chosen for this initial comparison because the cloud radar data was particularly continuous and reliable that year, and all of the satellite retrievals of interest were also available for the year 2000. Cloud fraction was chosen as a comparison variable because accurate detection of cloud is the primary product necessary for any other cloud property retrievals. Cloud optical depth was additionally selected as it is likely the single cloud property most closely correlated to cloud influences on surface radiation budgets.
NASA Astrophysics Data System (ADS)
Lagrosas, N.; Gacal, G. F. B.; Kuze, H.
2017-12-01
Detection of nighttime cloud from Himawari-8 is implemented using the difference of digital numbers from bands 13 (10.4 µm) and 7 (3.9 µm). A digital number difference of -1.39×10^4 can be used as a threshold to separate clouds from clear-sky conditions. For ground-based observations over Chiba, a digital camera (Canon PowerShot A2300) takes images of the sky every 5 minutes at an exposure time of 5 s at the Center for Environmental Remote Sensing, Chiba University. From these images, cloud cover values are obtained using a threshold algorithm (Gacal et al., 2016). Ten-minute nighttime cloud cover values from the two datasets are compared and analyzed from 29 May to 05 June 2017 (20:00-03:00 JST). When compared with lidar data, the camera can detect thick high-level clouds up to 10 km. The results show that during clear-sky conditions (02-03 June), both camera and satellite cloud cover values show 0% cloud cover. During cloudy conditions (05-06 June), the camera shows almost 100% cloud cover while satellite cloud cover values range from 60 to 100%. These lower values can be attributed to the presence of low-level thin clouds (approximately 2 km above the ground) as observed by the National Institute for Environmental Studies lidar located inside Chiba University. This difference in cloud cover values shows that the camera can produce accurate cloud cover values for low-level clouds that are sometimes not detected by satellites. The opposite occurs when high-level clouds are present (01-02 June): derived satellite cloud cover is almost 100% throughout the night, while the ground-based camera shows values ranging from 10 to 100% over the same interval. The fluctuating values can be attributed to the presence of thin clouds located at around 6 km above the ground together with low-level clouds (approximately 1 km).
Since the camera relies on reflected city lights, it is possible that high-level thin clouds are observed by the satellite but not by the camera; such conditions also involve cloud layers that are not observed by one instrument or the other. The results of this study show that the two instruments can be used to correct each other and thereby provide better cloud cover values. These corrections depend on the height and thickness of the clouds; no correction is necessary when the sky is clear.
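The band-difference thresholding described for Himawari-8 can be sketched as a simple mask operation; the digital numbers below are synthetic, and the sign convention of the comparison is an assumption not stated in the text:

```python
import numpy as np

def night_cloud_mask(dn_b13, dn_b7, threshold=-1.39e4):
    """Nighttime cloud mask from Himawari-8 digital numbers: pixels
    whose band 13 (10.4 um) minus band 7 (3.9 um) difference exceeds
    the threshold are flagged as cloud. The threshold value follows
    the text; the comparison direction is assumed for illustration."""
    diff = np.asarray(dn_b13, dtype=float) - np.asarray(dn_b7, dtype=float)
    return diff > threshold

# Illustrative 2x2 scene: clear-sky pixels show a large negative
# band difference; cloudy pixels a smaller one.
b13 = np.array([[2.0e4, 2.0e4], [2.1e4, 2.1e4]])
b7  = np.array([[3.6e4, 2.4e4], [3.7e4, 2.5e4]])
mask = night_cloud_mask(b13, b7)
cloud_cover = mask.mean() * 100.0   # percent cloud cover
```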
Introduction and analysis of several FY3C-MWHTS cloud/rain screening methods
NASA Astrophysics Data System (ADS)
Li, Xiaoqing
2017-04-01
Data assimilation of satellite microwave sounders is very important for numerical weather prediction. Fengyun-3C (FY-3C), launched in September 2013, carries two such sounders: MWTS (MicroWave Temperature Sounder) and MWHTS (MicroWave Humidity and Temperature Sounder). These data must be quality-controlled before assimilation, and cloud/rain detection is one of the crucial steps. This paper introduces different cloud/rain detection methods based on MWHTS, VIRR (Visible and InfraRed Radiometer) and MWRI (Microwave Radiation Imager) observations. We designed 6 cloud/rain detection combinations and analyzed the effectiveness of these schemes. The difference between observations and model simulations for the FY-3C MWHTS channels was calculated as a parameter for analysis. Both RTTOV and CRTM were used to rapidly simulate radiances for the MWHTS channels.
Volcanic eruption detection with TOMS
NASA Technical Reports Server (NTRS)
Krueger, Arlin J.
1987-01-01
The Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) is designed for mapping of the atmospheric ozone distribution. Absorption by sulfur dioxide at the same ultraviolet spectral wavelengths makes it possible to observe and resolve the size of volcanic clouds. The sulfur dioxide absorption is discriminated from ozone and water clouds in the data processing by their spectral signatures. Thus, the sulfur dioxide can serve as a tracer which appears in volcanic eruption clouds because it is not present in other clouds. The detection limit with TOMS is close to the theoretical limit due to telemetry signal quantization of 1000 metric tons (5-sigma threshold) within the instrument field of view (50 by 50 km near the nadir). Requirements concerning the use of TOMS in detection of eruptions, geochemical cycles, and volcanic climatic effects are discussed.
Spatially explicit spectral analysis of point clouds and geospatial data
Buscombe, Daniel D.
2015-01-01
The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements is rarely considered, despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, and therefore readily incorporated into, and combined with, other data analysis tools and frameworks, with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing.
The analytical and computational structure of the toolbox is described, and its functionality illustrated with an example of high-resolution bathymetric point-cloud data collected with a multibeam echosounder.
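The core idea above, recovering both an amplitude (roughness) scale and a horizontal wavelength scale from gridded elevations via the frequency domain, can be sketched in a few lines of NumPy. This is an illustrative sketch, not PySESA's API: the function name, the mean-only detrend, and the peak-frequency wavelength estimate are all simplifying assumptions.

```python
import numpy as np

def roughness_and_wavelength(z, dx=1.0):
    """Estimate RMS amplitude (roughness) and the dominant horizontal
    wavelength of a gridded elevation patch from its 2-D power spectrum."""
    z = z - z.mean()                       # remove the mean (zeroth-order detrend)
    power = np.abs(np.fft.fft2(z)) ** 2    # 2-D periodogram
    fy = np.fft.fftfreq(z.shape[0], d=dx)  # spatial frequencies, cycles per unit
    fx = np.fft.fftfreq(z.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    freq = np.hypot(FX, FY)                # radial spatial frequency
    rms = np.sqrt(np.mean(z ** 2))         # vertical roughness scale
    nonzero = freq > 0                     # exclude the DC component
    f_peak = freq[nonzero][np.argmax(power[nonzero])]
    return rms, 1.0 / f_peak               # wavelength of the spectral peak

# synthetic surface: 0.5 m ripples with a 10 m wavelength on a 1 m grid
x = np.arange(64)
z = 0.5 * np.sin(2 * np.pi * x / 10.0)[None, :] * np.ones((64, 1))
rms, wavelength = roughness_and_wavelength(z, dx=1.0)
```

For the synthetic ripple field the recovered wavelength is close to the imposed 10 m, illustrating how the vertical and horizontal scales are obtained simultaneously from one spectrum.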
Comparison between SAGE II and ISCCP high-level clouds. 1: Global and zonal mean cloud amounts
NASA Technical Reports Server (NTRS)
Liao, Xiaohan; Rossow, William B.; Rind, David
1995-01-01
Global high-level clouds identified in Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation measurements for January and July in the period 1985 to 1990 are compared with near-nadir-looking observations from the International Satellite Cloud Climatology Project (ISCCP). Global and zonal mean high-level cloud amounts from the two data sets agree very well, if clouds with layer extinction coefficients of less than 0.008/km at 1.02 micrometers wavelength are removed from the SAGE II results and all detected clouds are interpreted to have an average horizontal size of about 75 km along the 200 km transmission path length of the SAGE II observations. The SAGE II results are much more sensitive to variations of assumed cloud size than to variations of detection threshold. The geographical distribution of cloud fractions shows good agreement, but systematic regional differences also indicate that the average cloud size varies somewhat among different climate regimes. The more sensitive SAGE II results show that about one third of all high-level clouds are missed by ISCCP but that these clouds have very low optical thicknesses (less than 0.1 at 0.6 micrometers wavelength). SAGE II sampling error in monthly zonal cloud fraction is shown to produce no bias, to be less than the intraseasonal natural variability, but to be comparable with the natural variability at longer time scales.
Low-Frequency Carbon Recombination Lines in the Orion Molecular Cloud Complex
NASA Astrophysics Data System (ADS)
Tremblay, Chenoa D.; Jordan, Christopher H.; Cunningham, Maria; Jones, Paul A.; Hurley-Walker, Natasha
2018-05-01
We detail tentative detections of low-frequency carbon radio recombination lines from within the Orion molecular cloud complex observed at 99-129 MHz. These tentative detections include one alpha transition and one beta transition over three locations, and lie within the diffuse regions of dust observed in the infrared at 100 μm, the Hα emission detected in the optical, and the synchrotron radiation observed in the radio. With these observations, we are able to study the radiation mechanism transition from collisionally pumped to radiatively pumped within the H ii regions of the Orion molecular cloud complex.
Newly detected molecules in dense interstellar clouds
NASA Astrophysics Data System (ADS)
Irvine, William M.; Avery, L. W.; Friberg, P.; Matthews, H. E.; Ziurys, L. M.
Several new interstellar molecules have been identified including C2S, C3S, C5H, C6H and (probably) HC2CHO in the cold, dark cloud TMC-1; and the discovery of the first interstellar phosphorus-containing molecule, PN, in the Orion "plateau" source. Further results include the observations of 13C3H2 and C3HD, and the first detection of HCOOH (formic acid) in a cold cloud.
Quantifying biodiversity using digital cameras and automated image analysis.
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
2009-04-01
Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a six-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into composite statistics, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered ways to simplify the process.
Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
[Automated analyzer of enzyme immunoassay].
Osawa, S
1995-09-01
Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points, such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.
ICE: An Automated Tool for Teaching Advanced C Programming
ERIC Educational Resources Information Center
Gonzalez, Ruben
2017-01-01
There are many difficulties with learning and teaching programming that can be alleviated with the use of software tools. Most of these tools have focused on the teaching of introductory programming concepts where commonly code fragments or small user programs are run in a sandbox or virtual machine, often in the cloud. These do not permit user…
Matthews, Stephen G; Miller, Amy L; Clapp, James; Plötz, Thomas; Kyriazakis, Ilias
2016-11-01
Early detection of health and welfare compromises in commercial piggeries is essential for timely intervention to enhance treatment success, reduce impact on welfare, and promote sustainable pig production. Behavioural changes that precede or accompany subclinical and clinical signs may have diagnostic value. Often referred to as sickness behaviour, this encompasses changes in feeding, drinking, and elimination behaviours, social behaviours, and locomotion and posture. Such subtle changes in behaviour are not easy to quantify and require lengthy observation input by staff, which is impractical on a commercial scale. Automated early-warning systems may provide an alternative by objectively measuring behaviour with sensors to automatically monitor and detect behavioural changes. This paper aims to: (1) review the quantifiable changes in behaviours with potential diagnostic value; (2) subsequently identify available sensors for measuring behaviours; and (3) describe the progress towards automating monitoring and detection, which may allow such behavioural changes to be captured, measured, and interpreted and thus lead to automation in commercial, housed piggeries. Multiple sensor modalities are available for automatic measurement and monitoring of behaviour, which require humans to actively identify behavioural changes. This has been demonstrated for the detection of small deviations in diurnal drinking, deviations in feeding behaviour, monitoring coughs and vocalisation, and monitoring thermal comfort, but not social behaviour. However, current progress is in the early stages of developing fully automated detection systems that do not require humans to identify behavioural changes; e.g., through automated alerts sent to mobile phones. Challenges for achieving automation are multifaceted and trade-offs are considered between health, welfare, and costs, between analysis of individuals and groups, and between generic and compromise-specific behaviours. 
Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis.
Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E
2018-04-15
Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. Availability: https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele. Contact: darrell.hurt@nih.gov.
Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric
2014-01-29
Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.
Shang, Huazhe; Letu, Husi; Nakajima, Takashi Y; Wang, Ziming; Ma, Run; Wang, Tianxing; Lei, Yonghui; Ji, Dabin; Li, Shenshen; Shi, Jiancheng
2018-01-18
Analysis of cloud cover and its diurnal variation over the Tibetan Plateau (TP) is highly reliant on satellite data; however, the accuracy of cloud detection from both polar-orbiting and geostationary satellites over this area remains unclear. The new-generation geostationary Himawari-8 satellite provides high-resolution spatial and temporal information about clouds over the Tibetan Plateau. In this study, the cloud detection of MODIS and AHI is investigated and validated against CALIPSO measurements. The false alarm rates of AHI and MODIS in cloud identification over the TP were 7.51% and 1.94%, respectively, and the cloud hit rates were 73.55% and 80.15%, respectively. Using hourly cloud-cover data from the Himawari-8 satellite, we found that at the monthly scale, cloud cover over the TP tends to increase throughout the day, with the minimum and maximum cloud fractions occurring at 10:00 and 18:00 local time. Due to the limited time resolution of polar-orbiting satellites, the underestimation of MODIS daytime average cloud cover is approximately 4.00% at the annual scale, with larger biases during spring (5.40%) and winter (5.90%).
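Hit rates and false-alarm rates of the kind quoted above come from a 2×2 contingency table of collocated satellite and reference (e.g. CALIPSO) cloud flags. A minimal sketch using the conventional definitions (the abstract does not spell out its exact formulas, so these are the standard ones, and the toy data are invented):

```python
def cloud_detection_scores(satellite, reference):
    """Hit rate and false-alarm rate of a binary cloud mask against a
    reference mask (e.g. collocated CALIPSO profiles)."""
    hits = misses = false_alarms = correct_negatives = 0
    for sat, ref in zip(satellite, reference):
        if ref and sat:
            hits += 1                 # both cloudy
        elif ref and not sat:
            misses += 1               # reference cloudy, mask clear
        elif sat:
            false_alarms += 1         # mask cloudy, reference clear
        else:
            correct_negatives += 1    # both clear
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate

# toy collocation: 1 = cloudy, 0 = clear
hit_rate, far = cloud_detection_scores([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0])
```

Here two of the three reference-cloudy cases are detected (hit rate 2/3) and one of the three reference-clear cases is falsely flagged (false-alarm rate 1/3).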
A composite large-scale CO survey at high galactic latitudes in the second quadrant
NASA Technical Reports Server (NTRS)
Heithausen, A.; Stacy, J. G.; De Vries, H. W.; Mebold, U.; Thaddeus, P.
1993-01-01
Surveys undertaken in the second quadrant of the Galaxy with the CfA 1.2 m telescope have been combined to produce a map covering about 620 sq deg in the 2.6 mm CO(J = 1 - 0) line at high galactic latitudes. There is CO emission from molecular 'cirrus' clouds in about 13 percent of the region surveyed. The CO clouds are grouped into three major cloud complexes with 29 individual members. All clouds are associated with infrared emission at 100 microns, although there is no one-to-one correlation between the corresponding intensities. CO emission is detected in all bright and dark Lynds' nebulae cataloged in that region; however, not all CO clouds are visible on optical photographs as reflection or absorption features. The clouds are probably local. At an adopted distance of 240 pc, cloud sizes range from 0.1 to 30 pc and cloud masses from 1 to 1600 solar masses. The molecular cirrus clouds contribute between 0.4 and 0.8 solar masses per square parsec to the surface density of molecular gas in the galactic plane. Only 26 percent of the 'infrared-excess clouds' in the area surveyed actually show CO, and about two thirds of the clouds detected in CO do not show an infrared excess.
NASA Astrophysics Data System (ADS)
Whelan, Gillian M.; Cawkwell, Fiona; Mannstein, Hermann; Minnis, Patrick
2010-12-01
Contrails, or 'condensation trails', produced in the wake of jet aircraft have been found to have a small but significant global net climate-warming effect [1]. When atmospheric conditions are favorable (i.e. when ambient atmospheric humidity is high and temperature is below a threshold value of typically less than -40 °C), contrails can persist for several hours, grow to become several kilometers long, and can also trigger additional cirrus-cloud formation as they spread, which can further impact climate. Due to Ireland's proximity to the North Atlantic Flight Corridor, large volumes of high-altitude overflights cross Ireland daily. Contrails are essentially artificial linear ice clouds at a lower temperature than the surrounding atmosphere and so are visible in 1 km satellite imagery at the 11 and 12 μm wavelengths, but are better detected in the temperature difference image between these two thermal channels. An automated Contrail Detection Algorithm (CDA) is applied to AATSR thermal imagery over Ireland, and the percentage contrail coverage of each scene determined. Preliminary results, based on 2008 morning and evening AATSR overpasses, show a similar annual average contrail coverage when present of 0.25% and 0.19% respectively, even though air-traffic density is typically several times higher during the morning overpasses. Cases of excessive contrail coverage, of up to 2.06%, have been observed in combination with extensive cirrus coverage over Ireland. Results from meteorological data indicate more highly favorable atmospheric conditions for contrail formation and persistence in the 00h00 and 06h00 radiosonde ascents, which corresponds to a night-time peak in high-altitude flights over Ireland. Furthermore, exceptionally thick contrail-susceptible atmospheric layers are found in conjunction with cases of excessive satellite-derived contrail coverage.
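The split-window principle the abstract relies on, that thin ice clouds such as contrails stand out in the 11 μm minus 12 μm brightness-temperature difference, can be sketched as a per-pixel threshold test. This is only the first stage of an automated CDA (the real algorithm adds line-shape filtering to isolate linear features); the function name, the 1 K threshold, and the toy scene are invented for illustration:

```python
import numpy as np

def btd_mask(t11, t12, threshold=1.0):
    """Flag pixels whose 11-minus-12 micron brightness-temperature
    difference (in kelvin) exceeds a threshold, the standard first step
    in split-window detection of contrails and thin cirrus."""
    return (np.asarray(t11) - np.asarray(t12)) > threshold

# toy 2x2 brightness-temperature scene (K): only the first pixel is
# contrail-like, with a large 11-12 um channel difference
t11 = np.array([[290.0, 288.0], [285.0, 284.0]])
t12 = np.array([[288.0, 287.6], [284.8, 283.9]])
mask = btd_mask(t11, t12, threshold=1.0)
```

In the toy scene only the upper-left pixel, with a 2 K difference, survives the test; clear-sky and opaque-cloud pixels have near-zero differences and are rejected.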
Characterizing the Frequency and Elevation of Rapid Drainage Events in West Greenland
NASA Astrophysics Data System (ADS)
Cooley, S.; Christoffersen, P.
2016-12-01
Rapid drainage of supraglacial lakes on the Greenland Ice Sheet is critical for the establishment of surface-to-bed hydrologic connections and the subsequent transfer of water from surface to bed. Yet, estimates of the number and spatial distribution of rapidly draining lakes vary widely due to limitations in the temporal frequency of image collection and obscuration by cloud. So far, no study has assessed the impact of these observation biases. In this study, we examine the frequency and elevation of rapidly draining lakes in central West Greenland, from 68°N to 72.6°N, and we make a robust statistical analysis to estimate more accurately the likelihood of lakes draining rapidly. Using MODIS imagery and a fully automated lake detection method, we map more than 500 supraglacial lakes per year over a 63,000 km2 study area from 2000-2015. Through testing four different definitions of rapidly draining lakes from previously published studies, we find that the proportion of rapidly draining lakes varies from 3% to 38%. Logistic regression between rapid drainage events and image sampling frequency demonstrates that the number of rapid drainage events is strongly dependent on cloud-free observation percentage. We then develop three new drainage criteria and apply an observation bias correction that suggests a true rapid drainage probability between 36% and 45%, considerably higher than previous studies without bias assessment have reported. We find that rapidly draining lakes are on average larger and disappear earlier than slowly draining lakes, and we observe no elevation differences for the lakes detected as rapidly draining. We conclude a) that methodological problems in rapid drainage research caused by observation bias and varying detection methods have obscured large-scale rapid drainage characteristics and b) that the lack of evidence for an elevation limit on rapid drainage suggests surface-to-bed hydrologic connections may continue to propagate inland as the climate warms.
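The bias-correction logic rests on a logistic regression of the rapid-drainage flag on cloud-free observation fraction, which can then be extrapolated toward perfect sampling. A from-scratch sketch under that reading of the abstract; the data, learning rate, and iteration count are invented for illustration and are not the study's actual model:

```python
import math

def fit_logistic(x, y, lr=0.5, n_iter=20000):
    """Plain gradient-descent logistic regression of a binary outcome
    (rapid drainage detected) on one predictor (cloud-free observation
    fraction); returns intercept and slope."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # predicted probability
            g0 += p - yi                                  # gradient w.r.t. intercept
            g1 += (p - yi) * xi                           # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# invented example: drainage detections are more likely for well-observed lakes
obs_frac = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
drained = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(obs_frac, drained)
# extrapolate to perfect (fully cloud-free) sampling, obs_frac = 1.0
p_full = 1.0 / (1.0 + math.exp(-(b0 + b1)))
```

The positive slope captures the observation bias (poorly observed lakes are under-classified as rapidly draining), and evaluating the fitted curve at an observation fraction of 1.0 gives the bias-corrected drainage probability.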
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Ben-David, Avishai; Davidson, Charles E; Embury, Janon F
2008-11-01
We introduced a two-dimensional radiative transfer model for aerosols in the thermal infrared [Appl. Opt. 45, 6860-6875 (2006)]. In that paper we superimposed two orthogonal plane-parallel layers to compute the radiance due to a two-dimensional (2D) rectangular aerosol cloud. In this paper we revisit the model and correct an error in the interaction of the two layers. We derive new expressions relating to the signal content of the radiance from an aerosol cloud based on the concept of five directional thermal contrasts: four for the 2D diffuse radiance and one for direct radiance along the line of sight. The new expressions give additional insight on the radiative transfer processes within the cloud. Simulations for Bacillus subtilis var. niger (BG) bioaerosol and dustlike kaolin aerosol clouds are compared and contrasted for two geometries: an airborne sensor looking down and a ground-based sensor looking up. Simulation results suggest that aerosol cloud detection from an airborne platform may be more challenging than for a ground-based sensor and that the detection of an aerosol cloud in emission mode (negative direct thermal contrast) is not the same as the detection of an aerosol cloud in absorption mode (positive direct thermal contrast).
Effects of Automation Types on Air Traffic Controller Situation Awareness and Performance
NASA Technical Reports Server (NTRS)
Sethumadhavan, A.
2009-01-01
The Joint Planning and Development Office has proposed the introduction of automated systems to help air traffic controllers handle the increasing volume of air traffic in the next two decades (JPDO, 2007). Because fully automated systems leave operators out of the decision-making loop (e.g., Billings, 1991), it is important to determine the right level and type of automation that will keep air traffic controllers in the loop. This study examined the differences in the situation awareness (SA) and collision detection performance of individuals when they worked with information acquisition, information analysis, decision and action selection and action implementation automation to control air traffic (Parasuraman, Sheridan, & Wickens, 2000). When the automation was unreliable, the time taken to detect an upcoming collision was significantly longer for all the automation types compared with the information acquisition automation. This poor performance following automation failure was mediated by SA, with lower SA yielding poor performance. Thus, the costs associated with automation failure are greater when automation is applied to higher order stages of information processing. Results have practical implications for automation design and development of SA training programs.
Schmidt, Jürgen; Laarousi, Rihab; Stolzmann, Wolfgang; Karrer-Gauß, Katja
2018-06-01
In this article, we examine the performance of different eye blink detection algorithms under various constraints. The goal of the present study was to evaluate the performance of an electrooculogram- and camera-based blink detection process in both manually and conditionally automated driving phases. A further comparison between alert and drowsy drivers was performed in order to evaluate the impact of drowsiness on the performance of blink detection algorithms in both driving modes. Data snippets from 14 monotonous manually driven sessions (mean 2 h 46 min) and 16 monotonous conditionally automated driven sessions (mean 2 h 45 min) were used. In addition to comparing two data-sampling frequencies for the electrooculogram measures (50 vs. 25 Hz) and four different signal-processing algorithms for the camera videos, we compared the blink detection performance of 24 reference groups. The analysis of the videos was based on very detailed definitions of eyelid closure events. The correct detection rates for the alert and manual driving phases (maximum 94%) decreased significantly in the drowsy (minus 2% or more) and conditionally automated (minus 9% or more) phases. Blinking behavior is therefore significantly impacted by drowsiness as well as by automated driving, resulting in less accurate blink detection.
Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon
2016-06-01
The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
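The alpha-delta ratio that both classical trending methods monitor is simply a ratio of EEG band powers. A minimal periodogram-based sketch; the 8-13 Hz and 1-4 Hz band edges are the conventional choices rather than necessarily those of the cited studies, and the synthetic signals are invented:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of a signal within [f_lo, f_hi) Hz from its periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= f_lo) & (freqs < f_hi)].sum()

def alpha_delta_ratio(signal, fs):
    """Alpha (8-13 Hz) over delta (1-4 Hz) power: the quantity whose
    sustained decrease is trended for delayed cerebral ischemia."""
    return band_power(signal, fs, 8, 13) / band_power(signal, fs, 1, 4)

# synthetic 10 s EEG segments at 100 Hz: alpha-dominated vs delta-dominated
fs = 100
t = np.arange(0, 10, 1.0 / fs)
awake = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 2 * t)
slowed = 0.1 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 2 * t)
adr_awake = alpha_delta_ratio(awake, fs)
adr_slowed = alpha_delta_ratio(slowed, fs)
```

A trending detector would compute this ratio per epoch and per channel and alarm on a sustained drop relative to the patient's own baseline, which is where the thresholding choices that drive the sensitivity/specificity trade-off enter.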
Repeated Induction of Inattentional Blindness in a Simulated Aviation Environment
NASA Technical Reports Server (NTRS)
Kennedy, Kellie D.; Stephens, Chad L.; Williams, Ralph A.; Schutte, Paul C.
2017-01-01
The study reported herein is a subset of a larger investigation on the role of automation in the context of the flight deck and used a fixed-base, human-in-the-loop simulator. This paper explored the relationship between automation and inattentional blindness (IB) occurrences in a repeated induction paradigm using two types of runway incursions. The critical stimuli for both runway incursions were directly relevant to primary task performance. Sixty non-pilot participants performed the final five minutes of a landing scenario twice in one of three automation conditions: full automation (FA), partial automation (PA), and no automation (NA). The first induction resulted in a 70 percent (42 of 60) detection failure rate, with those in the PA condition significantly more likely to detect the incursion than those in the FA or NA conditions. The second induction yielded a 50 percent detection failure rate. Although detection improved (detection failure rates declined) in all conditions, those in the FA condition demonstrated the greatest improvement, with doubled detection rates. Detection behavior in the first trial did not preclude a failed detection in the second induction. Those in the FA condition showed a greater improvement in group membership (IB vs. detection) than those in the NA condition, and rated the Mental Demand and Effort subscales of the NASA-TLX (NASA Task Load Index) significantly higher for Time 2 compared with Time 1. Participants in the FA condition used the experience of IB exposure to improve task performance whereas those in the NA condition did not, indicating the availability and reallocation of attentional resources in the FA condition. These findings support the role of engagement in operational attention detriment and the consideration of attentional failure causation to determine appropriate mitigation strategies.
DSCOVR/EPIC observations of SO2 reveal dynamics of young volcanic eruption clouds
NASA Astrophysics Data System (ADS)
Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Bhartia, P. K.; Prata, F. J.
2017-12-01
Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) and infrared (IR) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these observations from low Earth orbit (LEO) is poor temporal resolution (once per day at low latitudes). Furthermore, most currently operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes, limiting our ability to elucidate processes in fresh, rapidly evolving volcanic eruption clouds. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, from sunrise to sunset, offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and IR sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), we demonstrate how the increased observation frequency afforded by DSCOVR/EPIC permits more timely volcanic eruption detection and novel analyses of the temporal evolution of volcanic clouds. Although EPIC has detected several mid- to high-latitude volcanic eruptions since launch, we focus on recent eruptions of Bogoslof volcano (Aleutian Islands, AK, USA). A series of EPIC exposures from May 28-29, 2017, uniquely captures the evolution of SO2 mass in a young Bogoslof eruption cloud, showing separation of SO2- and ice-rich regions of the cloud. 
We show how analyses of these sequences of EPIC SO2 data can elucidate poorly understood processes in transient eruption clouds, such as the relative roles of H2S oxidation and ice scavenging in modifying volcanic SO2 emissions. Detection of these relatively small events also proves EPIC's ability to provide timely detection of volcanic clouds in the upper troposphere and lower stratosphere.
Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8
NASA Astrophysics Data System (ADS)
Joshi, P.
2015-12-01
The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observations of ground features. In an earlier study, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on that harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients by fitting models to annual training data. The resulting time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama, in Landsat 8 UTM zones 17 and 16 respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. Because the method operates multi-temporally and can recreate the multi-temporal database of images using only the coefficients of the Fourier regression, it is largely storage- and time-efficient.
The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
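The core of the approach (harmonic fit, residuals, control-chart limit) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the one-sided limit (clouds depress NDVI), the default 0.5σ control limit, and the toy NDVI series are all assumptions.

```python
import numpy as np

def harmonic_design(doy, order=2, period=365.25):
    """Design matrix for a harmonic (Fourier) regression on day-of-year."""
    cols = [np.ones_like(doy, dtype=float)]
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * doy / period))
        cols.append(np.cos(2 * np.pi * k * doy / period))
    return np.column_stack(cols)

def flag_clouds(doy, ndvi, order=2, limit_factor=0.5):
    """Fit harmonics to an NDVI time series and flag X-bar chart violations.

    Points whose residual falls more than limit_factor * sigma below the
    fitted curve are flagged as cloud candidates (clouds depress NDVI).
    """
    X = harmonic_design(doy, order)
    beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    resid = ndvi - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    return resid < -limit_factor * sigma  # one-sided test

# toy series: clean seasonal NDVI with two cloud-contaminated dates
doy = np.arange(0, 365, 16, dtype=float)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * doy / 365.25)
ndvi[5] -= 0.4   # simulated cloud hit
ndvi[14] -= 0.5  # simulated cloud hit
mask = flag_clouds(doy, ndvi)
print(mask.nonzero()[0])
```

In the full algorithm the same residual test would be run per pixel on NDVI, NDSI and HOT, with the seasonal behaviour of each index deciding how the flags are combined.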
Automated Corrosion Detection Program
2001-10-01
More detailed explanations of the methodology development can be found in Hidden Corrosion Detection Technology Assessment, a paper presented at ... Detection Program, a paper presented at the Fourth Joint DoD/FAA/NASA Conference on Aging Aircraft, 2000. AS&M PULSE. The PULSE system, developed ... selection can be found in The Evaluation of Hidden Corrosion Detection Technologies on the Automated Corrosion Detection Program, a paper presented ...
Speeding Clouds May Reveal Invisible Black Holes
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-07-01
Several small, speeding clouds have been discovered at the center of our galaxy. A new study suggests that these unusual objects may reveal the lurking presence of inactive black holes.

Peculiar Clouds

[Figure: a) velocity-integrated intensity map showing the location of the two high-velocity compact clouds, HCN–0.009–0.044 and HCN–0.085–0.094, in the context of larger molecular clouds; b) and c) latitude-velocity and longitude-velocity maps for HCN–0.009–0.044 and HCN–0.085–0.094, respectively; d) and e) spectra for the two compact clouds. Takekawa et al. 2017]

Sgr A*, the supermassive black hole marking the center of our galaxy, is surrounded by a region roughly 650 light-years across known as the Central Molecular Zone. This area at the heart of our galaxy is filled with large amounts of warm, dense molecular gas with a complex distribution and turbulent kinematics. Several peculiar gas clouds have been discovered within the Central Molecular Zone within the past two decades. These clouds, dubbed high-velocity compact clouds, are characterized by their compact sizes and extremely broad velocity widths. What created this mysterious population of energetic clouds? The recent discovery of two new high-velocity compact clouds, reported in a paper led by Shunya Takekawa (Keio University, Japan), may help answer this question.

Two More to the Count

Using the James Clerk Maxwell Telescope in Hawaii, Takekawa and collaborators detected the small clouds near the circumnuclear disk at the centermost part of our galaxy. These two clouds have velocity spreads of -80 to -20 km/s and -80 to 0 km/s and compact sizes of just over 1 light-year.
The clouds' similar appearances and physical properties suggest that they may both have been formed by the same process. Takekawa and collaborators explore and discard several possible origins for these clouds, such as outflows from massive protostars (no massive, luminous stars have been detected affiliated with these clouds), interaction with supernova remnants (no supernova remnants have been detected toward the clouds), and cloud–cloud collisions (such collisions leave other signs, like cavities in the parent cloud, which are not detected here).

[Figure: masses and velocities of black holes that could create the two high-velocity compact clouds fall above the red and blue lines. Takekawa et al. 2017]

Revealed on the Plunge

As an alternative explanation, Takekawa and collaborators propose that these two small, speeding clouds were each created when a massive compact object plunged into a nearby molecular cloud. Since we don't see any luminous stellar counterparts to the high-velocity compact clouds, this suggests that the responsible objects were invisible black holes. As each black hole tore through a molecular cloud, it dragged some of the cloud's gas along behind it to form the high-velocity compact cloud. Does this explanation make sense statistically? The authors point out that the number of black holes predicted to silently lurk in the central 30 light-years of the Milky Way is around 10,000. This makes it entirely plausible that we could have caught sight of two of them as they revealed their presence while plunging through molecular clouds. If the authors' interpretation is correct, then high-velocity compact clouds provide an excellent opportunity: we can search for these speeding bodies to potentially discover inactive black holes that would otherwise go undetected.

Citation: Shunya Takekawa et al. 2017 ApJL 843 L11. doi:10.3847/2041-8213/aa79ee
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
2012-03-19
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. 
An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2017-02-01
Acquisition of high density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allows the detection and isolation of movements directly from the point clouds, yielding volume computations whose accuracy depends only on the actual registered distance between points. We demonstrate that these estimates are more conservative than volumes computed by the traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
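To illustrate the clustering step, here is a minimal, self-contained DBSCAN applied to a synthetic 3-D cloud; the study would have used a production implementation, and the `eps`/`min_pts` values and toy "change blobs" below are arbitrary assumptions.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=4):
    """Minimal DBSCAN: label each point with a cluster id; -1 marks noise."""
    n = len(points)
    # pairwise distances (fine for small clouds; use a k-d tree for large ones)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster          # grow a new cluster from core point i
        stack = list(neighbors[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])  # expand through core points
        cluster += 1
    return labels

# two dense "movement" blobs plus sparse noise, mimicking change regions
rng = np.random.default_rng(0)
blob1 = rng.normal([0, 0, 0], 0.2, (40, 3))
blob2 = rng.normal([3, 3, 0], 0.2, (40, 3))
noise = rng.uniform(-1, 4, (5, 3))
pts = np.vstack([blob1, blob2, noise])
labels = dbscan(pts, eps=0.5, min_pts=5)
print(len(set(labels.tolist()) - {-1}))  # number of detected clusters
```

Each recovered cluster corresponds to a coherent moving region whose volume can then be estimated directly from its member points.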
Retrieval of subvisual cirrus cloud optical thickness from limb-scatter measurements
NASA Astrophysics Data System (ADS)
Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.
2013-01-01
We present a technique for estimating the optical thickness of subvisual cirrus clouds detected by OSIRIS (Optical Spectrograph and Infrared Imaging System), a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Cross-sections and phase functions from an in situ database are used to simulate scattering by cloud particles. With the appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is examined; the retrieved extinction profile models the measured in-cloud radiances from OSIRIS well, and the retrieved optical thickness is most sensitive to the assumed effective cloud particle size. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, the method described in this manuscript provides a very useful basis for a long-term global record of the properties of these clouds.
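The reconstruction step can be illustrated with a simplified multiplicative update in the spirit of MART (this is not the exact published scheme): positivity of the extinction profile is preserved automatically because every update is a multiplication. The toy triangular path-length matrix, which mimics tangent rays traversing all layers above their tangent height, is purely illustrative.

```python
import numpy as np

def mart(A, y, n_iter=500, x0=None):
    """Simultaneous multiplicative update solving y ≈ A x for non-negative x.

    Multiplicative corrections keep x positive at every iteration, which is
    convenient when x is a physical extinction profile.
    """
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float).copy()
    colsum = A.T @ np.ones(m)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / modeled radiance
        x *= (A.T @ ratio) / np.maximum(colsum, 1e-12)
    return x

# toy limb geometry: tangent ray i traverses layers j >= i (unit path lengths)
n = 6
A = np.triu(np.ones((n, n)))
x_true = np.array([0.2, 0.1, 0.5, 1.0, 0.4, 0.05])  # invented extinction profile
y = A @ x_true
x_hat = mart(A, y)
print(np.round(x_hat, 2))
```

With consistent, strictly positive data the iteration converges to a profile that reproduces the measurements.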
Ten Years of Cloud Optical and Microphysical Retrievals from MODIS
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana
2010-01-01
The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look-up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; and inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path, derived for three error sources, including the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250 m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 µm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).
Global Analysis of Aerosol Properties Above Clouds
NASA Technical Reports Server (NTRS)
Waquet, F.; Peers, F.; Ducos, F.; Goloub, P.; Platnick, S. E.; Riedi, J.; Tanre, D.; Thieuleux, F.
2013-01-01
The seasonal and spatial variability of Aerosol Above Cloud (AAC) properties is derived from passive satellite data for the year 2008. A significant amount of aerosol is transported above liquid water clouds on the global scale. For particles in the fine mode (i.e., radius smaller than 0.3 µm), including both clear-sky and AAC retrievals increases the global mean aerosol optical thickness by 25% (±6%). The two main regions with man-made AAC are the tropical Southeast Atlantic, for biomass burning aerosols, and the North Pacific, mainly for pollutants. Man-made AAC are also detected over the Arctic during the spring. Mineral dust particles are detected above clouds within the so-called dust belt region (5-40 N). AAC may cause a warming effect and bias the retrieval of cloud properties. This study will thus help to better quantify the impacts of aerosols on clouds and climate.
Concept, Simulation, and Instrumentation for Radiometric Inflight Icing Detection
NASA Technical Reports Server (NTRS)
Ryerson, Charles; Koenig, George G.; Reehorst, Andrew L.; Scott, Forrest R.
2009-01-01
The multi-agency Flight in Icing Remote Sensing Team (FIRST), a consortium of the National Aeronautics and Space Administration (NASA), the Federal Aviation Administration (FAA), the National Center for Atmospheric Research (NCAR), the National Oceanographic and Atmospheric Administration (NOAA), and the Army Corps of Engineers (USACE), has developed technologies for remotely detecting hazardous inflight icing conditions. The USACE Cold Regions Research and Engineering Laboratory (CRREL) assessed the potential of onboard passive microwave radiometers for remotely detecting icing conditions ahead of aircraft. The dual-wavelength system differences the brightness temperatures of space and clouds, with greater differences potentially indicating closer clouds and higher cloud liquid water content (LWC). The Air Force RADiative TRANsfer model (RADTRAN) was enhanced to assess the flight track sensing concept, and a 'flying' RADTRAN was developed to simulate a radiometer system flying through simulated clouds. Neural network techniques were developed to invert brightness temperatures and obtain integrated cloud liquid water. In addition, a dual-wavelength Direct-Detection Polarimeter Radiometer (DDPR) system was built for detecting hazardous drizzle drops. This paper reviews technology development to date and addresses initial polarimeter performance.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys of buildings with irregular geometry, such as historical buildings. The procedure aims to solve the problems connected with generating finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
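The voxel-discretization idea can be sketched as an occupancy grid built from the point cloud. This is a schematic reduction of the published procedure (which works section by section and handles inner/outer surfaces); the wall-like toy scan and the 0.25 m voxel size are invented for illustration.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map a point cloud onto an occupancy grid of cubic voxels.

    Returns the boolean occupancy array and the grid origin -- the raw
    ingredients for a voxel-based finite element mesh, where each occupied
    voxel becomes one solid element.
    """
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[tuple(idx.T)] = True
    return grid, origin

# toy "wall" scan: points sampled on two parallel faces 0.3 m apart
rng = np.random.default_rng(1)
face = rng.uniform(0, 2, (500, 2))          # (x, z) positions on each face
pts = np.vstack([
    np.column_stack([face[:, 0], np.zeros(500), face[:, 1]]),
    np.column_stack([face[:, 0], np.full(500, 0.3), face[:, 1]]),
])
grid, origin = voxelize(pts, voxel_size=0.25)
print(grid.shape, int(grid.sum()))
```

In a finite element workflow, each `True` voxel would be exported as a hexahedral element anchored at `origin + idx * voxel_size`.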
NASA Astrophysics Data System (ADS)
Lee, Kyeong-sang; Choi, Sungwon; Seo, Minji; Lee, Chang suk; Seong, Noh-hun; Han, Kyung-Soo
2016-10-01
Snow cover is the biggest single component of the cryosphere. Snow covers approximately 50% of the ground in the Northern Hemisphere in the winter season and is one of the climate factors affecting Earth's energy budget, because it has a higher reflectance than other land-cover types. Snow cover also plays an important role in hydrological modeling and water resource management; accurate detection of snow cover is therefore essential for regional water resource management. Snow cover detection using satellite-based data has advantages such as wide spatial coverage and periodic time-series observations. In snow cover detection from satellite data, the discrimination of snow and cloud is very important: misclassified cloud and snow pixels directly introduce errors into satellite-based surface products. However, classification of snow and cloud is difficult because they have similar optical characteristics, both being composed of water or ice. Cloud and snow nevertheless differ in reflectance in the 1.5-1.7 μm range, because cloud has a lower grain size and moisture content than snow, so cloud and snow show different patterns of reflectance change with wavelength. Therefore, in this study, we present an algorithm for classifying snow cover and cloud from satellite data using the Dynamic Time Warping (DTW) method, a pattern analysis technique commonly used in, e.g., speech and fingerprint recognition, together with a reflectance spectral library of snow and cloud. The spectral library is constructed in advance using MOD021KM (MODIS Level 1B swath, 1 km) data in six channels: 3 (0.466 μm), 4 (0.554 μm), 1 (0.647 μm), 2 (0.857 μm), 26 (1.382 μm) and 6 (1.629 μm). We validate our results using MODIS RGB imagery and the MOD10_L2 swath snow cover product, using PA (Producer's Accuracy), UA (User's Accuracy) and CI (Comparison Index) as validation criteria.
Our algorithm detects snow cover in several regions that are not flagged as snow in MOD10_L2 but appear as snow in the MODIS RGB imagery. These results can improve the accuracy of other surface products, such as land surface reflectance and land surface emissivity, and can also serve as input data for hydrological modeling.
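The DTW matching at the heart of the classifier can be sketched as follows. The six-band spectra below are invented, illustrative values, not the authors' spectral library; the key physical contrast (snow dark, cloud bright near 1.6 μm) is preserved.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two reflectance spectra."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# library spectra over the six bands (0.466 ... 1.629 um); values are
# illustrative: snow is bright in the visible but dark at 1.6 um,
# while cloud stays bright at 1.6 um.
snow_ref  = np.array([0.90, 0.88, 0.85, 0.80, 0.10, 0.05])
cloud_ref = np.array([0.75, 0.74, 0.73, 0.70, 0.45, 0.55])

pixel = np.array([0.85, 0.84, 0.80, 0.78, 0.12, 0.08])  # unknown pixel
label = ("snow" if dtw_distance(pixel, snow_ref) < dtw_distance(pixel, cloud_ref)
         else "cloud")
print(label)  # -> snow
```

Each observed pixel spectrum is assigned the library class with the smaller warping distance; DTW tolerates band-to-band shifts better than a plain Euclidean comparison.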
A robust threshold-based cloud mask for the HRV channel of MSG SEVIRI
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-03-01
A robust threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the METEOSAT SEVIRI instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used to select suitable thresholds and thereby ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures which cannot be detected by the low-resolution channels. We find it advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the suitability of the different spectral channels, and of the HRV channel in particular, for threshold-based cloud detection is investigated. Case studies demonstrate the behaviour of the mask for various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test dataset, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction.
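The relative-threshold idea can be illustrated in a few lines. The offset value and toy reflectances below are assumptions, not the paper's tuned, regionally adapted thresholds; the point is only that the threshold is applied per pixel relative to a clear-sky composite rather than as a single global value.

```python
import numpy as np

def hrv_cloud_mask(reflectance, clearsky_composite, delta=0.05):
    """Flag a pixel as cloudy when its HRV reflectance exceeds the clear-sky
    composite for that pixel by more than a (regionally tunable) offset."""
    return reflectance > clearsky_composite + delta

# toy scene: dark surface in the west, a bright surface column in the east,
# and a broken cloud over the dark surface
clear = np.array([[0.10, 0.10, 0.30],
                  [0.10, 0.10, 0.30]])
scene = np.array([[0.40, 0.11, 0.31],
                  [0.35, 0.09, 0.30]])
mask = hrv_cloud_mask(scene, clear)
print(mask.astype(int))
```

Because the composite already encodes the bright eastern surface, only the genuinely cloudy western pixels are flagged, which a single global threshold could not achieve.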
Temporally rendered automatic cloud extraction (TRACE) system
NASA Astrophysics Data System (ADS)
Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.
1999-10-01
Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT, by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and the 3D fast Fourier transform as its primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows; the PC-Windows environment was chosen for portability, to give TRACE maximum flexibility in its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.
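A minimal sketch of dynamic background subtraction, one of TRACE's two primary methods (the 3D FFT stage is omitted here). The adaptation rate, threshold, and synthetic frames are all assumptions made for illustration, not TRACE parameters.

```python
import numpy as np

def extract_cloud(frames, alpha=0.1, thresh=20):
    """Running-average background subtraction over a grayscale sequence.

    Returns the foreground (cloud) mask of the last frame. alpha controls
    how quickly the background model adapts to slow scene changes, so a
    fast-moving smoke cloud stays in the foreground while gradual lighting
    drift is absorbed into the background.
    """
    bg = frames[0].astype(float)
    for f in frames[1:]:
        mask = np.abs(f.astype(float) - bg) > thresh  # per-pixel change test
        bg = (1 - alpha) * bg + alpha * f             # adapt background
    return mask

# synthetic sequence: static terrain plus a smoke cloud entering at frame 2
terrain = np.full((4, 6), 100, dtype=np.uint8)
frames = [terrain.copy() for _ in range(4)]
for k in (2, 3):
    frames[k][1:3, 0:k] = 180  # cloud grows in from the left edge
fg = extract_cloud(frames)
print(fg.astype(int))
```

The resulting binary mask is the 2D cloud extent from this aspect; combining masks from several aspects yields the 3D properties described above.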
Inhomogeneous models of the Venus clouds containing sulfur
NASA Technical Reports Server (NTRS)
Smith, S. M.; Pollack, J. B.; Giver, L. P.; Cuzzi, J. N.; Podolak, M.
1979-01-01
Based on the suggestion that elemental sulfur is responsible for the yellow color of Venus, calculations of the 3.4-micron reflectivity phase function for two sulfur-containing inhomogeneous cloud models are compared with that of a homogeneous model. Assuming reflectivity observations with 25% or less total error, comparison of the model calculations leads to a minimum detectable mass of sulfur equal to 7% of the mass of sulfuric acid for the inhomogeneous drop model. For the inhomogeneous cloud model, the comparison leads to a minimum detectable mass of sulfur between 17% and 38% of the mass of the acid drops, depending upon the actual size of the large particles. It is concluded that moderately accurate 3.4-micron reflectivity observations are capable of detecting quite small amounts of elemental sulfur at the top of the Venus clouds.
Automated Content Detection for Cassini Images
NASA Astrophysics Data System (ADS)
Stanboli, A.; Bue, B.; Wagstaff, K.; Altinok, A.
2017-06-01
NASA missions generate numerous images organized in increasingly large archives. Image archives are currently not searchable by image content. We present an automated content detection prototype that can enable content-based search.
Cirrus cloud retrieval from MSG/SEVIRI during day and night using artificial neural networks
NASA Astrophysics Data System (ADS)
Strandgren, Johan; Bugliaro, Luca
2017-04-01
By covering a large part of the Earth, cirrus clouds play an important role in climate as they reflect incoming solar radiation and absorb outgoing thermal radiation. Nevertheless, cirrus clouds remain one of the largest uncertainties in atmospheric research: the physical processes that govern their life cycle are still poorly understood, as is their representation in climate models. To monitor and better understand the properties and physical processes of cirrus clouds, it is essential that these tenuous clouds can be observed from geostationary spaceborne imagers like SEVIRI (Spinning Enhanced Visible and InfraRed Imager), which possess a high temporal resolution together with a large field of view and play an important role alongside in-situ observations in the investigation of cirrus cloud processes. CiPS (Cirrus Properties from SEVIRI) is a new algorithm targeting thin cirrus clouds. CiPS is an artificial neural network trained with coincident SEVIRI and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in order to retrieve a cirrus cloud mask along with the cloud top height (CTH), ice optical thickness (IOT) and ice water path (IWP) from SEVIRI. By utilizing only the thermal/IR channels of SEVIRI, CiPS can be used during day and night, making it a powerful tool for cirrus life cycle analysis. Despite the great challenge of detecting thin cirrus clouds and retrieving their properties from a geostationary imager using only the thermal/IR wavelengths, CiPS performs well. Among the cirrus clouds detected by CALIOP, CiPS detects 70 and 95 % of the clouds with an optical thickness of 0.1 and 1.0, respectively. Among the cirrus-free pixels, CiPS classifies 96 % correctly. For the CTH retrieval, CiPS has a mean absolute percentage error of 10 % or less with respect to CALIOP for cirrus clouds with a CTH greater than 8 km.
For the IOT retrieval, CiPS has a mean absolute percentage error of 100 % or less with respect to CALIOP for cirrus clouds with an optical thickness down to 0.07. For such thin cirrus clouds, an error of 100 % should be regarded as low for a geostationary imager like SEVIRI. The IWP retrieved by CiPS shows a similar performance, but with larger deviations for the thinner cirrus clouds.
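To illustrate the kind of model involved, here is a minimal one-hidden-layer network trained on synthetic IR features. The features, labels, architecture, and hyperparameters are invented stand-ins for exposition only; they bear no relation to the actual CiPS network or its SEVIRI/CALIOP training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training set: two IR features per pixel, e.g. a window-channel
# brightness temperature and a split-window difference (hypothetical
# stand-ins for the SEVIRI IR channels a cirrus network might use)
n = 2000
bt108 = rng.uniform(210, 290, n)
btd = rng.uniform(0, 4, n)
X = np.column_stack([(bt108 - 250) / 40, (btd - 2) / 2])  # standardized
# invented rule: label "cirrus" when the pixel is cold AND spectrally split
y = ((bt108 < 245) & (btd > 2)).astype(float)

# one-hidden-layer network trained by full-batch gradient descent
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output: cirrus probability
    g = (p - y[:, None]) / n               # dLoss/dlogit for cross-entropy
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)           # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

acc = ((p[:, 0] > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

The hidden layer lets the network carve out the cold-and-split corner of feature space, which a single linear threshold cannot represent.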
Comparative Study of Aerosol and Cloud Detected by CALIPSO and OMI
NASA Technical Reports Server (NTRS)
Chen, Zhong; Torres, Omar; McCormick, M. Patrick; Smith, William; Ahn, Changwoo
2012-01-01
The Ozone Monitoring Instrument (OMI) on the Aura satellite detects the presence of desert dust and smoke particles (also known as aerosols) in terms of a parameter known as the UV Aerosol Index (UV AI). The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission measures the vertical distribution of aerosols and clouds. Aerosols and clouds play important roles in the atmosphere and climate system, and accurately detecting their presence, altitude, and properties using satellite radiance measurements is a very important task. This paper presents a comparative analysis of the CALIPSO Version 2 Vertical Feature Mask (VFM) product with the OMI UV AI and reflectivity datasets for the full year of 2007. The comparison is done at regional and global scales. Based on CALIPSO and OMI observations, the vertical and horizontal extent of clouds and aerosols is determined, and the effects of aerosol type selection, aerosol load, and cloud fraction on aerosol identification are discussed. The spatial-temporal correlation between CALIPSO and OMI observations is found to be strongly dependent on aerosol type and cloud contamination. CALIPSO is more sensitive to clouds and often misidentifies desert dust aerosols as cloud, while some small-scale aerosol layers as well as some pollution aerosols go undetected by the OMI UV AI. Large differences in aerosol distribution patterns between CALIPSO and OMI are observed, especially in areas dominated by smoke and pollution aerosols. In addition, a significant correlation is found between the CALIPSO lidar 1064 nm backscatter and the OMI UV AI over the study regions.
A Bispectral Composite Threshold Approach for Automatic Cloud Detection in VIIRS Imagery
NASA Technical Reports Server (NTRS)
LaFontaine Frank J.; Jedlovec, Gary J.
2015-01-01
The detection of clouds in satellite imagery has a number of important applications in weather and climate studies. The presence of clouds can alter the energy budget of the Earth-atmosphere system through scattering and absorption of shortwave radiation and the absorption and re-emission of infrared radiation at longer wavelengths. The scattering and absorption characteristics of clouds vary with the microphysical properties of clouds, hence the cloud type. Thus, detecting the presence of clouds over a region in satellite imagery is important in order to derive atmospheric or surface parameters that give insight into weather and climate processes. For many applications, however, clouds are a contaminant whose presence interferes with retrieving atmosphere or surface information. In these cases, it is important to isolate cloud-free pixels, used to retrieve atmospheric thermodynamic information or surface geophysical parameters, from cloudy ones. This abstract describes an application of a two-channel bispectral composite threshold (BCT) approach applied to VIIRS imagery. The simplified BCT approach uses only the 10.76 and 3.75 micrometer spectral channels from VIIRS in two spectral tests: a straightforward infrared threshold test with the longwave channel, and a shortwave-longwave channel difference test. The key to the success of this approach, as demonstrated in past applications to GOES and MODIS data, is the generation of temporally and spatially dependent thresholds for the tests from imagery of a number of previous days at observation times similar to the current data. The paper and subsequent presentation will present an overview of the approach and intercomparison results with other satellites and methods, and against verification data.
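The two BCT tests can be sketched as follows. The threshold values, the sign convention of the difference test, and the toy brightness temperatures are illustrative assumptions: in the actual scheme the composite thresholds vary in space and time (built from the preceding days' imagery), and the difference test behaves differently by day and night.

```python
import numpy as np

def bct_cloud_mask(bt11, bt39, thresh11, thresh_diff):
    """Two-test bispectral composite threshold sketch.

    A pixel is flagged cloudy if the 10.76 um brightness temperature falls
    below a per-pixel composite clear-sky threshold (cold cloud tops), OR if
    the shortwave-longwave channel difference exceeds a threshold (a common
    signature of low cloud/fog in the 3.75 um channel).
    """
    ir_test = bt11 < thresh11                 # longwave threshold test
    diff_test = (bt39 - bt11) > thresh_diff   # shortwave-longwave difference
    return ir_test | diff_test

bt11 = np.array([[290.0, 255.0], [288.0, 289.0]])
bt39 = np.array([[291.0, 256.0], [295.0, 289.5]])
thresh11 = np.full((2, 2), 280.0)  # stand-in for the temporal composite
mask = bct_cloud_mask(bt11, bt39, thresh11, 4.0)
print(mask.astype(int))
```

Pixel (0,1) fails the cold test, pixel (1,0) fails the difference test, and the remaining pixels pass both, so they are kept as clear.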
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D edges can be detected from these point clouds, and a great number of edge or feature-line extraction methods have been proposed. Among these methods, an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), has been proposed. The AGPN method detects edges based on the analysis of geometric properties of a query point's neighborhood. It detects two kinds of 3D edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
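The neighborhood analysis behind this family of methods can be sketched as follows. This is an illustration inspired by AGPN rather than the method itself: the offset of a point from its neighborhood centroid flags boundary elements, and the smallest-eigenvalue surface variation flags fold edges; the grid and parameters are invented for the example.

```python
import numpy as np

def edge_scores(points, k=8):
    """Score each point by geometric properties of its k-nearest-neighbour
    set. Returns (offsets, variations): a large centroid offset suggests a
    boundary point; a large surface variation (smallest PCA eigenvalue over
    the eigenvalue sum) suggests a fold edge."""
    n = len(points)
    offsets = np.empty(n)
    variations = np.empty(n)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]        # k nearest neighbours
        centroid = nbrs.mean(axis=0)
        offsets[i] = np.linalg.norm(p - centroid)    # large near a boundary
        cov = np.cov((nbrs - centroid).T)
        w = np.sort(np.linalg.eigvalsh(cov))
        variations[i] = w[0] / max(w.sum(), 1e-12)   # large on fold edges
    return offsets, variations

# A flat square grid: interior points sit exactly at their neighbourhood
# centroid, while corner and edge points are offset from it.
g = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
off, var = edge_scores(g)
```

On this planar grid the variation score is near zero everywhere, while the centroid offset cleanly separates the boundary from the interior.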
Cloud photogrammetry with dense stereo for fisheye cameras
NASA Astrophysics Data System (ADS)
Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens
2016-11-01
We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud makes it possible to recover a detailed and more complete cloud morphology than previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view in a single image; however, the computation of dense 3-D information is more complicated, and standard implementations for dense 3-D stereo reconstruction cannot be applied directly. Together with an appropriate camera calibration, which includes the internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of the clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and with the cloud-base height estimated from a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
Optical cloud detection from a disposable airborne sensor
NASA Astrophysics Data System (ADS)
Nicoll, Keri; Harrison, R. Giles; Brus, David
2016-04-01
In-situ measurement of cloud droplet microphysical properties is most commonly made from manned aircraft platforms due to the size and weight of the instrumentation, which is both costly and typically limited to sampling only a few clouds. This work describes the development of a small, lightweight (<200g), disposable, optical cloud sensor which is designed for use on routine radiosonde balloon flights and also small unmanned aerial vehicle (UAV) platforms. The sensor employs the backscatter principle, using an ultra-bright LED as the illumination source, with a photodiode detector. Scattering of the LED light by cloud droplets generates a small optical signal which is separated from background light fluctuations using a lock-in technique. The signal-to-noise ratio obtained permits cloud detection using the scattered LED light, even in daytime. During recent field tests in Pallas, Finland, the retrieved optical sensor signal was compared with the DMT Cloud and Aerosol Spectrometer (CAS), which measures cloud droplets in the size range from 0.5 to 50 microns. Both sensors were installed at the hilltop observatory of Sammaltunturi during a field campaign in October and November 2015, which experienced long periods of immersion inside cloud. Preliminary analysis shows very good agreement between the CAS and the disposable cloud sensor for cloud droplets >5 micron effective diameter. These data and the calibration of the sensor will be discussed here, as will simultaneous balloon launches of the optical cloud sensor through the same cloud layers.
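The lock-in separation described above can be sketched numerically: a weak component modulated at the LED reference frequency is recovered from a slowly drifting daytime background by mixing with in-phase and quadrature references and low-pass filtering. All signal parameters below are hypothetical, not those of the flight sensor.

```python
import numpy as np

def lock_in_amplitude(signal, fs, f_ref):
    """Digital lock-in sketch: recover the amplitude of a component at f_ref
    buried under a slowly varying background. The low-pass step is a plain
    average over an integer number of reference cycles."""
    t = np.arange(len(signal)) / fs
    i = (signal * np.sin(2 * np.pi * f_ref * t)).mean()   # in-phase mix
    q = (signal * np.cos(2 * np.pi * f_ref * t)).mean()   # quadrature mix
    return 2.0 * np.hypot(i, q)

fs, f_led = 10_000.0, 1_000.0
t = np.arange(10_000) / fs                            # 1 s of samples
background = 0.5 + 0.3 * np.sin(2 * np.pi * 0.5 * t)  # slow daylight drift
scatter = 0.02 * np.sin(2 * np.pi * f_led * t)        # weak LED backscatter
amp = lock_in_amplitude(background + scatter, fs, f_led)
```

Although the drift is an order of magnitude larger than the backscatter signal, the recovered amplitude is close to the true 0.02, because the mixing products of the slow background average to nearly zero.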
Cloud Detection Using Measured and Modeled State Parameters
NASA Technical Reports Server (NTRS)
Yi, Y.; Minnis, P.; Huang, J.; Ayers, J. K.; Doelling, D. R.; Khaiyer, M. M.; Nordeen, M. L.
2004-01-01
In this study, hourly RUC analyses were used to examine the differences between RH and temperature values from RUC reanalysis data and from radiosonde atmospheric profiles obtained at the ARM SCF. The results show that the temperature observations from the SONDE and RUC are highly correlated. The RHs are also well correlated, but the SONDE values generally exceed those from RUC. Inside cloud layers, the RH from RUC is 2-14% lower than the RH from SONDE for all RUC layers. Although the layer-mean RH within clouds is much greater than the layer-mean RH outside cloud or in clear sky, RH thresholds chosen as a function of temperature can more accurately diagnose cloud occurrence for either dataset. For overcast clouds, it was found that the 50% probability RH threshold for diagnosing a cloud within a given upper-tropospheric layer is roughly 90% for the Vaisala RS80-15LH radiosonde and 80% for RUC data. For partial cloud (cloud amount less than 90%), the SONDE RH thresholds are close to those of RUC for a given probability in upper-tropospheric layers. The probabilities of detecting clouds at a given RH and temperature should be useful for a variety of applications, such as developing new cloud parameterizations, or estimating the vertical profile of cloudiness underneath a given cloud observed from satellite to construct a 3-D cloud data set for computing atmospheric radiative heating profiles or determining potential aircraft icing conditions.
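A threshold diagnosis of this kind reduces to a one-line test. The sketch below uses the abstract's upper-tropospheric values (90% for the radiosonde, 80% for RUC); the warm-layer ramp is an illustrative assumption, not the study's actual temperature dependence.

```python
def diagnose_cloud(rh, temp_c, source="SONDE"):
    """Diagnose cloud occurrence in a layer from its relative humidity (%)
    using a temperature-dependent threshold. Only the 90%/80% base values
    come from the abstract; the ramp for warm layers is hypothetical."""
    base = 90.0 if source == "SONDE" else 80.0
    # hypothetical: relax the threshold by up to 4% for layers warmer than -40 C
    ramp = max(0.0, min(temp_c + 40.0, 40.0)) * 0.1
    threshold = base - ramp
    return rh >= threshold, threshold
```

For an upper-tropospheric layer at -55 C, an RH of 92% would be diagnosed cloudy against the radiosonde threshold, while 85% would not; the same RH is judged against a lower bar when the RUC analysis is the source.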
NASA Technical Reports Server (NTRS)
Smyth, W. H.
1978-01-01
Results show that Amalthea is likely to form a tightly bound, partial, toroidal-shaped hydrogen cloud about its planet, while Ganymede, Callisto and Titan may have rather large, complete and nearly symmetric toroidal-shaped clouds. The toroidal cloud for Amalthea compares favorably with Pioneer 10 spacecraft data for a satellite escape flux of order 10 to the 11th power atoms/sq cm/sec. Model results for Ganymede, Callisto and Titan suggest that these extended hydrogen atmospheres are likely to be detected by the Voyager spacecraft and that Titan's cloud might also be detected by the Pioneer 11 spacecraft. Ions created by ionization of atoms lost from these four extended hydrogen atmospheres and from the sodium cloud of Io are discussed.
Generating a Magellanic star cluster catalog with ASteCA
NASA Astrophysics Data System (ADS)
Perren, G. I.; Piatti, A. E.; Vázquez, R. A.
2016-08-01
An increasing number of software tools have been employed in recent years for the automated or semi-automated processing of astronomical data. The main advantages of using these tools over a standard by-eye analysis include speed (particularly for large databases), homogeneity, reproducibility, and precision. At the same time, they enable a statistically correct study of the uncertainties associated with the analysis, in contrast with manually set errors or the still widespread practice of simply not assigning errors. We present a catalog comprising 210 star clusters located in the Large and Small Magellanic Clouds, observed with Washington photometry. Their fundamental parameters were estimated through a homogeneous, automated and completely unassisted process via the Automated Stellar Cluster Analysis package (ASteCA). Our results are compared with two types of studies on these clusters: one where the photometry is the same, and another where the photometric system is different from that employed by ASteCA.
Assessment of Cloud Screening with Apparent Surface Reflectance in Support of the ICESat-2 Mission
NASA Technical Reports Server (NTRS)
Yang, Yuekui; Marshak, Alexander; Palm, Stephen P.; Wang, Zhuosen; Schaaf, Crystal
2011-01-01
The separation of cloud and clear scenes is usually one of the first steps in satellite data analysis. Before deriving a geophysical product, almost every satellite mission requires a cloud mask that labels a scene as either clear or cloudy through a cloud detection procedure. For clear scenes, products such as surface properties may be retrieved; for cloudy scenes, scientists can focus on studying the cloud properties. Hence the quality of cloud detection directly affects the quality of most satellite operational and research products. This is certainly true for the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), the successor to ICESat-1. As a top priority mission, ICESat-2 will continue to provide measurements of ice sheet and sea ice elevation on a global scale. Studies have shown that clouds can significantly affect the accuracy of the retrieved results. For example, some of the photons (a photon is a basic unit of light) in the laser beam will be scattered by cloud particles along the way. Instead of traveling in a straight line, these photons are scattered sideways and travel a longer path, which results in biases in ice sheet elevation measurements. Hence cloud screening must be done, and done accurately, before the retrievals.
NASA Astrophysics Data System (ADS)
Ikegawa, Shinichi; Horinouchi, Takeshi
2016-06-01
Accurate wind observation is key to studying atmospheric dynamics. A new automated cloud tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces the false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of the cross-correlation, with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of the vectors supports the previously reported day-to-day large-scale variability at the cloud deck of Venus, and further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the Venusian atmosphere.
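The superposition idea can be sketched on synthetic data: cross-correlation surfaces from every consecutive image pair are summed before the peak is located, so random sidelobes average down while the common advection peak reinforces. This is a simplified illustration; the real processing works on sub-windows, normalizes the correlations, and maps pixel shifts to latitude/longitude winds.

```python
import numpy as np

def superposed_shift(frames):
    """Estimate the common per-frame displacement by summing the FFT
    cross-correlation surfaces of all consecutive frame pairs, then taking
    the peak of the summed surface (shifts are circular on the frame)."""
    acc = np.zeros(frames[0].shape)
    for a, b in zip(frames[:-1], frames[1:]):
        fa, fb = a - a.mean(), b - b.mean()
        acc += np.fft.ifft2(np.fft.fft2(fa).conj() * np.fft.fft2(fb)).real
    dy, dx = np.unravel_index(np.argmax(acc), acc.shape)
    n, m = acc.shape
    return (dy if dy <= n // 2 else dy - n,
            dx if dx <= m // 2 else dx - m)

# synthetic cloud pattern advected by (1, 2) pixels per frame
rng = np.random.default_rng(0)
base = rng.random((32, 32))
frames = [np.roll(np.roll(base, i, axis=0), 2 * i, axis=1) for i in range(4)]
shift = superposed_shift(frames)
```

With four frames the summed surface has three times the peak energy of a single pair, which is the mechanism by which superposition suppresses false matches.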
Chen, Shang-Liang; Chen, Yun-Yao; Hsu, Chiang
2014-01-01
Cloud computing is changing the ways software is developed and managed in enterprises, which is changing the way of doing business in that dynamically scalable and virtualized resources are regarded as services over the Internet. Traditional manufacturing systems such as supply chain management (SCM), customer relationship management (CRM), and enterprise resource planning (ERP) are often developed case by case. However, effective collaboration between different systems, platforms, programming languages, and interfaces has been suggested by researchers. In cloud-computing-based systems, distributed resources are encapsulated into cloud services and centrally managed, which allows high automation, flexibility, fast provision, and ease of integration at low cost. The integration between physical resources and cloud services can be improved by combining Internet of things (IoT) technology and Software-as-a-Service (SaaS) technology. This study proposes a new approach for developing cloud-based manufacturing systems based on a four-layer SaaS model. There are three main contributions of this paper: (1) enterprises can develop their own cloud-based logistic management information systems based on the approach proposed in this paper; (2) a case study based on literature reviews with experimental results is proposed to verify that the system performance is remarkable; (3) challenges encountered and feedback collected from T Company in the case study are discussed in this paper for the purpose of enterprise deployment. PMID:24686728
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents the cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts, coupled with the latest achievements in virtual and augmented reality, will enhance the overall experience, leading to new developments and innovations. We deal with a multiprocessing situation using advanced technologies and distributed applications, with remote-ship scenarios and automation of ship operations.
NASA Technical Reports Server (NTRS)
Morgan, E. L.; Young, R. C.; Smith, M. D.; Eagleson, K. W.
1986-01-01
The objective of this study was to evaluate proposed design characteristics and applications of automated biomonitoring devices for real-time toxicity detection in water quality control on board permanent space stations. Tests of downlinking automated biomonitoring data to Earth-receiving stations were simulated using satellite data transmissions from remote Earth-based stations.
Jones, Gillian; Matthews, Roger; Cunningham, Richard; Jenks, Peter
2011-07-01
The sensitivity of automated culture of Staphylococcus aureus from flocked swabs versus that of manual culture of fiber swabs was prospectively compared using nasal swabs from 867 patients. Automated culture from flocked swabs significantly increased the detection rate, by 13.1% for direct culture and 10.2% for enrichment culture.
Analysis of Co-Located MODIS and CALIPSO Observations Near Clouds
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander
2011-01-01
The purpose of this paper is to help researchers combine data from different satellites and thus gain new insights into two critical yet poorly understood aspects of anthropogenic climate change: aerosol-cloud interactions and aerosol radiative effects. For this, the paper explores whether cloud information from the Aqua satellite's MODIS instrument can help characterize systematic aerosol changes near clouds by refining earlier perceptions of these changes that were based on the CALIPSO satellite's CALIOP instrument. Similar to a radar but using visible and near-infrared light, CALIOP sends out laser pulses and, by measuring their reflection, provides aerosol and cloud information along a single line that tracks the satellite orbit. In contrast, MODIS takes images of reflected sunlight and emitted infrared radiation at several wavelengths, and covers wide areas around the satellite track. This paper analyzes a year-long global dataset covering all ice-free oceans, and finds that MODIS can greatly help the interpretation of CALIOP observations, especially by detecting clouds that lie outside the line observed by CALIPSO. The paper also finds that complications such as differences in view direction, or clouds drifting in the 72 seconds that elapse between MODIS and CALIOP observations, have only a minor impact. The study also finds that MODIS data help refine, but do not qualitatively alter, perceptions of the systematic aerosol changes that were detected in earlier studies using only CALIOP data. It then proposes a statistical approach to account for clouds lying outside the CALIOP track even when MODIS cannot reliably detect low clouds, for example at night or over ice. Finally, the paper finds that, because of variations in cloud amount and type, the typical distance to clouds in maritime clear areas varies with season and location. The overall median distance to clouds in maritime clear areas is around 4-5 km.
The fact that half of all clear areas are closer than 5 km to clouds implies that pronounced near-cloud changes in aerosol properties have significant implications for overall clear-sky characteristics, including the radiative impact of aerosols.
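The median distance-to-cloud statistic can be computed directly from a cloud mask. The sketch below does it by brute force on a small synthetic grid (a distance transform would be used at scale); the grid, pixel size, and cloud layout are invented for the example.

```python
import numpy as np

def median_distance_to_cloud(cloud_mask, pixel_km=1.0):
    """Median distance from each clear pixel to its nearest cloudy pixel.
    Brute-force pairwise distances, suitable only for small grids."""
    cloudy = np.argwhere(cloud_mask).astype(float)
    clear = np.argwhere(~cloud_mask).astype(float)
    # nearest cloudy pixel for every clear pixel
    d = np.sqrt(((clear[:, None, :] - cloudy[None, :, :]) ** 2)
                .sum(-1)).min(axis=1)
    return float(np.median(d)) * pixel_km

mask = np.zeros((8, 8), dtype=bool)
mask[:, 0] = True                       # a cloud bank along the left edge
md = median_distance_to_cloud(mask, pixel_km=5.0)
```

For this one-sided cloud bank the clear-pixel distances are simply the column indices, so the median lands in the middle of the clear region; on real masks the same statistic reproduces the paper's kilometre-scale medians.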
An Automated Detection System for Microaneurysms That Is Effective across Different Racial Groups.
Saleh, George Michael; Wawrzynski, James; Caputo, Silvestro; Peto, Tunde; Al Turk, Lutfiah Ismail; Wang, Su; Hu, Yin; Da Cruz, Lyndon; Smith, Phil; Tang, Hongying Lilian
2016-01-01
Patients without diabetic retinopathy (DR) represent a large proportion of the caseload seen by the DR screening service, so reliable recognition of the absence of DR in digital fundus images (DFIs) is a prime focus of automated DR screening research. We investigate the use of a novel automated DR detection algorithm to assess retinal DFIs for the absence of DR. A retrospective, masked, and controlled image-based study was undertaken. 17,850 DFIs of patients from six different countries were assessed for DR by the automated system and by human graders. The system's performance was compared across DFIs from the different countries/racial groups. The sensitivities for detection of DR by the automated system were Kenya 92.8%, Botswana 90.1%, Norway 93.5%, Mongolia 91.3%, China 91.9%, and UK 90.1%. The specificities were Kenya 82.7%, Botswana 83.2%, Norway 81.3%, Mongolia 82.5%, China 83.0%, and UK 79%. There was little variability in the calculated sensitivities and specificities across the six different countries involved in the study. These data suggest the possible scalability of an automated DR detection platform that enables rapid identification of patients without DR across a wide range of races.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykac, Deniz; Chaum, Edward; Fox, Karen
A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data are often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.
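The reported figures follow directly from confusion-matrix counts. A minimal sketch, with counts that are hypothetical and chosen only to reproduce the quoted 91.2% / 71.6% rates:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts consistent with the reported rates
se, sp = sens_spec(tp=912, fn=88, tn=716, fp=284)
```

Sensitivity measures how many abnormal examinations the engine catches; specificity measures how many normal examinations it correctly clears, which is the figure that determines the human grading workload saved.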
NASA Technical Reports Server (NTRS)
Platnick, S.; Wind, G.
2004-01-01
In order to perform satellite retrievals of cloud properties, it is important to account for the effect of the above-cloud atmosphere on the observations. The solar bands used in the operational MODIS Terra and Aqua cloud optical and microphysical algorithms (visible, NIR, and SWIR spectral windows) are primarily affected by water vapor, and to a lesser extent by well-mixed gases. For water vapor, the above-cloud column amount, or precipitable water, provides adequate information for an atmospheric correction; details of the vertical vapor distribution are not typically necessary for the level of correction required. Cloud-top pressure has a secondary effect due to pressure-broadening influences. For well-mixed gases, cloud-top pressure is also required for estimates of above-cloud abundances. We present a method for obtaining above-cloud precipitable water over dark ocean surfaces using the MODIS 0.94 micron vapor absorption band. The retrieval includes an iterative procedure for establishing cloud-top temperature and pressure, and is useful for both single-layer water and ice clouds. Knowledge of cloud thermodynamic phase is fundamental in retrieving cloud optical and microphysical properties. However, in cases of optically thin cirrus overlapping lower water clouds, the concept of a single unique phase is ill-defined and depends, at least, on the spectral region of interest. We will present a method for multi-layer and multi-phase cloud detection which uses above-cloud precipitable water retrievals along with several existing MODIS operational cloud products (cloud-top pressure derived from a CO2-slicing algorithm, and IR and SWIR phase retrievals). Results are categorized by whether the radiative signature in the MODIS solar bands is primarily that of a water cloud with ice cloud contamination, or vice versa. Examples in polar and mid-latitude regions will be shown.
Automated detection of retinal disease.
Helmchen, Lorens A; Lehmann, Harold P; Abràmoff, Michael D
2014-11-01
Nearly 4 in 10 Americans with diabetes currently fail to undergo recommended annual retinal exams, resulting in tens of thousands of cases of blindness that could have been prevented. Advances in automated retinal disease detection could greatly reduce the burden of labor-intensive dilated retinal examinations by ophthalmologists and optometrists and deliver diagnostic services at lower cost. As the current availability of ophthalmologists and optometrists is inadequate to screen all patients at risk every year, automated screening systems deployed in primary care settings and even in patients' homes could fill the current gap in supply. Expanding screening to all patients at risk by switching to automated detection systems would in turn yield significantly higher rates of detecting and treating diabetic retinopathy per dilated retinal examination. Fewer diabetic patients would develop complications such as blindness, while ophthalmologists could focus on more complex cases.
Automated detection of a prostate Ni-Ti stent in electronic portal images.
Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer
2006-12-01
Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins around the clinical target volume due to uncertainties arising from daily shifts of the prostate position. A recently proposed method of visualizing the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm exploits the fact that the Ni-Ti stent has a cylindrical shape with a fixed diameter: it uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated by a distance equal to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of the 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has the potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.
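The paired-line idea at the heart of the detector can be shown in one dimension: after line enhancement, the stent walls appear as two bright ridges one stent diameter apart, so a pixel responds strongly only when the enhanced value one diameter away is also strong. This is a simplified 1-D stand-in for the paper's 2-D grayscale-morphology operation; the profile and diameter are invented for the example.

```python
import numpy as np

def paired_ridge_response(row, diameter_px):
    """Response at x is the lesser of the line-enhanced values at x and at
    x + diameter, so isolated ridges are suppressed and only ridge pairs
    separated by the stent diameter survive."""
    n = len(row)
    resp = np.zeros(n)
    resp[:n - diameter_px] = np.minimum(row[:n - diameter_px],
                                        row[diameter_px:])
    return resp

enhanced = np.zeros(20)
enhanced[[5, 11]] = 1.0                 # two enhanced lines 6 pixels apart
resp = paired_ridge_response(enhanced, diameter_px=6)
```

Taking the pointwise minimum rather than a sum is what makes the test selective: a single bright line, however strong, produces no response without a partner at the correct separation.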
Properties of CIRRUS Overlapping Clouds as Deduced from the GOES-12 Imagery Data
NASA Technical Reports Server (NTRS)
Chang, Fu-Lung; Minnis, Patrick; Lin, Bing; Sun-Mack, Sunny; Khaiyer, Mandana
2006-01-01
Understanding the impact of cirrus clouds in modifying both the reflected solar and emitted terrestrial radiation is crucial for climate studies. Unlike most boundary-layer stratus and stratocumulus clouds, which have a net cooling effect on the climate, high-level thin cirrus clouds can have a warming effect. Many research efforts have been devoted to retrieving cirrus cloud properties due to their ubiquitous presence. However, using satellite observations to detect and/or retrieve cirrus cloud properties faces two major challenges: first, cirrus clouds are often semitransparent at visible to infrared wavelengths; and second, they often occur over a lower cloud system. The overlapping of high-level cirrus and low-level stratus cloud poses a difficulty in determining the individual cloud-top altitudes and optical properties, especially when the signals from the cirrus clouds are overwhelmed by the signals of the stratus clouds. Moreover, the operational satellite retrieval algorithms, which often assume only a single cloud layer, cannot resolve the cloud-overlap situation properly. The new geostationary satellites, starting with the Twelfth Geostationary Operational Environmental Satellite (GOES-12), provide a new suite of imager bands in which the conventional 12-micron channel is replaced with a 13.3-micron CO2 absorption channel. This replacement allows for the application of a CO2-slicing retrieval technique (Chahine et al. 1974; Smith and Platt 1978), one of the important passive satellite methods for remotely sensing the altitudes of mid- to high-level clouds. The CO2-slicing technique is more effective in detecting semitransparent cirrus clouds than the conventional infrared-window method.
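The core of CO2 slicing is a ratio match: the measured ratio of cloud-induced radiance differences (CO2-absorption channel over window channel) is compared with the same ratio computed by a forward model at a set of trial cloud-top pressures, and the best-matching pressure is taken as the cloud top. The sketch below assumes a toy linear forward model purely for illustration; a real implementation evaluates the ratio from radiative-transfer profiles.

```python
import numpy as np

def co2_slice_ctp(meas_co2, clear_co2, meas_win, clear_win,
                  trial_pressures, model_ratio):
    """CO2-slicing sketch: match the measured ratio of cloud-induced
    radiance differences against a forward-modeled ratio at each trial
    cloud-top pressure, returning the best match."""
    measured = (clear_co2 - meas_co2) / (clear_win - meas_win)
    modeled = np.array([model_ratio(p) for p in trial_pressures])
    return trial_pressures[np.argmin(np.abs(modeled - measured))]

trial_p = np.arange(100, 1001, 50)      # hPa
ctp = co2_slice_ctp(meas_co2=40.0, clear_co2=52.0,
                    meas_win=80.0, clear_win=100.0,
                    trial_pressures=trial_p,
                    model_ratio=lambda p: 1.0 - p / 1200.0)  # toy model
```

Because the cloud emissivity largely cancels in the ratio, the method remains usable for semitransparent cirrus, which is exactly the case where window-channel methods fail.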
NASA Technical Reports Server (NTRS)
Coddington, O. M.; Pilewskie, P.; Redemann, J.; Platnick, S.; Russell, P. B.; Schmidt, K. S.; Gore, W. J.; Livingston, J.; Wind, G.; Vukicevic, T.
2010-01-01
Haywood et al. (2004) show that an aerosol layer above a cloud can cause a bias in the retrieved cloud optical thickness and effective radius. Monitoring for this potential bias is difficult because space-based passive remote sensing cannot unambiguously detect or characterize aerosol above cloud. We show that cloud retrievals from aircraft measurements above cloud and below an overlying aerosol layer are a means to test this bias. The data were collected during the Intercontinental Chemical Transport Experiment (INTEX-A) study, based out of Portsmouth, New Hampshire, United States, above extensive marine stratus cloud banks affected by industrial outflow. Solar Spectral Flux Radiometer (SSFR) irradiance measurements taken along a lower-level flight leg above cloud and below aerosol were unaffected by the overlying aerosol. Along upper-level flight legs, the irradiance reflected from cloud top was transmitted through an aerosol layer. We compare SSFR cloud retrievals from below-aerosol legs to satellite retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS) in order to detect an aerosol-induced bias. In regions of small variation in cloud properties, we find that SSFR- and MODIS-retrieved cloud optical thickness compare within the uncertainty range for each instrument, while SSFR effective radii tend to be smaller than MODIS values (by 1-2 microns) and at the low end of MODIS uncertainty estimates. In regions of large variation in cloud properties, differences in SSFR- and MODIS-retrieved cloud optical thickness and effective radius can reach values of 10 and 10 microns, respectively. We include aerosols in forward modeling to test the sensitivity of SSFR cloud retrievals to overlying aerosol layers. We find that an overlying absorbing aerosol layer biases SSFR cloud retrievals toward smaller effective radii and optical thickness, while nonabsorbing aerosols had no impact.
Szyrkowiec, Thomas; Autenrieth, Achim; Gunning, Paul; Wright, Paul; Lord, Andrew; Elbers, Jörg-Peter; Lumb, Alan
2014-02-10
For the first time, we demonstrate the orchestration of elastic datacenter and inter-datacenter transport network resources using a combination of OpenStack and OpenFlow. Programmatic control allows a datacenter operator to dynamically request optical lightpaths from a transport network operator to accommodate rapid changes of inter-datacenter workflows.
Development of AN All-Purpose Free Photogrammetric Tool
NASA Astrophysics Data System (ADS)
González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.
2016-06-01
Photogrammetry is currently facing some challenges and changes, mainly related to automation, ubiquitous processing and the variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK and UNIBO has developed an open photogrammetric tool called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS allows users to obtain dense and metric 3D point clouds from terrestrial and UAV images. It encloses robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, allowing users to get dense 3D point clouds through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of images, scenarios and cameras; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations about the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation and system evaluation. The paper presents in detail the developments of GRAPHOS with all its photogrammetric components and the evaluation analyses based on various image datasets. GRAPHOS is distributed free for research and educational needs.
Automated Classification of Heritage Buildings for As-Built BIM Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Bassier, M.; Vergauwen, M.; Van Genechten, B.
2017-08-01
Semantically rich three-dimensional models such as Building Information Models (BIMs) are increasingly used in digital heritage. They provide the information required by varying stakeholders during the different stages of a historic building's life cycle, which is crucial in the conservation process. The creation of as-built BIM models is based on point cloud data. However, manually interpreting this data is labour intensive and often leads to misinterpretations. By automatically classifying the point cloud, the information can be processed more efficiently. A key aspect in this automated scan-to-BIM process is the classification of building objects. In this research we look to automatically recognise elements in existing buildings in order to create compact semantic information models. Our algorithm efficiently extracts the main structural components such as floors, ceilings, roofs, walls and beams despite the presence of significant clutter and occlusions. More specifically, Support Vector Machines (SVMs) are proposed for the classification. The algorithm is evaluated using real data from a variety of existing buildings. The results prove that the classifier recognizes the objects with both high precision and high recall. As a result, entire data sets are reliably labelled at once. The approach enables experts to better document and process heritage assets.
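The SVM classification step described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the per-segment features (mean height, verticality, planarity), the class labels, and the use of scikit-learn are all assumptions for the sake of the example.

```python
# Hypothetical sketch of SVM-based classification of point-cloud segments
# into structural classes. Features, labels and library choice are
# illustrative assumptions, not the authors' actual implementation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy per-segment features: [mean height (m), verticality, planarity]
X_train = np.array([
    [0.1, 0.05, 0.95],   # floor: low, horizontal, planar
    [2.9, 0.04, 0.93],   # ceiling: high, horizontal, planar
    [1.5, 0.97, 0.90],   # wall: mid-height, vertical, planar
    [0.2, 0.10, 0.92],
    [2.8, 0.08, 0.95],
    [1.4, 0.95, 0.88],
])
y_train = ["floor", "ceiling", "wall", "floor", "ceiling", "wall"]

# Standardise features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Classify an unseen segment (vertical, planar, mid-height)
print(clf.predict([[1.6, 0.96, 0.91]]))
```

In a real scan-to-BIM workflow, the feature vectors would be derived from the segmented point cloud (e.g. normals and planarity from local neighbourhoods), and the trained classifier would label every segment in the data set at once.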
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature, a number of observations were made in relation to current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated, allowing near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic, accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in the AEC and heritage fields.
NASA Technical Reports Server (NTRS)
2002-01-01
These views of Hurricane Isidore were acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on September 20, 2002. After bringing large-scale flooding to western Cuba, Isidore was upgraded (on September 21) from a tropical storm to a category 3 hurricane. Sweeping westward to Mexico's Yucatan Peninsula, the hurricane caused major destruction and left hundreds of thousands of people homeless. Although weakened after passing over the Yucatan landmass, Isidore regained strength as it moved northward over the Gulf of Mexico.
At left is a colorful visualization of cloud extent that superimposes MISR's radiometric camera-by-camera cloud mask (RCCM) over natural-color radiance imagery, both derived from data acquired with the instrument's vertical-viewing (nadir) camera. Using brightness and statistical metrics, the RCCM is one of several techniques MISR uses to determine whether an area is clear or cloudy. In this rendition, the RCCM has been color-coded: purple = cloudy with high confidence, blue = cloudy with low confidence, green = clear with low confidence, and red = clear with high confidence. In addition to providing information on meteorological events, MISR's data products are designed to help improve our understanding of the influences of clouds on climate. Cloud heights and albedos are among the variables that govern these influences. (Albedo is the amount of sunlight reflected back to space divided by the amount of incident sunlight.) The center panel is the cloud-top height field retrieved using automated stereoscopic processing of data from multiple MISR cameras. Areas where heights could not be retrieved are shown in dark gray. In some areas, such as the southern portion of the image, the stereo retrieval was able to detect thin, high clouds that were not picked up by the RCCM's nadir view. Retrieved local albedo values for Isidore are shown at right. Generation of the albedo product is dependent upon observed cloud radiances as a function of viewing angle as well as the height field. Note that over the short distances (2.2 kilometers) at which the local albedo product is generated, values can be greater than 1.0 due to contributions from cloud sides. Areas where albedo could not be retrieved are shown in dark gray. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude.
These data products were generated from a portion of the imagery acquired during Terra orbit 14669. The panels cover an area of about 380 kilometers x 704 kilometers, and utilize data from blocks 70 to 79 within World Reference System-2 path 17. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
NASA Astrophysics Data System (ADS)
Kealy, John C.; Marenco, Franco; Marsham, John H.; Garcia-Carreras, Luis; Francis, Pete N.; Cooke, Michael C.; Hocking, James
2017-05-01
Novel methods of cloud detection are applied to airborne remote sensing observations from the unique Fennec aircraft dataset, to evaluate the Met Office-derived products on cloud properties over the Sahara based on the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) satellite. Two cloud mask configurations are considered, as well as the retrievals of cloud-top height (CTH), and these products are compared to airborne cloud remote sensing products acquired during the Fennec campaign in June 2011 and June 2012. Most detected clouds (67 % of the total) have a horizontal extent that is smaller than a SEVIRI pixel (3 km × 3 km). We show that, when partially cloud-contaminated pixels are included, a match between the SEVIRI and aircraft datasets is found in 80 ± 8 % of the pixels. Moreover, under clear skies the datasets are shown to agree for more than 90 % of the pixels. The mean cloud field, derived from the satellite cloud mask acquired during the Fennec flights, shows that areas of high surface albedo and orography are preferred sites for Saharan cloud cover, consistent with published theories. Cloud-top height retrievals however show large discrepancies over the region, which are ascribed to limiting factors such as the cloud horizontal extent, the derived effective cloud amount, and the absorption by mineral dust. The results of the CTH analysis presented here may also have further-reaching implications for the techniques employed by other satellite applications facilities across the world.
Automated detection of bacteria in urine
NASA Technical Reports Server (NTRS)
Fleig, A. J.; Picciolo, G. L.; Chappelle, E. W.; Kelbaugh, B. N.
1972-01-01
A method for detecting the presence of bacteria in urine was developed which utilizes the bioluminescent reaction of adenosine triphosphate with luciferin and luciferase derived from the tails of fireflies. The method was derived from work on extraterrestrial life detection. A device was developed which completely automates the assay process.
2014-01-01
Background: Adverse drug reactions and adverse drug events (ADEs) are major public health issues. Many different prospective tools for the automated detection of ADEs in hospital databases have been developed and evaluated. The objective of the present study was to evaluate an automated method for the retrospective detection of ADEs with hyperkalaemia during inpatient stays. Methods: We used a set of complex detection rules to take account of the patient's clinical and biological context and the chronological relationship between the causes and the expected outcome. The dataset consisted of 3,444 inpatient stays in a French general hospital. An automated review was performed for all data and the results were compared with those of an expert chart review. The complex detection rules' analytical quality was evaluated for ADEs. Results: In terms of recall, 89.5% of ADEs with hyperkalaemia "with or without an abnormal symptom" were automatically identified (including all three serious ADEs). In terms of precision, 63.7% of the automatically identified ADEs with hyperkalaemia were true ADEs. Conclusions: The use of context-sensitive rules appears to improve the automated detection of ADEs with hyperkalaemia. This type of tool may have an important role in pharmacoepidemiology via the routine analysis of large inter-hospital databases. PMID:25212108