Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Beyond basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is increasingly requested and requires specific software. Many institutions cannot afford the various types of software or provide the expertise needed to perform such sophisticated analysis. Areas covered: Various cloud services exist for data storage and analysis specifically in cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis can be performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option for coping with the storage of large image datasets and for offering sophisticated cardiovascular image analysis to institutions of all sizes.
OpenID Connect as a security service in cloud-based medical imaging systems.
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-04-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing in healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer (REST)-based federated identity solution. It is one of the most widely adopted open standards and a candidate to become the de facto standard for securing cloud computing and mobile applications; it has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among a diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main goal is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying the DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
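The record above describes OpenID Connect as a REST-based authentication and authorization layer in front of cloud imaging services. As a minimal illustration of that flow (not the authors' implementation), the sketch below exchanges an authorization code for tokens at a provider's token endpoint and then calls a hypothetical image-sharing API with the resulting bearer token; the endpoint URLs, client credentials and API path are placeholders.

```python
import requests

# Hypothetical OpenID Connect provider and imaging API (placeholders).
TOKEN_ENDPOINT = "https://idp.example.org/oauth2/token"
IMAGE_API = "https://di-r.example.org/studies/1.2.840.113619.2.55/images"

def exchange_code_for_tokens(auth_code, client_id, client_secret, redirect_uri):
    """Standard OAuth2/OIDC authorization-code exchange."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": redirect_uri,
        },
        auth=(client_id, client_secret),  # client authentication (basic)
        timeout=10,
    )
    resp.raise_for_status()
    tokens = resp.json()
    # id_token identifies the user; access_token authorizes API calls.
    return tokens["id_token"], tokens["access_token"]

def fetch_study_images(access_token):
    """Call the protected imaging endpoint with the bearer token."""
    resp = requests.get(
        IMAGE_API,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.content
```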
AIRS Subpixel Cloud Characterization Using MODIS Cloud Products.
NASA Astrophysics Data System (ADS)
Li, Jun; Menzel, W. Paul; Sun, Fengying; Schmit, Timothy J.; Gurka, James
2004-08-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS's) Aqua satellite enable improved global monitoring of the distribution of clouds. MODIS is able to provide, at high spatial resolution (1–5 km), a cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), and cloud optical thickness (COT). AIRS is able to provide CTP, ECA, CPS, and COT at coarser spatial resolution (13.5 km at nadir) but with much better accuracy using its high-spectral-resolution measurements. The combined MODIS–AIRS system offers the opportunity for improved cloud products over those possible from either system alone. The key steps for synergistic use of imager and sounder radiance measurements are 1) collocation in space and time and 2) imager cloud amount, type, and phase determination within the sounder pixel. The MODIS and AIRS measurements from the EOS Aqua satellite provide the opportunity to study the synergistic use of advanced imager and sounder measurements. As the first step, the MODIS classification procedure is applied to identify various surface and cloud types within an AIRS footprint. Cloud-layer information (lower, midlevel, or high clouds) and phase information (water, ice, or mixed-phase clouds) within the AIRS footprint are sorted and characterized using MODIS 1-km-spatial-resolution data. The combined MODIS and AIRS data for various scenes are analyzed to study the utility of the synergistic use of high-spatial-resolution imager products and high-spectral-resolution sounder radiance measurements. There is relevance to the optimal use of data from the Advanced Baseline Imager (ABI) and Hyperspectral Environmental Suite (HES) systems, which are to fly on the Geostationary Operational Environmental Satellite (GOES)-R.
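The key steps named in this abstract (collocating imager pixels within each sounder footprint, then sorting cloud type and phase) can be sketched roughly as below; the footprint radius, the phase coding and the simple distance test are illustrative assumptions, not the operational collocation scheme.

```python
import numpy as np

def characterize_footprint(airs_lat, airs_lon, modis_lat, modis_lon,
                           modis_phase, footprint_radius_km=6.75):
    """Collect 1-km imager pixels falling inside one sounder footprint
    and summarize their cloud phase."""
    # Crude equirectangular distance; adequate over a ~13.5 km footprint.
    km_per_deg = 111.0
    dlat = (modis_lat - airs_lat) * km_per_deg
    dlon = (modis_lon - airs_lon) * km_per_deg * np.cos(np.radians(airs_lat))
    inside = np.hypot(dlat, dlon) <= footprint_radius_km

    phases = modis_phase[inside]   # assumed coding: 0=clear, 1=water, 2=ice, 3=mixed
    counts = {int(p): int(np.sum(phases == p)) for p in np.unique(phases)}
    cloud_fraction = float(np.mean(phases > 0)) if phases.size else np.nan
    return cloud_fraction, counts
```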
Secure public cloud platform for medical images sharing.
Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas
2015-01-01
Cloud computing promises medical imaging services offering large storage and computing capabilities for limited costs. In this data outsourcing framework, one of the greatest issues to deal with is data security. To address it, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy that controls various security mechanisms. This policy is based on a risk assessment we conducted to identify security objectives, with a special interest in digital content protection. These objectives are addressed by means of different security mechanisms such as an access and usage control policy, partial encryption and watermarking.
Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing
Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio
2017-01-01
In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305
Automated cloud and shadow detection and filling using two-date Landsat imagery in the United States
Jin, Suming; Homer, Collin G.; Yang, Limin; Xian, George; Fry, Joyce; Danielson, Patrick; Townsend, Philip A.
2013-01-01
A simple, efficient, and practical approach for detecting cloud and shadow areas in satellite imagery and restoring them with clean pixel values has been developed. Cloud and shadow areas are detected using spectral information from the blue, shortwave infrared, and thermal infrared bands of Landsat Thematic Mapper or Enhanced Thematic Mapper Plus imagery from two dates (a target image and a reference image). These detected cloud and shadow areas are further refined using an integration process and a false-shadow removal process based on the geometric relationship between cloud and shadow. Cloud and shadow filling is based on the concept of the Spectral Similarity Group (SSG), which uses the reference image to find similar alternative pixels in the target image to serve as replacement values for the restored areas. Pixels are considered to belong to one SSG if their values in Landsat bands 3, 4, and 5 of the reference image fall within the same spectral ranges. This new approach was applied to five Landsat path/rows across different landscapes and seasons with various types of cloud patterns. Results show that almost all of the clouds were captured with minimal commission errors, and shadows were detected reasonably well. Among the five test scenes, the lowest producer's accuracy of cloud detection was 93.9% and the lowest user's accuracy was 89%. The overall cloud and shadow detection accuracy ranged from 83.6% to 99.3%. The pixel-filling approach resulted in a new cloud-free image that appears seamless and spatially continuous despite differences in phenology between the target and reference images. Our methods offer a straightforward and robust approach for preparing images for the new 2011 National Land Cover Database production.
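The Spectral Similarity Group filling idea described above (replace a cloud or shadow pixel with a clear target-image pixel whose reference-image spectra fall in the same ranges) can be sketched as follows; the band indices, bin width and first-match replacement rule are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def fill_masked_pixels(target, reference, mask, bands=(2, 3, 4), bin_width=0.02):
    """Replace masked (cloud/shadow) pixels in `target` using clear pixels
    that belong to the same Spectral Similarity Group in `reference`.

    target, reference: arrays of shape (rows, cols, nbands), reflectance.
    mask: boolean array, True where cloud/shadow was detected.
    """
    filled = target.copy()
    # Group pixels by quantized reference-image spectra in the chosen bands.
    groups = np.floor(reference[..., list(bands)] / bin_width).astype(np.int32)
    group_keys = groups.reshape(-1, len(bands))
    flat_mask = mask.ravel()
    flat_target = filled.reshape(-1, target.shape[-1])

    # Index one clear pixel per group key.
    clear_lookup = {}
    for idx in np.flatnonzero(~flat_mask):
        clear_lookup.setdefault(tuple(group_keys[idx]), idx)

    for idx in np.flatnonzero(flat_mask):
        match = clear_lookup.get(tuple(group_keys[idx]))
        if match is not None:
            flat_target[idx] = flat_target[match]  # borrow a clear pixel's values
    return filled
```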
OpenID Connect as a security service in cloud-based medical imaging systems
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-01-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing in healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer (REST)-based federated identity solution. It is one of the most widely adopted open standards and a candidate to become the de facto standard for securing cloud computing and mobile applications; it has been called the “Kerberos of the cloud.” We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among a diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main goal is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying the DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model. PMID: 27340682
A Local Index of Cloud Immersion in Tropical Forests Using Time-Lapse Photography
NASA Astrophysics Data System (ADS)
Bassiouni, M.; Scholl, M. A.
2015-12-01
Data on the frequency, duration and elevation of cloud immersion are essential to improve estimates of cloud water deposition in the water budgets of cloud forests. Here, we present a methodology to detect local cloud immersion in remote tropical forests using time-lapse photography. A simple approach is developed to detect cloudy conditions in photographs taken within the canopy, where image depth during clear conditions may be less than 10 meters and where moving leaves and branches and changes in lighting are unpredictable. A primary innovation of this study is that cloudiness is determined from images without using a reference clear image and without manual threshold determination or human judgment for calibration. Since March 2014, five sites ranging from 600 to 1000 meters elevation along a ridge in the Luquillo Critical Zone Observatory, Puerto Rico, have each been equipped with a trail camera programmed to take an image every 30 minutes. Images were classified using four selected cloud-sensitive image characteristics (SCICs) computed for small image regions: contrast, the coefficient of variation and the entropy of the luminance of each image pixel, and image colorfulness. K-means clustering provided reasonable results for discriminating cloudy from clear conditions. Preliminary results indicate that 79-94% (daytime) and 85-93% (nighttime) of validation images were classified accurately at one open and two closed canopy sites. The Euclidean distances between the SCIC vectors of images taken during cloudy conditions and the SCIC vector of the centroid of the clear-image cluster show potential to quantify cloud density in addition to immersion. The classification method will be applied to determine spatial and temporal patterns of cloud immersion in the study area. The presented approach offers promising applications to increase observations of low-lying clouds at remote mountain sites where standard instruments to measure visibility and cloud base may not be practical.
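As a rough sketch of the classification idea described above, the code below computes four cloud-sensitive image characteristics for an image (contrast, coefficient of variation and entropy of luminance, and a simple colorfulness measure) and clusters a stack of such feature vectors with k-means. The features are computed per image rather than per small region for brevity, and the exact definitions and two-cluster assumption are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def scic_features(rgb):
    """Compute four simple cloud-sensitive image characteristics (SCICs)
    for an RGB image with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b            # luminance
    contrast = lum.max() - lum.min()
    cv = lum.std() / (lum.mean() + 1e-9)                # coefficient of variation
    hist, _ = np.histogram(lum, bins=64, range=(0, 1))
    p = hist / (hist.sum() + 1e-9)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # luminance entropy
    rg, yb = r - g, 0.5 * (r + g) - b                   # simple colorfulness measure
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return np.array([contrast, cv, entropy, colorfulness])

def classify_cloudy(images):
    """Cluster per-image SCIC vectors into two groups (e.g. cloudy vs clear)."""
    features = np.vstack([scic_features(img) for img in images])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    return features, labels
```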
A cloud-based system for automatic glaucoma screening.
Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu
2015-08-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with basic functions only, limiting their usage at large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling timely intervention and more efficient disease management.
OpenID connect as a security service in Cloud-based diagnostic imaging systems
NASA Astrophysics Data System (ADS)
Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter
2015-03-01
The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. REST is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, which combines OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards and a candidate to become the de facto standard for securing cloud computing and mobile applications; it has been regarded as the "Kerberos of the cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. Through the use of OpenID Connect as an open-source identity and authentication service, deploying the DI-r and PACS to private or community clouds should provide a security level equivalent to the traditional computing model.
Automated cloud classification using a ground based infra-red camera and texture analysis techniques
NASA Astrophysics Data System (ADS)
Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.
2013-10-01
Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
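To illustrate the texture-feature plus weighted k-NN pipeline outlined above, the sketch below computes a few grey-level co-occurrence (GLCM) texture features per thermal sky image and classifies them with a distance-weighted k-nearest-neighbour classifier; the specific features and k value are placeholders, not the 45 features used in the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def texture_features(thermal_img):
    """A handful of GLCM texture features from a thermal sky image."""
    img = np.uint8(255 * (thermal_img - thermal_img.min())
                   / (np.ptp(thermal_img) + 1e-9))       # rescale to 0..255
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_cloud_classifier(train_images, labels, k=5):
    """Distance-weighted k-NN over per-image texture feature vectors."""
    X = np.vstack([texture_features(img) for img in train_images])
    return KNeighborsClassifier(n_neighbors=k, weights="distance").fit(X, labels)
```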
Developing national on-line services to annotate and analyse underwater imagery in a research cloud
NASA Astrophysics Data System (ADS)
Proctor, R.; Langlois, T.; Friedman, A.; Davey, B.
2017-12-01
Fish image annotation data are currently collected by various research, management and academic institutions globally (more than 100,000 hours of deployments), with varying degrees of standardisation and limited formal collaboration or data synthesis. We present a case study of how national on-line services, developed within a domain-oriented research cloud, have been used to annotate habitat images and synthesise fish annotation data sets collected using Autonomous Underwater Vehicles (AUVs) and baited remote underwater stereo-video (stereo-BRUV). Two software tools under development have been brought together in the marine science cloud to provide marine biologists with a powerful service for image annotation. SQUIDLE+ is an online platform designed for exploration, management and annotation of georeferenced image and video data. It provides a flexible annotation framework allowing users to work with their preferred annotation schemes. We have used SQUIDLE+ to sample the habitat composition and complexity of images of the benthos collected using stereo-BRUV. GlobalArchive is designed to be a centralised repository of aquatic ecological survey data, with design principles including ease of use, secure user access, flexible data import, and the collection of any sampling and image analysis information. To easily share and synthesise data we have implemented data sharing protocols, including Open Data and synthesis Collaborations, and a spatial map to explore global datasets and filter them to create a synthesis. These tools in the science cloud, together with a virtual desktop analysis suite offering Python and R environments, provide an unprecedented capability to deliver marine biodiversity information of value to marine managers and scientists alike.
Schleeweis, Karen; Goward, Samuel N.; Huang, Chengquan; Dwyer, John L.; Dungan, Jennifer L.; Lindsey, Mary A.; Michaelis, Andrew; Rishmawi, Khaldoun; Masek, Jeffery G.
2016-01-01
Using the NASA Earth Exchange platform, the North American Forest Dynamics (NAFD) project mapped forest history wall-to-wall, annually, for the contiguous US (1986–2010) using the Vegetation Change Tracker algorithm. As with any effort to identify real changes in remotely sensed time-series, data gaps, shifts in seasonality, misregistration, inconsistent radiometry and cloud contamination can be sources of error. We discuss the NAFD image selection and processing stream (NISPS), which was designed to minimize these sources of error. The NISPS image quality assessments highlighted issues with the Landsat archive and metadata, including inadequate georegistration, unreliability of the pre-2009 Landsat 5 cloud cover assessment algorithm, missing growing-season imagery and a paucity of clear views. Assessment maps of Landsat 5–7 image quantities and qualities are presented that offer novel perspectives on the growing-season archive considered for this study. More than 150,000 Landsat images were considered for the NAFD project. Optimally, one high-quality cloud-free image in each year, or a total of 12,152 images, would be used. However, to accommodate data gaps and cloud/shadow contamination, 23,338 images were needed. In 220 specific path-row image years no acceptable images were found, resulting in data gaps in the annual national map products.
Silva, Luís A Bastião; Costa, Carlos; Oliveira, José Luis
2013-05-01
Healthcare institutions worldwide have adopted picture archiving and communication systems (PACS) for enterprise access to images, relying on Digital Imaging and Communications in Medicine (DICOM) standards for data exchange. However, communication across a wider domain of independent medical institutions is not well standardized. A DICOM-compliant bridge was developed for extending and sharing DICOM services across healthcare institutions without requiring complex network setups or dedicated communication channels. A set of DICOM routers interconnected through a public cloud infrastructure was implemented to support medical image exchange among institutions. Despite the advantages of cloud computing, new challenges were encountered regarding data privacy, particularly when medical data are transmitted across different domains. To address this issue, a solution was introduced that creates a ciphered data channel between the entities sharing DICOM services. Two main DICOM services were implemented in the bridge: Storage and Query/Retrieve. The performance measurements demonstrated that it is quite simple to exchange information and processes among several institutions. The solution can be integrated with any currently installed PACS-DICOM infrastructure and works transparently with well-known cloud service providers. Cloud computing was introduced to augment enterprise PACS by providing standard medical imaging services across different institutions, offering communication privacy and enabling the creation of wider PACS scenarios with suitable technical solutions.
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and their associated high-performance computing needs, increases and challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will discuss best security practices that exist within cloud services, such as AWS.
Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images
NASA Technical Reports Server (NTRS)
Knight, David; Powell, Mark
2013-01-01
Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).
NASA Technical Reports Server (NTRS)
Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne
2012-01-01
Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-μm infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and a 16% (81%) false-alarm ratio. Case study examples show that WV-IRW BTD > 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
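As a simplified illustration of the two detection approaches compared above, the sketch below flags candidate overshooting-top pixels using (1) a water-vapor minus IRW brightness-temperature difference test and (2) an IRW-texture-style test based on how much colder a pixel is than its surrounding anvil; the thresholds and window size are illustrative assumptions, not the published algorithm settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_overshooting_tops(bt_irw, bt_wv, btd_thresh=0.0,
                             anvil_drop_k=6.0, window=15):
    """Return boolean masks of candidate OT pixels from two simple tests.

    bt_irw: 11-um window-channel brightness temperatures (K), 2-D array.
    bt_wv:  water-vapor channel brightness temperatures (K), same shape.
    """
    # Test 1: WV-IRW brightness temperature difference exceeding a threshold.
    wv_irw_btd = (bt_wv - bt_irw) > btd_thresh

    # Test 2: IRW texture -- pixel markedly colder than the local anvil mean.
    local_mean = uniform_filter(bt_irw, size=window)
    irw_texture = (local_mean - bt_irw) > anvil_drop_k

    return wv_irw_btd, irw_texture
```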
Junocam: Juno's Outreach Camera
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.
2017-11-01
Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
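The record above describes exploring tens of thousands of KLIP parameter sets in parallel. Below is a minimal sketch of that pattern: a toy KLIP-style PSF subtraction (project each science frame onto the first few Karhunen-Loève modes of a reference stack and subtract), evaluated over a small grid of mode counts with a process pool. The reduction and the quality metric are simplified stand-ins, not klipReduce/Findr.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def klip_subtract(science, references, n_modes):
    """Subtract a KL (PCA) model of the stellar PSF from each science frame."""
    refs = references.reshape(len(references), -1)
    refs = refs - refs.mean(axis=0)
    # KL modes = principal components of the reference library.
    _, _, vt = np.linalg.svd(refs, full_matrices=False)
    modes = vt[:n_modes]                                   # (n_modes, npix)
    sci = science.reshape(len(science), -1)
    coeffs = sci @ modes.T                                 # projection coefficients
    residual = sci - coeffs @ modes
    return residual.reshape(science.shape)

def evaluate(args):
    science, references, n_modes = args
    resid = klip_subtract(science, references, n_modes)
    return n_modes, float(np.std(resid))                   # toy quality metric

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    science = rng.normal(size=(50, 64, 64))                # placeholder frames
    references = rng.normal(size=(200, 64, 64))
    grid = [(science, references, k) for k in (5, 10, 20, 50)]
    with ProcessPoolExecutor() as pool:
        for n_modes, score in pool.map(evaluate, grid):
            print(f"KL modes = {n_modes:3d}  residual std = {score:.3f}")
```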
Colors of Alien Worlds from Direct Imaging Exoplanet Missions
NASA Astrophysics Data System (ADS)
Hu, Renyu
2016-01-01
Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet and thus indicates its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution that utilizes new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and provides very good lossless and controlled lossy compression.
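A key point above is that cloud object storage serves imagery over HTTP rather than through file I/O, so tiled formats are read with byte-range requests against a known tile index. The sketch below shows that access pattern generically: it fetches one tile's byte range from an object URL and decompresses it. The URL, index layout and compression codec are illustrative assumptions, not the MRF/LERC specification.

```python
import zlib
import requests

def read_tile(object_url, tile_index, offsets, sizes):
    """Fetch a single compressed tile from cloud object storage by byte range.

    offsets/sizes: per-tile byte offsets and lengths taken from the format's index.
    """
    start = offsets[tile_index]
    end = start + sizes[tile_index] - 1
    resp = requests.get(object_url,
                        headers={"Range": f"bytes={start}-{end}"},
                        timeout=30)
    resp.raise_for_status()               # expect HTTP 206 Partial Content
    return zlib.decompress(resp.content)  # codec is format-dependent (placeholder)

# Example (hypothetical object and index):
# tile = read_tile("https://bucket.example.com/scene.mrf_data", 42, offsets, sizes)
```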
Liebeskind, David S
2016-01-01
Crowdsourcing, an unorthodox approach in medicine, creates an unusual paradigm to study precision cerebrovascular health, eliminating the relative isolation and non-standardized nature of current imaging data infrastructure, while shifting emphasis to the astounding capacity of big data in the cloud. This perspective envisions the use of imaging data of the brain and vessels to orient and seed A Million Brains Initiative™ that may leapfrog incremental advances in stroke and rapidly provide useful data to the sizable population around the globe prone to the devastating effects of stroke and vascular substrates of dementia. Despite such variability in the type of data available and other limitations, the data hierarchy logically starts with imaging and can be enriched with almost endless types and amounts of other clinical and biological data. Crowdsourcing allows an individual to contribute to aggregated data on a population, while preserving their right to specific information about their own brain health. The cloud now offers endless storage, computing prowess, and neuroimaging applications for postprocessing that is searchable and scalable. Collective expertise is a windfall of the crowd in the cloud and particularly valuable in an area such as cerebrovascular health. The rise of precision medicine, rapidly evolving technological capabilities of cloud computing and the global imperative to limit the public health impact of cerebrovascular disease converge in the imaging of A Million Brains Initiative™. Crowdsourcing secure data on brain health may provide ultimate generalizability, enable focused analyses, facilitate clinical practice, and accelerate research efforts.
Improvements in Cloud Remote Sensing from Fusing VIIRS and CrIS data
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Walther, A.; Lindsey, D. T.; Li, Y.; NOH, Y. J.; Botambekov, D.; Miller, S. D.; Foster, M. J.
2016-12-01
In the fall of 2016, NOAA began the operational production of cloud products from the S-NPP Visible and Infrared Imaging Radiometer Suite (VIIRS) using the NOAA Enterprise Algorithms. VIIRS, while providing unprecedented spatial resolution and imaging clarity, does lack certain IR channels that are beneficial to cloud remote sensing. At the UW Space Science and Engineering Center (SSEC), tools were written to generate the missing IR channels from the Cross Track Infrared Sounder (CrIS) and to map them into the VIIRS swath. The NOAA Enterprise Algorithms are also implemented into the NESDIS CLAVR-x system. CLAVR-x has been modified to use the fused VIIRS and CrIS data. This presentation will highlight the benefits offered by the CrIS data to the NOAA Enterprise Algorithms. In addition, these benefits also have enabled the generation of 3D cloud retrievals to support the request from the National Weather Service (NWS) for a Cloud Cover Layers product. Lastly, the benefits of using VIIRS and CrIS for achieving consistency with GOES-R will also be demonstrated.
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
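To make the parallelization strategy above concrete, here is a minimal map/reduce-style sketch of a back projection: each "map" task back-projects a chunk of projection angles into a partial image, and the "reduce" step sums the partial images. The geometry is a simplified parallel-beam stand-in, each sinogram row is assumed to have the same length as the image side, and the chunking is illustrative; it is not the authors' MapReduce implementation.

```python
import numpy as np
from scipy.ndimage import rotate
from concurrent.futures import ProcessPoolExecutor

def backproject_chunk(args):
    """'Map' task: back-project a chunk of projections (one row per angle)."""
    sinogram_chunk, angles_chunk, size = args
    partial = np.zeros((size, size))
    for proj, theta in zip(sinogram_chunk, angles_chunk):
        # Smear the 1-D projection across the image, then rotate into place.
        smear = np.tile(proj, (size, 1))
        partial += rotate(smear, theta, reshape=False, order=1)
    return partial

def parallel_backprojection(sinogram, angles, size, n_chunks=4):
    """'Reduce' step: sum the partial images produced by the map tasks."""
    chunks = [(sinogram[i::n_chunks], angles[i::n_chunks], size)
              for i in range(n_chunks)]
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        partials = list(pool.map(backproject_chunk, chunks))
    return sum(partials) / len(angles)
```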
Colors of Alien Worlds from Direct Imaging Exoplanet Missions
NASA Astrophysics Data System (ADS)
Hu, Renyu
2015-08-01
Future direct-imaging exoplanet missions such as WFIRST/AFTA, Exo-C, and Exo-S will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These “cold” exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet and thus indicates its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.
Towards a true aerosol-and-cloud retrieval scheme
NASA Astrophysics Data System (ADS)
Thomas, Gareth; Poulsen, Caroline; Povey, Adam; McGarragh, Greg; Jerg, Matthias; Siddans, Richard; Grainger, Don
2014-05-01
The Optimal Retrieval of Aerosol and Cloud (ORAC) - formerly the Oxford-RAL Aerosol and Cloud retrieval - offers a framework that can provide consistent and well characterised properties of both aerosols and clouds from a range of imaging satellite instruments. Several practical issues stand in the way of achieving the potential of this combined scheme, however; in particular the sometimes conflicting priorities and requirements of aerosol and cloud retrieval problems, and the question of the unambiguous identification of aerosol and cloud pixels. This presentation will describe recent developments made to the ORAC scheme for both aerosol and cloud, and detail how these are being integrated into a single retrieval framework. The implementation of a probabilistic method for pixel identification will also be presented, for both cloud detection and aerosol/cloud type selection. The method is based on Bayesian techniques applied to the optimal estimation retrieval output of ORAC and is particularly aimed at providing additional information in the so-called "twilight zone", where pixels cannot be unambiguously identified as either aerosol or cloud and traditional cloud or aerosol products do not provide results.
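One way to picture the Bayesian pixel-identification step described above is to convert each candidate type's optimal-estimation retrieval cost into a posterior probability and either pick a type or flag the pixel as ambiguous. The sketch below does that for a single pixel; treating the retrieval cost J as a chi-square-like quantity with likelihood proportional to exp(-J/2), and the choice of priors and ambiguity threshold, are simplifying assumptions rather than the ORAC formulation.

```python
import numpy as np

def classify_pixel(costs, priors=None, ambiguity=0.6):
    """Turn per-type retrieval costs J into posterior type probabilities.

    costs: dict like {"cloud": J_cloud, "aerosol": J_aerosol, "clear": J_clear}.
    Assumes likelihood is proportional to exp(-J/2).
    """
    types = list(costs)
    j = np.array([costs[t] for t in types], dtype=float)
    prior = np.array([1.0 / len(types) if priors is None else priors[t]
                      for t in types])
    like = np.exp(-0.5 * (j - j.min()))          # subtract the minimum for stability
    post = like * prior
    post /= post.sum()
    best = types[int(np.argmax(post))]
    label = best if post.max() >= ambiguity else "ambiguous (twilight zone)"
    return label, dict(zip(types, post))

# Example: classify_pixel({"cloud": 4.2, "aerosol": 9.8, "clear": 30.1})
```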
NASA Astrophysics Data System (ADS)
Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.
2015-12-01
According to changes in aircraft certifications rules, instrumentation has to be developed to alert the flight crews of potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: It is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be speeded up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The algorithm proposed is based on the detection of each interferogram with the use of the gradient pair vector method. This method is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed phase clouds, and finally tested and validated in laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Ravine, M. A.; Caplinger, M. A.; Orton, G. S.; Ingersoll, A. P.; Jensen, E.; Lipkaman, L.; Krysak, D.; Zimdar, R.; Bolton, S. J.
2016-12-01
JunoCam is a visible imager on the Juno spacecraft in orbit around Jupiter. It is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of 50 km/pixel. At closest approach the images will have a spatial scale of 3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft have orbited or flown by in the equatorial plane. Most of the images of Jupiter will be acquired in the +/-2 hours surrounding closest approach. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired if detectable. Stereo images and images taken with the methane filter will allow us to estimate cloud-top heights. Images of the cloud-tops will aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. During the two months that Jupiter is too close to the sun for ground-based observers to collect data, JunoCam will take images routinely to monitor large-scale features. Occasional, opportunistic images of the Galilean moons will be acquired.
NASA Astrophysics Data System (ADS)
Li, J.; Menzel, W.; Sun, F.; Schmit, T.
2003-12-01
The Moderate-Resolution Imaging Spectroradiometer (MODIS) and Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS) Aqua satellite will enable global monitoring of the distribution of clouds. MODIS is able to provide at high spatial resolution (1–5 km) the cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), and cloud water path (CWP). AIRS is able to provide CTP, ECA, CPS, and CWP within the AIRS footprint with much better accuracy using its greatly enhanced hyperspectral remote sensing capability. The combined MODIS/AIRS system offers the opportunity for cloud products improved over those possible from either system alone. The algorithm developed was applied to process the AIRS longwave cloudy radiance measurements; results are compared with MODIS cloud products, as well as with the Geostationary Operational Environmental Satellite (GOES) sounder cloud products, to demonstrate the advantage of synergistic use of high spatial resolution MODIS cloud products and high spectral resolution AIRS sounder radiance measurements for optimal cloud retrieval. Data from ground-based instrumentation at the Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Test Bed (CART) in Oklahoma were used for the validation; results show that AIRS improves the MODIS cloud products in certain cases such as low-level clouds.
3DNOW: Image-Based 3d Reconstruction and Modeling via Web
NASA Astrophysics Data System (ADS)
Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.
2018-05-01
This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.
Passive Fully Polarimetric W-Band Millimeter-Wave Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Kelly, James F.; Sheen, David M.
2012-04-01
We present the theory, design, and experimental results obtained from a scanning passive W-band fully polarimetric imager. Passive millimeter-wave imaging offers persistent day/nighttime imaging and the ability to penetrate dust, clouds and other obscurants, including clothing and dry soil. The single-pixel scanning imager includes both far-field and near-field fore-optics for investigation of polarization phenomena. Using both fore-optics, a variety of scenes including natural and man-made objects was imaged and these results are presented showing the utility of polarimetric imaging for anomaly detection. Analysis includes conventional Stokes-parameter based approaches as well as multivariate image analysis methods.
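Since the analysis above relies on Stokes parameters, here is a small sketch of how a fully polarimetric pixel is commonly reduced to the Stokes vector and derived quantities from intensities measured through different analyzer settings; the six-measurement scheme is a standard textbook construction used here as an assumption, not the instrument's actual calibration chain.

```python
import numpy as np

def stokes_from_measurements(i_0, i_90, i_45, i_135, i_rcp, i_lcp):
    """Stokes vector from six polarized intensity images of the same scene.

    i_0, i_90, i_45, i_135: linear analyzer at 0/90/45/135 degrees.
    i_rcp, i_lcp: right/left circular analyzer measurements.
    """
    s0 = i_0 + i_90                   # total intensity
    s1 = i_0 - i_90                   # horizontal vs vertical linear
    s2 = i_45 - i_135                 # +45 vs -45 linear
    s3 = i_rcp - i_lcp                # circular component
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)   # degree of linear polarization
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))        # angle of polarization
    return np.stack([s0, s1, s2, s3]), dolp, aop
```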
Synergistic Use of MODIS and AIRS in a Variational Retrieval of Cloud Parameters.
NASA Astrophysics Data System (ADS)
Li, Jun; Menzel, W. Paul; Zhang, Wenjian; Sun, Fengying; Schmit, Timothy J.; Gurka, James J.; Weisz, Elisabeth
2004-11-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS's) Aqua satellite enable global monitoring of the distribution of clouds. MODIS is able to provide a cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size, and cloud optical thickness at high spatial resolution (1–5 km). The combined MODIS–AIRS system offers the opportunity for improved cloud products, better than from either system alone; this improvement is demonstrated in this paper with both simulated and real radiances. A one-dimensional variational (1DVAR) methodology is used to retrieve the CTP and ECA from AIRS longwave (650–790 cm⁻¹ or 15.38–12.65 μm) cloudy radiance measurements (hereinafter referred to as MODIS–AIRS 1DVAR). The MODIS–AIRS 1DVAR cloud properties show significant improvement over the MODIS-alone cloud properties and slight improvement over AIRS-alone cloud properties in a simulation study, while MODIS–AIRS 1DVAR is much more computationally efficient than the AIRS-alone 1DVAR; comparisons with radiosonde observations show that CTPs improve by 10–40 hPa for MODIS–AIRS CTPs over those from MODIS alone. The 1DVAR approach is applied to process the AIRS longwave cloudy radiance measurements; results are compared with MODIS and Geostationary Operational Environmental Satellite sounder cloud products. Data from ground-based instrumentation at the Atmospheric Radiation Measurement Program Cloud and Radiation Test Bed in Oklahoma are used for validation; results show that MODIS–AIRS improves the MODIS CTP, especially in low-level clouds. The operational use of a high-spatial-resolution imager, along with information from a high-spectral-resolution sounder, will be possible with instruments planned for the next-generation geostationary operational instruments.
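The 1DVAR retrieval referred to above is, at its core, an optimal-estimation minimization of a cost function balancing the fit to the cloudy radiances against a prior. A generic Gauss-Newton sketch of that update for a small state vector (e.g. CTP and ECA) is shown below; the forward model, Jacobian and covariances are placeholders supplied by the caller, and the convergence test is an illustrative choice, not the paper's scheme.

```python
import numpy as np

def oe_retrieval(y_obs, x_a, s_a, s_y, forward, jacobian,
                 max_iter=10, tol=1e-3):
    """Gauss-Newton optimal estimation (1DVAR) for a small state vector.

    y_obs:    observed cloudy radiances (1-D array).
    x_a, s_a: prior state (e.g. [CTP, ECA]) and its covariance matrix.
    s_y:      observation-error covariance matrix.
    forward:  callable x -> simulated radiances.
    jacobian: callable x -> d(radiance)/d(state) matrix.
    """
    x = x_a.copy()
    sa_inv, sy_inv = np.linalg.inv(s_a), np.linalg.inv(s_y)
    for _ in range(max_iter):
        k = jacobian(x)
        resid = y_obs - forward(x)
        # Normal equations for J = (y-F)^T Sy^-1 (y-F) + (x-xa)^T Sa^-1 (x-xa)
        lhs = sa_inv + k.T @ sy_inv @ k
        rhs = k.T @ sy_inv @ resid + sa_inv @ (x_a - x)
        dx = np.linalg.solve(lhs, rhs)
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x
```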
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.
2017-12-01
Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.
An Automatic Cloud Mask Algorithm Based on Time Series of MODIS Measurements
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie; Frey, R.
2008-01-01
Quality of aerosol retrievals and atmospheric correction depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned. They do not explicitly address the classical problem of cloud search, wherein a baseline clear-skies scene is defined for comparison. Here, we report on a new CM algorithm which explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area have a common textural pattern, defined by the surface topography, boundaries of rivers and lakes, distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of the refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly, and over Earth regions covered by snow and ice.
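The core test described above, that clear scenes correlate strongly with a maintained reference clear-sky image while cloudy scenes do not, can be sketched per image block as below; the block size and correlation threshold are illustrative assumptions, not MAIAC's tuned parameters.

```python
import numpy as np

def covariance_cloud_mask(image, refcm, block=32, min_corr=0.6):
    """Flag blocks whose texture no longer matches the clear-sky reference.

    image, refcm: 2-D reflectance arrays on the same grid.
    Returns a boolean mask (True = probably cloudy) at block resolution.
    """
    rows, cols = image.shape
    nby, nbx = rows // block, cols // block
    cloudy = np.zeros((nby, nbx), dtype=bool)
    for j in range(nby):
        for i in range(nbx):
            win = np.s_[j * block:(j + 1) * block, i * block:(i + 1) * block]
            a, b = image[win].ravel(), refcm[win].ravel()
            if a.std() < 1e-6 or b.std() < 1e-6:
                continue                     # featureless block: leave as clear
            corr = np.corrcoef(a, b)[0, 1]   # covariance-based similarity
            cloudy[j, i] = corr < min_corr
    return cloudy
```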
Physical modeling of 3D and 4D laser imaging
NASA Astrophysics Data System (ADS)
Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard
2010-04-01
Laser imaging offers potential for observation, for 3D terrain mapping and classification as well as for target identification, including behind vegetation, camouflage or glass windows, by day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Their threshold detection is largely affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the distances measured, and by partial occultation, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and the point cloud can better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.
A Cost-Benefit Study of Doing Astrophysics On The Cloud: Production of Image Mosaics
NASA Astrophysics Data System (ADS)
Berriman, G. B.; Good, J. C.; Deelman, E.; Singh, G.; Livny, M.
2009-09-01
Utility grids such as the Amazon EC2 and Amazon S3 clouds offer computational and storage resources that can be used on demand for a fee by compute- and data-intensive applications. The cost of running an application on such a cloud depends on the compute, storage and communication resources it will provision and consume. Different execution plans of the same application may result in significantly different costs. We studied via simulation the cost-performance trade-offs of different execution and resource provisioning plans by creating, under the Amazon cloud fee structure, mosaics with the Montage image mosaic engine, a widely used data- and compute-intensive application. Specifically, we studied the cost of building mosaics of 2MASS data that have sizes of 1, 2 and 4 square degrees, and a 2MASS all-sky mosaic. These are examples of mosaics commonly generated by astronomers. We also study these trade-offs in the context of the storage and communication fees of Amazon S3 when used for long-term application data archiving. Our results show that by provisioning the right amount of storage and compute resources, cost can be significantly reduced with no significant impact on application performance.
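To make the cost/performance trade-off above concrete, here is a toy cost model in the spirit of the study: given a provisioning plan (instance count, runtime, data stored and transferred), it totals compute, storage and egress charges. The rates and plan numbers are placeholders, not Amazon's actual fee structure or the paper's measured values.

```python
def cloud_run_cost(n_instances, hours, storage_gb_months, egress_gb,
                   hourly_rate=0.10, storage_rate=0.15, egress_rate=0.17):
    """Toy pay-as-you-go cost model for one mosaic run (all rates are placeholders)."""
    compute = n_instances * hours * hourly_rate
    storage = storage_gb_months * storage_rate
    transfer = egress_gb * egress_rate
    return {"compute": compute, "storage": storage,
            "transfer": transfer, "total": compute + storage + transfer}

# Compare two hypothetical execution plans for the same mosaic:
fast_plan = cloud_run_cost(n_instances=16, hours=0.5, storage_gb_months=2, egress_gb=5)
cheap_plan = cloud_run_cost(n_instances=1, hours=6.0, storage_gb_months=2, egress_gb=5)
print(fast_plan["total"], cheap_plan["total"])
```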
What's Old is New in the Large Magellanic Cloud
NASA Technical Reports Server (NTRS)
2006-01-01
This vibrant image from NASA's Spitzer Space Telescope shows the Large Magellanic Cloud, a satellite galaxy to our own Milky Way galaxy. The infrared image, a mosaic of 300,000 individual tiles, offers astronomers a unique chance to study the lifecycle of stars and dust in a single galaxy. Nearly one million objects are revealed for the first time in this Spitzer view, which represents about a 1,000-fold improvement in sensitivity over previous space-based missions. Most of the new objects are dusty stars of various ages populating the Large Magellanic Cloud; the rest are thought to be background galaxies. The blue color in the picture, seen most prominently in the central bar, represents starlight from older stars. The chaotic, bright regions outside this bar are filled with hot, massive stars buried in thick blankets of dust. The red color around these bright regions is from dust heated by stars, while the red dots scattered throughout the picture are either dusty, old stars or more distant galaxies. The greenish clouds contain cooler interstellar gas and molecular-sized dust grains illuminated by ambient starlight. Astronomers say this image allows them to quantify the process by which space dust -- the same stuff that makes up planets and even people -- is recycled in a galaxy. The picture shows dust at its three main cosmic hangouts: around the young stars, where it is being consumed (red-tinted, bright clouds); scattered about in the space between stars (greenish clouds); and in expelled shells of material from old stars (randomly-spaced red dots). The Large Magellanic Cloud, located 160,000 light-years from Earth, is one of a handful of dwarf galaxies that orbit our own Milky Way. It is approximately one-third as wide as the Milky Way, and, if it could be seen in its entirety, would cover the same amount of sky as a grid of about 480 full moons. About one-third of the entire galaxy can be seen in the Spitzer image. This picture is a composite of infrared light captured by Spitzer. Light with wavelengths of 3.6 (blue) and 8 (green) microns was captured by the telescope's infrared array camera; 24-micron light (red) was detected by the multiband imaging photometer.
Multi-provider architecture for cloud outsourcing of medical imaging repositories.
Godinho, Tiago Marques; Bastião Silva, Luís A; Costa, Carlos; Oliveira, José Luís
2014-01-01
Over the last few years, the extended usage of medical imaging procedures has drawn the medical community's attention to the optimization of their workflows. More recently, the federation of multiple institutions into a seamless distribution network has brought hope of higher-quality healthcare services along with more efficient resource management. As a result, medical institutions are constantly looking for the best infrastructure to deploy their imaging archives. In this scenario, public cloud infrastructures arise as major candidates, as they offer elastic storage space and high data availability in a pay-as-you-go model, without large maintenance costs or IT staffing requirements. However, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. This document proposes a multi-provider architecture for the integration of outsourced archives with in-house PACS resources, taking advantage of external providers to store medical imaging studies without disregarding security. It enables the retrieval of images from multiple archives simultaneously, improving performance and data availability and avoiding the vendor lock-in problem. Moreover, it enables load balancing and caching techniques.
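A minimal sketch of the simultaneous multi-archive retrieval idea follows. The Archive class, its fetch() method, and the in-memory cache are hypothetical placeholders used only to illustrate the pattern; the paper's actual interfaces are not reproduced here.

from concurrent.futures import ThreadPoolExecutor, as_completed

class Archive:
    """Stand-in for an outsourced imaging archive (would be a DICOM/web service in practice)."""
    def __init__(self, name, store):
        self.name, self._store = name, store
    def fetch(self, study_uid):
        return self._store.get(study_uid)

def retrieve_study(study_uid, archives, cache):
    if study_uid in cache:                      # cache hit avoids outsourced traffic
        return cache[study_uid]
    with ThreadPoolExecutor(max_workers=len(archives)) as pool:
        futures = {pool.submit(a.fetch, study_uid): a for a in archives}
        for fut in as_completed(futures):       # first provider holding the study wins
            data = fut.result()
            if data is not None:
                cache[study_uid] = data
                return data
    return None

cache = {}
archives = [Archive("provider-A", {"1.2.3": b"...pixels..."}), Archive("provider-B", {})]
print(retrieve_study("1.2.3", archives, cache))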
NASA Astrophysics Data System (ADS)
Pugnaghi, Sergio; Guerrieri, Lorenzo; Corradini, Stefano; Merucci, Luca
2016-07-01
Volcanic plume removal (VPR) is a procedure developed to retrieve the ash optical depth, effective radius and mass, and the sulfur dioxide mass contained in a volcanic cloud from the thermal radiance at 8.7, 11, and 12 µm. It is based on the estimation of a virtual image representing what the sensor would have seen in a multispectral thermal image if the volcanic cloud were not present. Ash and sulfur dioxide were retrieved by the first version of the VPR using a very simple atmospheric model that ignored the layer above the volcanic cloud. This new version takes into account the layer of atmosphere above the cloud as well as thermal radiance scattering along the line of sight of the sensor. In addition to improved results, the new version also offers easier and faster preliminary preparation and includes other types of volcanic particles (andesite, obsidian, pumice, ice crystals, and water droplets). As in the previous version, a set of parameters regarding the volcanic area, particle types, and sensor is required to run the procedure. However, in the new version, only the mean plume temperature is required as input data. In this work, a set of parameters to compute the volcanic cloud transmittance in the three bands listed above, for all the aforementioned particles, for both Mt. Etna (Italy) and Eyjafjallajökull (Iceland) volcanoes, and for the Terra and Aqua MODIS instruments is presented. Three types of tests are carried out to verify the results of the improved VPR. The first uses all the radiative transfer simulations performed to estimate the above-mentioned parameters. The second makes use of two synthetic images, one for Mt. Etna and one for Eyjafjallajökull. The third compares VPR and Look-Up Table (LUT) retrievals by analyzing the real image of the Eyjafjallajökull volcanic cloud acquired by MODIS aboard the Aqua satellite on 11 May 2010 at 14:05 GMT.
NASA Technical Reports Server (NTRS)
Ippolito, L. J.; Kaul, R.
1981-01-01
Rainfall is regarded as one of the more important observations, yet measurements of this highly variable parameter are needed continuously, across large areas, and over the sea. Ships could not provide the needed resolution, nor could available radars provide the needed breadth of coverage. Microwave observations from the Nimbus-5 satellite offered some hope. Another possibility was suggested by the results of many comparisons between rainfall and the clouds seen in satellite pictures. Sequences of pictures from the first geostationary satellites were employed, and a general correspondence between rain and the convective clouds visible in satellite pictures was found. It was demonstrated that the agreement was best for growing clouds. The development of methods to infer GATE rainfall from geostationary satellite images is examined.
NASA Astrophysics Data System (ADS)
Benze, Susanne; Gumbel, Jörg; Randall, Cora E.; Karlsson, Bodil; Hultgren, Kristoffer; Lumpe, Jerry D.; Baumgarten, Gerd
2018-01-01
Combining limb and nadir satellite observations of Polar Mesospheric Clouds (PMCs) has long been recognized as problematic due to differences in observation geometry, scattering conditions, and retrieval approaches. This study offers a method of comparing PMC brightness observations from the nadir-viewing Aeronomy of Ice in the Mesosphere (AIM) Cloud Imaging and Particle Size (CIPS) instrument and the limb-viewing Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). OSIRIS and CIPS measurements are made comparable by defining a common volume for overlapping OSIRIS and CIPS observations for two northern hemisphere (NH) PMC seasons: NH08 and NH09. We define a scattering intensity quantity that is suitable for either nadir or limb observations and for different scattering conditions. A known CIPS bias is applied, differences in instrument sensitivity are analyzed and taken into account, and effects of cloud inhomogeneity and common volume definition on the comparison are discussed. Not accounting for instrument sensitivity differences or inhomogeneities in the PMC field, the mean relative difference in cloud brightness (CIPS - OSIRIS) is -102 ± 55%. The differences are largest for coincidences with very inhomogeneous clouds that are dominated by pixels that CIPS reports as non-cloud points. Removing these coincidences, the mean relative difference in cloud brightness reduces to -6 ± 14%. The correlation coefficient between the CIPS and OSIRIS measurements of PMC brightness variations in space and time is remarkably high, at 0.94. Overall, the comparison shows excellent agreement despite different retrieval approaches and observation geometries.
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibility of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy-related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one-pixel-deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets.
It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
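The subdivide-and-layer-wrap idea from the second article can be illustrated with a heavily simplified sketch: each tile is processed together with a one-pixel halo of neighbouring distance values, and tiles are revisited until the distances stop changing. For brevity the sketch uses a chamfer (3-4) approximation of Euclidean distance and a serial loop standing in for distributed nodes; it is not the dissertation's algorithm.

import numpy as np

INF = 1e9

def chamfer_pass(d):
    """Two-pass chamfer update (3-4 weights scaled so orthogonal steps cost 1)."""
    h, w = d.shape
    for i in range(1, h - 1):                  # forward sweep
        for j in range(1, w - 1):
            d[i, j] = min(d[i, j], d[i, j-1] + 1, d[i-1, j] + 1,
                          d[i-1, j-1] + 4/3, d[i-1, j+1] + 4/3)
    for i in range(h - 2, 0, -1):              # backward sweep
        for j in range(w - 2, 0, -1):
            d[i, j] = min(d[i, j], d[i, j+1] + 1, d[i+1, j] + 1,
                          d[i+1, j+1] + 4/3, d[i+1, j-1] + 4/3)
    return d

def tiled_distance(sources, tile=32):
    """Approximate distance raster computed tile by tile with one-pixel halos.
    Raster dimensions are assumed divisible by the tile size for brevity."""
    dist = np.where(sources, 0.0, INF)
    padded = np.pad(dist, 1, constant_values=INF)
    changed = True
    while changed:                             # halo exchange between "nodes"
        changed = False
        for i0 in range(1, padded.shape[0] - 1, tile):
            for j0 in range(1, padded.shape[1] - 1, tile):
                sub = chamfer_pass(padded[i0-1:i0+tile+1, j0-1:j0+tile+1].copy())
                interior = sub[1:-1, 1:-1]
                if np.any(interior < padded[i0:i0+tile, j0:j0+tile]):
                    padded[i0:i0+tile, j0:j0+tile] = interior
                    changed = True
    return padded[1:-1, 1:-1]

src = np.zeros((64, 64), dtype=bool)
src[10, 12] = src[50, 40] = True               # two source cells
d = tiled_distance(src)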
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
The existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify image regions as cloud or non-cloud. When clouds are thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program uses the linear combination model to separate the cloud information from the surface information in images containing thin, transparent clouds, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a cloud classifier. AdaBoost selects the most effective features from a larger set of candidate features, so the calculation time is greatly reduced. Finally, we selected a tree-structure-based cloud detection method and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
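The feature-combination step can be sketched with scikit-learn's AdaBoost implementation. The features and labels below are synthetic placeholders; in the paper the linear-combination step (removing the underlying surface information) would be applied before such features are extracted.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Placeholder per-patch features: brightness, local variance, gradient energy, spectral index.
X = rng.normal(size=(n, 4))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = cloud

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)   # boosted stumps pick useful features
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)

The feature_importances_ vector shows which candidate features the boosting procedure actually relies on, which is the property the paper exploits to keep calculation time low.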
New insights about cloud vertical structure from CloudSat and CALIPSO observations
NASA Astrophysics Data System (ADS)
Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin
2017-09-01
Active cloud observations from A-Train's CloudSat and CALIPSO satellites offer new opportunities to examine the vertical structure of hydrometeor layers. We use the 2B-CLDCLASS-LIDAR merged CloudSat-CALIPSO product to examine global aspects of hydrometeor vertical stratification. We group the data into major cloud vertical structure (CVS) classes based on our interpretation of how clouds in three standard atmospheric layers overlap and provide their global frequency of occurrence. The two most frequent CVS classes are single-layer (per our definition) low and high clouds that represent 53% of cloudy skies, followed by high clouds overlying low clouds, and vertically extensive clouds that occupy near-contiguously a large portion of the troposphere. The prevalence of these configurations changes seasonally and geographically, between daytime and nighttime, and between continents and oceans. The radiative effects of the CVS classes reveal the major radiative warmers and coolers from the perspective of the planet as a whole, the surface, and the atmosphere. Single-layer low clouds dominate planetary and atmospheric cooling and thermal infrared surface warming. We also investigate the consistency between passive and active views of clouds by providing the CVS breakdowns of Moderate Resolution Imaging Spectroradiometer cloud regimes for spatiotemporally coincident MODIS-Aqua (also on the A-Train) and CloudSat-CALIPSO daytime observations. When the analysis is expanded for a more in-depth look at the most heterogeneous of the MODIS cloud regimes, it ultimately confirms previous interpretations of their makeup that did not have the benefit of collocated active observations.
An efficient framework for modeling clouds from Landsat8 images
NASA Astrophysics Data System (ADS)
Yuan, Chunqiang; Guo, Jing
2015-03-01
Clouds play an important role in creating realistic outdoor scenes for video game and flight simulation applications. Classic methods have been proposed for cumulus cloud modeling. However, these methods are not flexible for modeling large cloud scenes with hundreds of clouds, because the user must repeatedly model each cloud and adjust its various properties. This paper presents a meteorologically based method to reconstruct cumulus clouds from high-resolution Landsat8 satellite images. From these input satellite images, the clouds are first segmented from the background. Then, the cloud top surface is estimated from the temperature of the infrared image. After that, under a mild assumption of a flat base for cumulus clouds, the base height of each cloud is computed by averaging the top heights of pixels on the cloud edge. Then, the extinction is generated from the visible image. Finally, we enrich the initial shapes of the clouds using a fractal method and represent the recovered clouds as a particle system. The experimental results demonstrate that our method can yield realistic cloud scenes resembling those in the satellite images.
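The height-assignment step can be sketched as follows: cloud-top height is derived from the thermal-band brightness temperature through an assumed lapse rate, and a flat base is set to the mean top height of the cloud-edge pixels. The lapse rate, surface temperature, and the simple 4-neighbour edge test are placeholder assumptions, not the paper's calibration.

import numpy as np

def cloud_heights(bt_kelvin, cloud_mask, t_surface=288.0, lapse_rate=6.5e-3):
    """Return per-pixel top heights and a single flat base height (metres)."""
    m = cloud_mask.astype(bool)
    top = np.where(m, (t_surface - bt_kelvin) / lapse_rate, np.nan)
    # Edge pixels: cloudy pixels with at least one non-cloudy 4-neighbour.
    padded = np.pad(m, 1, constant_values=False)
    all_neighbours_cloudy = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                             padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = m & ~all_neighbours_cloudy
    base = float(np.nanmean(top[edge])) if edge.any() else float("nan")
    return top, base

bt = np.full((5, 5), 285.0)
bt[1:4, 1:4] = 265.0                      # colder pixels stand in for a cloud
mask = bt < 275.0
top, base = cloud_heights(bt, mask)
print(base)                               # flat base height from edge-pixel tops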
CloudSat Image of Tropical Thunderstorms Over Africa
NASA Technical Reports Server (NTRS)
2006-01-01
Figure 1: CloudSat image of a horizontal cross-section of tropical clouds and thunderstorms over east Africa. The red colors are indicative of highly reflective particles such as water (rain) or ice crystals, while the blue indicates thinner clouds (such as cirrus). The flat green/blue lines across the bottom represent the ground signal. The vertical scale on the CloudSat Cloud Profiling Radar image is approximately 30 kilometers (19 miles). The brown line below the image indicates the relative elevation of the land surface. The inset image shows the CloudSat track relative to a Moderate Resolution Imaging Spectroradiometer (MODIS) visible image taken at nearly the same time.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-04-20
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Ling-Ling; Hao, Hong-Xia
2014-01-01
The goal of pan-sharpening is to obtain an image with higher spatial resolution and better spectral information. However, the resolution of the pan-sharpened image is seriously affected by thin clouds. For a single image, filtering algorithms are widely used to remove clouds. These methods can remove clouds effectively, but they also cause serious loss of detail in the cloud-removed image. To solve this problem, a pan-sharpening algorithm that removes thin clouds via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For the low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance the spectral information and spatial resolution in the pan-sharpened image. Since the cloud removal process causes a loss of detail, a weight matrix is designed to enhance the details of the cloud regions in the pan-sharpening process, while non-cloud regions remain unchanged; the details of the image are obtained by NSST. Experimental results, both visual and in terms of evaluation metrics, demonstrate that the proposed method preserves better spectral information and spatial resolution, especially for images with thin clouds.
USDA-ARS's Scientific Manuscript database
Thick cloud contaminations in Landsat images limit their regular usage for land applications. A few methods have been developed to remove thick clouds using additional cloud-free images. Unfortunately, the cloud-free composition image produced by existing methods commonly lacks from the desired spat...
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
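A simplified stand-in for the fusion step is shown below: one kernel is precomputed per modality (CNN image features and 3D point-cloud features) and the two are combined with a fixed weight before a support vector machine. True multiple-kernel learning would learn the weight from the data, and all feature values here are synthetic placeholders.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
n = 300
cnn_feat = rng.normal(size=(n, 128))      # e.g. penultimate-layer CNN activations
pc_feat = rng.normal(size=(n, 10))        # e.g. planarity, roughness, height statistics
y = rng.integers(0, 2, size=n)            # placeholder labels: 1 = damaged, 0 = intact

def combined_kernel(A_img, B_img, A_pc, B_pc, w=0.5):
    # Fixed-weight kernel combination standing in for learned MKL weights.
    return (w * rbf_kernel(A_img, B_img, gamma=1e-2) +
            (1 - w) * rbf_kernel(A_pc, B_pc, gamma=1e-1))

tr, te = np.arange(0, 200), np.arange(200, n)
K_tr = combined_kernel(cnn_feat[tr], cnn_feat[tr], pc_feat[tr], pc_feat[tr])
K_te = combined_kernel(cnn_feat[te], cnn_feat[tr], pc_feat[te], pc_feat[tr])
clf = SVC(kernel="precomputed").fit(K_tr, y[tr])
print("accuracy on held-out samples:", clf.score(K_te, y[te]))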
NASA Astrophysics Data System (ADS)
Diner, David
2010-05-01
The Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global Earth data from NASA's Terra satellite since February 2000. With its 9 along-track view angles, 4 spectral bands, intrinsic spatial resolution of 275 m, and stable radiometric and geometric calibration, no instrument that combines MISR's attributes has previously flown in space, nor is there a similar capability currently available on any other satellite platform. Multiangle imaging offers several tools for remote sensing of aerosol and cloud properties, including bidirectional reflectance and scattering measurements, stereoscopic pattern matching, time lapse sequencing, and potentially, optical tomography. Current data products from MISR employ several of these techniques. Observations of the intensity of scattered light as a function of view angle and wavelength provide accurate measures of aerosol optical depths (AOD) over land, including bright desert and urban source regions. Partitioning of AOD according to retrieved particle classification and incorporation of height information improves the relationship between AOD and surface PM2.5 (fine particulate matter, a regulated air pollutant), constituting an important step toward a satellite-based particulate pollution monitoring system. Stereoscopic cloud-top heights provide a unique metric for detecting interannual variability of clouds and exceptionally high quality and sensitivity for detection and height retrieval for low-level clouds. Using the several-minute time interval between camera views, MISR has enabled a pole-to-pole, height-resolved atmospheric wind measurement system. Stereo imagery also makes possible global measurement of the injection heights and advection speeds of smoke plumes, volcanic plumes, and dust clouds, for which a large database is now available. To build upon what has been learned during the first decade of MISR observations, we are evaluating algorithm updates that not only refine retrieval accuracies but also include enhancements (e.g., finer spatial resolution) that would have been computationally prohibitive just ten years ago. In addition, we are developing technological building blocks for future sensors that enable broader spectral coverage, wider swath, and incorporation of high-accuracy polarimetric imaging. Prototype cameras incorporating photoelastic modulators have been constructed. To fully capitalize on the rich information content of the current and next-generation of multiangle imagers, several algorithmic paradigms currently employed need to be re-examined, e.g., the use of aerosol look-up tables, neglect of 3-D effects, and binary partitioning of the atmosphere into "cloudy" or "clear" designations. Examples of progress in algorithm and technology developments geared toward advanced application of multiangle imaging to remote sensing of aerosols and clouds will be presented.
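The geometric principle behind the stereoscopic height retrievals mentioned above can be stated compactly. The relation below is the standard parallax-to-height conversion for two view zenith angles, neglecting the along-track cloud motion that the operational product treats separately; it is given for orientation only and is not MISR's full retrieval.

\[
h \;\approx\; \frac{\Delta x}{\tan\theta_1 - \tan\theta_2},
\]

where \(\Delta x\) is the along-track parallax of a cloud feature matched between two cameras with view zenith angles \(\theta_1\) and \(\theta_2\).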
Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking
NASA Astrophysics Data System (ADS)
Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira
2008-03-01
Dental implantation is one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Dental tool tip calibration is an important intraoperative procedure that determines the relation between the hand-piece tool tip and the hand-piece's markers. When transferring coordinates from preoperative CT data to the patient, this parameter is one component of the typical registration problem. It is part of a navigation system that will be developed for further integration. High accuracy is required, and this relation is obtained by point-cloud-to-point-cloud rigid transformations and singular value decomposition (SVD) for minimizing rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialize, have shown limited flexibility in tool tip calibration: they either require a special tool tip calibration device or are unable to accommodate different tools. The proposed procedure uses the pointing device or hand-piece to touch a pivot point and computes the transformation matrix; this matrix is recalculated each time the hand-piece moves to a new position while the tool tip stays at the same point. The experiment relied on information from the tracking device, image acquisition, and image processing algorithms. The key result is that the point-cloud-to-point-cloud approach requires only three poses of the tool to converge to a minimum error of 0.77%, and the obtained result is sufficient for using the tool holder to track the simulated path line displayed in the graphic animation.
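The point-cloud-to-point-cloud step rests on the standard SVD solution for the rigid transform that best aligns two sets of corresponding points in the least-squares sense. The sketch below shows that generic textbook method for illustration; it is not the paper's exact implementation.

import numpy as np

def rigid_transform_svd(P, Q):
    """Return R (3x3) and t (3,) minimizing sum ||R @ P_i + t - Q_i||^2."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                             # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Self-check with a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform_svd(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))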
NASA Astrophysics Data System (ADS)
Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin
2017-08-01
Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in the FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, the Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near-real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.
CloudSat Image of a Polar Night Storm Near Antarctica
NASA Technical Reports Server (NTRS)
2006-01-01
Figure 1: CloudSat image of a horizontal cross-section of a polar night storm near Antarctica. Until now, clouds have been hard to observe in polar regions using remote sensing, particularly during the polar winter or night season. The red colors are indicative of highly reflective particles such as water (rain) or ice crystals, while the blue indicates thinner clouds (such as cirrus). The flat green/blue lines across the bottom represent the ground signal. The vertical scale on the CloudSat Cloud Profiling Radar image is approximately 30 kilometers (19 miles). The blue line below the Cloud Profiling Radar image indicates that the data were taken over water; the brown line below the image indicates the relative elevation of the land surface. The inset image shows the CloudSat track relative to a Moderate Resolution Imaging Spectroradiometer (MODIS) infrared image taken at nearly the same time.
Volunteered Cloud Computing for Disaster Management
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster management relies increasingly on interpreting earth observations and running numerical models, which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however, some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.
Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation
NASA Technical Reports Server (NTRS)
Platnick, Steven E.
2011-01-01
The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.
CloudSat First Image of a Warm Front Storm Over the Norwegian Sea
NASA Technical Reports Server (NTRS)
2006-01-01
Figure 1: CloudSat's first image, of a warm front storm over the Norwegian Sea, was obtained on May 20, 2006. In this horizontal cross-section of clouds, warm air is seen rising over colder air as the satellite travels from right to left. The red colors are indicative of highly reflective particles such as water droplets (or rain) or larger ice crystals (or snow), while the blue indicates thinner clouds (such as cirrus). The flat green/blue lines across the bottom represent the ground signal. The vertical scale on the CloudSat Cloud Profiling Radar image is approximately 30 kilometers (19 miles). The blue line below the Cloud Profiling Radar image indicates that the data were taken over water. The inset image shows the CloudSat track relative to a Moderate Resolution Imaging Spectroradiometer (MODIS) infrared image taken at nearly the same time.
Cloud Height Maps for Hurricanes Frances and Ivan
NASA Technical Reports Server (NTRS)
2004-01-01
NASA's Multi-angle Imaging SpectroRadiometer (MISR) captured these images and cloud-top height retrievals of Hurricane Frances on September 4, 2004, when the eye sat just off the coast of eastern Florida, and Hurricane Ivan on September 5th, after this cyclone had devastated Grenada and was heading toward the central and western Caribbean. Hurricane Frances made landfall in the early hours of September 5, and was downgraded to Tropical Storm status as it swept inland through the Florida panhandle and continued northward. On the heels of Frances is Hurricane Ivan, which is on record as the strongest tropical cyclone to form at such a low latitude in the Atlantic, and was the most powerful hurricane to have hit the Caribbean in nearly a decade. The ability of forecasters to predict the intensity and amount of rainfall associated with hurricanes still requires improvement, especially on the 24 to 48 hour timescale vital for disaster planning. To improve the operational models used to make hurricane forecasts, scientists need to better understand the multi-scale interactions at the cloud, mesoscale and synoptic scales that lead to hurricane intensification and dissipation, and the various physical processes that affect hurricane intensity and rainfall distributions. Because these uncertainties with regard to how to represent cloud processes still exist, it is vital that the model findings be evaluated against hurricane observations whenever possible. Two-dimensional maps of cloud height such as those shown here offer an unprecedented opportunity for comparing simulated cloud fields against actual hurricane observations. The left-hand panel in each image pair is a natural color view from MISR's nadir camera. The right-hand panels are cloud-top height retrievals produced by automated computer recognition of the distinctive spatial features between images acquired at different view angles. These results indicate that at the time that these images were acquired, clouds within Frances and Ivan had attained altitudes of 15 kilometers and 16 kilometers above sea level, respectively. The height fields pictured here are uncorrected for the effects of cloud motion. Wind-corrected heights (which have higher accuracy but sparser spatial coverage) are within about 1 kilometer of the heights shown here. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 25081 and 25094. The panels cover an area of 380 kilometers x 924 kilometers, and utilize data from within blocks 65 to 87 within World Reference System-2 paths 14 and 222, respectively. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Aerosol-cloud interaction determined by satellite data over the Baltic Sea countries
NASA Astrophysics Data System (ADS)
Saponaro, Giulia; Kolmonen, Pekka; Sogacheva, Larisa; de Leeuw, Gerrit
2015-04-01
The present study investigates the use of long-term satellite data to assess the influence of aerosols upon cloud parameters over the Baltic Sea region. This particular area offers the contrast of a very clean environment (Fennoscandia) against a more polluted one (Germany, Poland). The dataset consists of Collection 6 Level 3 daily observations from 2002 to 2014 collected by NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the Aqua platform. The MODIS aerosol optical depth (AOD) product is used as a proxy for the number concentration of aerosol particles, while the cloud effective radius (CER) and cloud optical thickness (COT) describe cloud microphysical and optical properties, respectively. Satellite data have certain limitations, such as the restriction to the summer season due to solar zenith angle constraints and the known ambiguity of the aerosol-cloud interface. Through the analysis of a 12-year dataset, distribution maps provide information on a regional scale about the first aerosol indirect effect (AIE) by determining the aerosol-cloud interaction (ACI). The ACI is defined as the change in cloud optical depth or effective radius as a function of aerosol load for a fixed liquid water path (LWP). The focus of the current study is the evaluation of regional trends of ACI over the observed area of the Baltic Sea.
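Written out, the metric described above is usually expressed in logarithmic form. The abstract does not give the authors' exact sign and variable convention, so the following is the form commonly used in the literature, shown for orientation only:

\[
\mathrm{ACI}_{\tau} \;=\; \left.\frac{\partial \ln \mathrm{COT}}{\partial \ln \mathrm{AOD}}\right|_{\mathrm{LWP}},
\qquad
\mathrm{ACI}_{r} \;=\; -\left.\frac{\partial \ln \mathrm{CER}}{\partial \ln \mathrm{AOD}}\right|_{\mathrm{LWP}},
\]

evaluated within bins of fixed liquid water path so that changes in cloud optical depth or droplet size are attributed to the aerosol load rather than to cloud water amount.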
NASA Astrophysics Data System (ADS)
Shaw, J. A.; Nugent, P. W.
2016-12-01
Ground-based longwave-infrared (LWIR) cloud imaging can provide continuous cloud measurements in the Arctic. This is of particular importance during the Arctic winter when visible wavelength cloud imaging systems cannot operate. This method uses a thermal infrared camera to observe clouds and produce measurements of cloud amount and cloud optical depth. The Montana State University Optical Remote Sensor Laboratory deployed an infrared cloud imager (ICI) at the Atmospheric Radiation Measurement North Slope of Alaska site at Barrow, AK from July 2012 through July 2014. This study was used both to understand the long-term operation of an ICI in the Arctic and to study the consistency of the ICI data products in relation to co-located active and passive sensors. The ICI was found to have a high correlation (> 0.92) with collocated cloud instruments and to produce an unbiased data product. However, the ICI also detects thin clouds that are not detected by most operational cloud sensors. Comparisons with high-sensitivity actively sensed cloud products confirm the existence of these thin clouds. Infrared cloud imaging systems can serve a critical role in developing our understanding of cloud cover in the Arctic by providing a continuous annual measurement of clouds at sites of interest.
SN1987A IN THE LARGE MAGELLANIC CLOUD
NASA Technical Reports Server (NTRS)
2002-01-01
Glittering stars and wisps of gas create a breathtaking backdrop for the self-destruction of a massive star, called supernova 1987A, in the Large Magellanic Cloud, a nearby galaxy. Astronomers in the Southern hemisphere witnessed the brilliant explosion of this star on Feb. 23, 1987. Shown in this NASA Hubble Space Telescope image, the supernova remnant, surrounded by inner and outer rings of material, is set in a forest of ethereal, diffuse clouds of gas. This three-color image is composed of several pictures of the supernova and its neighboring region taken with the Wide Field and Planetary Camera 2 in Sept. 1994, Feb. 1996 and July 1997. The many bright blue stars nearby the supernova are massive stars, each more than six times heftier than our Sun. With ages of about 12 million years old, they are members of the same generation of stars as the star that went supernova. The presence of bright gas clouds is another sign of the youth of this region, which still appears to be a fertile breeding ground for new stars. In a few years the supernova's fast moving material will sweep the inner ring with full force, heating and exciting its gas, and will produce a new series of cosmic fireworks that will offer a striking view for more than a decade. Credit: Hubble Heritage Team (AURA/STScI/NASA)
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris
2017-04-01
Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide the basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
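For reference, the Huber cost function mentioned above is conventionally written as

\[
\rho_{\delta}(d) \;=\;
\begin{cases}
\tfrac{1}{2}\,d^{2}, & |d| \le \delta,\\[4pt]
\delta\,|d| - \tfrac{1}{2}\,\delta^{2}, & |d| > \delta,
\end{cases}
\]

where \(d\) is the normalized innovation and \(\delta\) the transition point between quadratic and linear treatment; the specific threshold used in this study is not stated in the abstract. Penalizing large innovations linearly rather than quadratically limits the influence of the fat tails while still allowing those observations to be assimilated.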
Web-based platform for collaborative medical imaging research
NASA Astrophysics Data System (ADS)
Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.
2015-03-01
Medical imaging research depends fundamentally on the availability of large image collections, image processing and analysis algorithms, hardware, and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers, and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available on the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatically generated web pages for each object in the research project or clinical study. It requires no installation or configuration from the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.
NASA Astrophysics Data System (ADS)
Jinya, John; Bipasha, Paul S.
2016-05-01
Clouds strongly modulate the Earth's energy balance and its atmosphere through their interaction with solar and terrestrial radiation. They interact with radiation in various ways, such as scattering, emission, and absorption. By observing these changes in radiation at different wavelengths, cloud properties can be estimated. Cloud properties are of utmost importance in studying different weather and climate phenomena. At present, no satellite provides cloud microphysical parameters over the Indian region with high temporal resolution. INSAT-3D imager observations in 6 spectral channels from a geostationary platform offer the opportunity to study cloud properties continuously over the Indian region. Visible (0.65 μm) and shortwave-infrared (1.67 μm) channel radiances can be used to retrieve cloud microphysical parameters such as cloud optical thickness (COT) and cloud effective radius (CER). In this paper, we have carried out a feasibility study with the objective of cloud microphysics retrieval. For this, an inter-comparison of 15 globally available radiative transfer models (RTMs) was carried out with the aim of generating a Look-Up Table (LUT). The SBDART model was chosen for the simulations. The sensitivity of each spectral channel to different cloud properties was investigated. The inputs to the RT model were configured over our study region (50°S - 50°N and 20°E - 130°E) and a large number of simulations were carried out using random input vectors to generate the LUT. The determination of cloud optical thickness and cloud effective radius from spectral reflectance measurements constitutes the inverse problem and is typically solved by comparing the measured reflectances with entries in the LUT and searching for the combination of COT and CER that gives the best fit. The products are available on the website www.mosdac.gov.in
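The LUT inversion step can be sketched as a simple best-fit search over the two-dimensional (COT, CER) grid. The forward-model values below are synthetic stand-ins rather than SBDART output, and the grids and observation values are placeholders.

import numpy as np

cot_grid = np.linspace(1, 100, 60)           # cloud optical thickness axis
cer_grid = np.linspace(4, 30, 40)            # cloud effective radius axis (micron)
COT, CER = np.meshgrid(cot_grid, cer_grid, indexing="ij")

# Placeholder forward model: the visible reflectance responds mainly to COT,
# the 1.67-micron reflectance mainly to CER.
R_vis = COT / (COT + 8.0)
R_swir = (COT / (COT + 8.0)) * np.exp(-0.05 * CER)

def invert(r_vis_obs, r_swir_obs):
    """Return the (COT, CER) pair whose LUT reflectances best fit the observation."""
    cost = (R_vis - r_vis_obs) ** 2 + (R_swir - r_swir_obs) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return cot_grid[i], cer_grid[j]

print(invert(0.65, 0.35))                    # best-fitting (COT, CER) pair

In practice the LUT would also be indexed by solar and viewing geometry, and interpolation between grid nodes would replace the nearest-node search shown here.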
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
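The row-key idea can be illustrated with a short sketch. Field order, widths, and the separator below are assumptions made for the illustration, not the paper's exact key design.

def make_row_key(project, subject, session, scan, slice_idx):
    """Hierarchical row key: project / subject / session / scan / slice."""
    # Zero-padding keeps lexicographic (HBase) order consistent with numeric order,
    # so all slices of a scan, and all scans of a subject, stay contiguous.
    return "{}|{}|{:04d}|{:04d}|{:05d}".format(project, subject, session, scan, slice_idx)

keys = [make_row_key("ProjA", "Sub012", 1, scan, sl)
        for scan in (1, 2) for sl in (1, 2, 3)]
print(sorted(keys) == keys)    # hierarchically related rows sort together

Because HBase stores rows in sorted order, a key of this shape keeps hierarchically related images adjacent, which is what allows a collocation-aware allocation policy to keep a subject's data on the same region server and minimize network traffic during processing.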
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Schwartz, Stephen E.; Yu, Dantong
Clouds are a central focus of the U.S. Department of Energy (DOE)’s Atmospheric System Research (ASR) program and Atmospheric Radiation Measurement (ARM) Climate Research Facility, and more broadly are the subject of much investigation because of their important effects on atmospheric radiation and, through feedbacks, on climate sensitivity. Significant progress has been made by moving from a vertically pointing (“soda-straw”) to a three-dimensional (3D) view of clouds by investing in scanning cloud radars through the American Recovery and Reinvestment Act of 2009. Yet, because of the physical nature of radars, there are key gaps in ARM's cloud observational capabilities. For example, cloud radars often fail to detect small shallow cumulus and thin cirrus clouds that are nonetheless radiatively important. Furthermore, it takes five to twenty minutes for a cloud radar to complete a 3D volume scan and clouds can evolve substantially during this period. Ground-based stereo-imaging is a promising technique to complement existing ARM cloud observation capabilities. It enables the estimation of cloud coverage, height, horizontal motion, morphology, and spatial arrangement over an extended area of up to 30 by 30 km at refresh rates greater than 1 Hz (Peng et al. 2015). With fine spatial and temporal resolution of modern sky cameras, the stereo-imaging technique allows for the tracking of a small cumulus cloud or a thin cirrus cloud that cannot be detected by a cloud radar. With support from the DOE SunShot Initiative, the Principal Investigator (PI)’s team at Brookhaven National Laboratory (BNL) has developed some initial capability for cloud tracking using multiple distinctly located hemispheric cameras (Peng et al. 2015). To validate the ground-based cloud stereo-imaging technique, the cloud stereo-imaging field campaign was conducted at the ARM Facility’s Southern Great Plains (SGP) site in Oklahoma from July 15 to December 24. As shown in Figure 1, the cloud stereo-imaging system consisted of two inexpensive high-definition (HD) hemispheric cameras (each cost less than $1,500) and ARM’s Total Sky Imager (TSI). Together with other co-located ARM instrumentation, the campaign provides a promising opportunity to validate stereo-imaging-based cloud base height and, more importantly, to examine the feasibility of cloud thickness retrieval for low-view-angle clouds.
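As a rough illustration of the geometry behind stereo-imaging of cloud base height, the snippet below triangulates a feature seen from two cameras under the simplifying assumption that the feature lies in the vertical plane through both sites; the baseline and angles are made-up numbers, and the real pipeline (Peng et al. 2015) involves considerably more (lens calibration, epipolar matching, layer segmentation).

    import math

    def cloud_height_from_parallax(baseline_m, elev1_deg, elev2_deg):
        """Height of a cloud feature above the camera plane from the elevation
        angles measured at two sites separated by baseline_m (simplified 2-D case)."""
        cot1 = 1.0 / math.tan(math.radians(elev1_deg))
        cot2 = 1.0 / math.tan(math.radians(elev2_deg))
        return baseline_m / abs(cot1 - cot2)

    # Two cameras 1.2 km apart seeing the same cloud edge at 40 and 55 degrees elevation:
    print(round(cloud_height_from_parallax(1200.0, 40.0, 55.0)))   # roughly 2.4 km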
Qu, Wei-ping; Liu, Wen-qing; Liu, Jian-guo; Lu, Yi-huai; Zhu, Jun; Qin, Min; Liu, Cheng
2006-11-01
In satellite remote sensing, clouds act as an interference that degrades data retrieval, so discerning cloud fields with high fidelity is a prerequisite for subsequent research. This paper presents a method rooted in the atmospheric radiation characteristics of the cloud layer, in which a single-band brightness variance ratio is used to detect the relative intensity of cloud clutter and thereby delineate cloud fields rapidly and accurately. Formulae for the brightness variance ratio of the satellite image, the image reflectance variance ratio, and the brightness temperature variance ratio of the thermal infrared image are given so that cloud elimination can produce data free from cloud interference. Based on the differing penetration capability of the spectral bands, an objective evaluation of their cloud penetration is made together with the factors that influence the penetration effect. Finally, a multi-band data fusion task is completed using infrared imagery penetrating cirrus nothus. The reconstructed image data are of good quality and accuracy and reveal the true visible-band signal covered by the cloud fields, and statistics indicate that the inter-band correlation is consistent with the image data after fusion.
Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen
2016-11-01
To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.
A High Definition View of AGN Feedback: Chandra Imaging of Nearby Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Wang, Junfeng; Fabbiano, G.; Risaliti, G.; Elvis, M.; Karovska, M.; Zezas, A.; Mundell, C. G.
2010-03-01
To improve our understanding of the physics of AGN feedback, it is crucial to evaluate the true role of outflows in galaxy evolution observationally. I will present new results from Chandra spectral imaging of nearby Seyfert galaxies, which offer unique opportunities to examine feedback in action in much greater detail than at high redshift. Exploiting Chandra's highest possible resolution, we are able to study structures in NGC 4151 on spatial scales of 0.5 arcsec (30 pc), showing an extended X-ray morphology overall consistent with the optical NLR. We find that most of the NLR clouds in NGC 4151 have [OIII] to soft X-ray ratios consistent with the values observed in NLRs of some Seyfert 2 galaxies, which indicates a uniform ionization parameter even at large radii. We examine various X-ray emission mechanisms for the radio jet and consider thermal emission from the interaction between the radio outflow and the NLR clouds the most probable origin of the X-ray emission associated with the jet.
NASA Astrophysics Data System (ADS)
Doelling, David R.; Bhatt, Rajendra; Haney, Conor O.; Gopalan, Arun; Scarino, Benjamin R.
2017-09-01
The new 3rd generation geostationary (GEO) imagers will have many of the same NPP-VIIRS imager spectral bands, thereby offering the opportunity to apply the VIIRS cloud, aerosol, and land use retrieval algorithms on the new GEO imager measurements. Climate quality retrievals require multi-channel calibrated radiances that are stable over time. The deep convective cloud calibration technique (DCCT) is a large ensemble statistical technique that assumes that the DCC reflectance is stable over time. Because DCC are found in sufficient numbers across all GEO domains, they provide a uniform calibration stability evaluation across the GEO constellation. The baseline DCCT has been successful in calibrating visible and near-infrared channels. However, for shortwave infrared (SWIR) channels the DCCT is not as effective for monitoring radiometric stability. The DCCT was optimized as a function of wavelength in this paper. For SWIR bands, the greatest reduction of the DCC response trend standard error was achieved through deseasonalization. This is effective because the DCC reflectance exhibits small regional seasonal cycles that can be characterized on a monthly basis. On the other hand, the inter-annual variability in DCC response was found to be extremely small. The Met-9 0.65-μm channel DCC response was found to have a 3% seasonal cycle. Deseasonalization reduced the trend standard error from 1% to 0.4%. For the NPP-VIIRS SWIR bands, deseasonalization reduced the trend standard error by more than half. All VIIRS SWIR band trend standard errors were less than 1%. The DCCT should be able to monitor the stability of all GEO imager solar reflective bands across the tropical domain with the same uniform accuracy.
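Deseasonalization of a monthly DCC response record, of the kind the paper credits with most of the error reduction in the SWIR bands, can be sketched as a division by the mean seasonal cycle. The normalization below is a plausible stand-in, not necessarily the exact form used in the study.

    import numpy as np

    def deseasonalize(monthly_response):
        """Divide each monthly DCC response by the multi-year mean of its calendar
        month, removing the repeating seasonal cycle before trend fitting."""
        x = np.asarray(monthly_response, dtype=float)
        month = np.arange(x.size) % 12
        climatology = np.array([x[month == m].mean() for m in range(12)])
        return x / climatology[month]

    # Example: 8 years of a synthetic response with a 3% seasonal cycle plus noise.
    t = np.arange(96)
    series = 1.0 + 0.03 * np.sin(2 * np.pi * t / 12) + 0.004 * np.random.randn(96)
    print(series.std(), deseasonalize(series).std())   # scatter shrinks after deseasonalization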
MODIS Views Variations in Cloud Types
NASA Technical Reports Server (NTRS)
2002-01-01
This MODIS image, centered over the Great Lakes region in North America, shows a variety of cloud types. The clouds at the top of the image, colored pink, are cold, high-level snow and ice clouds, while the neon green clouds are lower-level water clouds. Because different cloud types reflect and emit radiant energy differently, scientists can use MODIS' unique data set to measure the sizes of cloud particles and distinguish between water, snow, and ice clouds. This scene was acquired on Feb. 24, 2000, and is a red, green, blue composite of bands 1, 6, and 31 (0.66, 1.6, and 11.0 microns, respectively). Image by Liam Gumley, Space Science and Engineering Center, University of Wisconsin-Madison
NASA Astrophysics Data System (ADS)
Furht, Borko
In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
2006-09-01
This vibrant image from NASA's Spitzer Space Telescope shows the Large Magellanic Cloud, a satellite galaxy to our own Milky Way galaxy. The infrared image, a mosaic of more than 100,000 individual tiles, offers astronomers a unique chance to study the lifecycle of stars and dust in a single galaxy. Nearly one million objects are revealed for the first time in this Spitzer view, which represents about a 1,000-fold improvement in sensitivity over previous space-based missions. Most of the new objects are dusty stars of various ages populating the Large Magellanic Cloud; the rest are thought to be background galaxies. The blue color in the picture, seen most prominently in the central bar, represents starlight from older stars. The chaotic, bright regions outside this bar are filled with hot, massive stars buried in thick blankets of dust. The red clouds contain cooler interstellar gas and molecular-sized dust grains illuminated by ambient starlight. The Large Magellanic Cloud, located 160,000 light-years from Earth, is one of a handful of dwarf galaxies that orbit our own Milky Way. It is approximately one-third as wide as the Milky Way, and, if it could be seen in its entirety, would cover the same amount of sky as a grid of about 480 full moons. About one-third of the whole galaxy can be seen in the Spitzer image. This picture is a composite of infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns is red and orange; 4.5-micron light is green; and 3.6-micron light is blue. http://photojournal.jpl.nasa.gov/catalog/PIA07136
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitalization of cultural heritage based on terrestrial laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from the complete point cloud and high-resolution images requires the matching of image and point cloud, the extraction of corresponding feature points, data registration, etc. However, the one-to-one correspondence between an image and its point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the synergy of a mobile phone app. Firstly, we develop an Android app to take pictures and record the related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. Based on the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image with its corresponding laser scanning point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established from corresponding feature points. In this way we can establish a data structure linking the global image, the local images within it and the point cloud corresponding to each local image, and carry out visual management and querying of the images.
Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers
NASA Technical Reports Server (NTRS)
Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino
2012-01-01
Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).
The diverse use of clouds by CMS
Andronis, Anastasios; Bauer, Daniela; Chaze, Olivier; ...
2015-12-23
The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows including: Tier 0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites, when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. Finally, CMS is starting to utilise cloud resources being offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. We also present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference.
Jupiter's Great Red Spot in True Color
2017-07-27
This image of Jupiter's iconic Great Red Spot (GRS) was created by citizen scientist Björn Jónsson using data from the JunoCam imager on NASA's Juno spacecraft. This true-color image offers a natural color rendition of what the Great Red Spot and surrounding areas would look like to human eyes from Juno's position. The tumultuous atmospheric zones in and around the Great Red Spot are clearly visible. The image was taken on July 10, 2017 at 07:10 p.m. PDT (10:10 p.m. EDT), as the Juno spacecraft performed its seventh close flyby of Jupiter. At the time the image was taken, the spacecraft was about 8,648 miles (13,917 kilometers) from the tops of the clouds of the planet at a latitude of -32.6 degrees. https://photojournal.jpl.nasa.gov/catalog/PIA21775
Creating cloud-free Landsat ETM+ data sets in tropical landscapes: cloud and cloud-shadow removal
Sebastián Martinuzzi; William A. Gould; Olga M. Ramos Gonzalez
2007-01-01
Clouds and cloud shadows are common features of visible and infrared remotelysensed images collected from many parts of the world, particularly in humid and tropical regions. We have developed a simple and semiautomated method to mask clouds and shadows in Landsat ETM+ imagery, and have developed a recent cloud-free composite of multitemporal images for Puerto Rico and...
Hubble Provides Infrared View of Jupiter's Moon, Ring, and Clouds
NASA Technical Reports Server (NTRS)
1997-01-01
Probing Jupiter's atmosphere for the first time, the Hubble Space Telescope's new Near Infrared Camera and Multi-Object Spectrometer (NICMOS) provides a sharp glimpse of the planet's ring, moon, and high-altitude clouds.
The presence of methane in Jupiter's hydrogen- and helium-rich atmosphere has allowed NICMOS to plumb Jupiter's atmosphere, revealing bands of high-altitude clouds. Visible light observations cannot provide a clear view of these high clouds because the underlying clouds reflect so much visible light that the higher level clouds are indistinguishable from the lower layer. The methane gas between the main cloud deck and the high clouds absorbs the reflected infrared light, allowing those clouds that are above most of the atmosphere to appear bright. Scientists will use NICMOS to study the high altitude portion of Jupiter's atmosphere to study clouds at lower levels. They will then analyze those images along with visible light information to compile a clearer picture of the planet's weather. Clouds at different levels tell unique stories. On Earth, for example, ice crystal (cirrus) clouds are found at high altitudes while water (cumulus) clouds are at lower levels. Besides showing details of the planet's high-altitude clouds, NICMOS also provides a clear view of the ring and the moon, Metis. Jupiter's ring plane, seen nearly edge-on, is visible as a faint line on the upper right portion of the NICMOS image. Metis can be seen in the ring plane (the bright circle on the ring's outer edge). The moon is 25 miles wide and about 80,000 miles from Jupiter. Because of the near-infrared camera's narrow field of view, this image is a mosaic constructed from three individual images taken Sept. 17, 1997. The color intensity was adjusted to accentuate the high-altitude clouds. The dark circle on the disk of Jupiter (center of image) is an artifact of the imaging system. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
Introducing two Random Forest based methods for cloud detection in remote sensing images
NASA Astrophysics Data System (ADS)
Ghasemian, Nafiseh; Akhoondzadeh, Mehdi
2018-07-01
Cloud detection is a necessary phase in satellite image processing to retrieve the atmospheric and lithospheric parameters. Currently, some cloud detection methods based on the Random Forest (RF) model have been proposed but they do not consider both spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF) to incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF) including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI) or visible, IR and thermal classifiers (DLFRF) for highly accurate cloud detection on remote sensing images. FLFRF first fuses visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels to cloud, snow/ice and background or thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and inserts each set of features into the RF model. Then, it retains the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than other machine learning methods, Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than other traditional methods. The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K-nearest neighbor show overestimation in predicting cloud and snow/ice pixels, our Random Forest (RF) based models can achieve higher cloud and snow/ice kappa values on MODIS and thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images while the existing cloud detection algorithm, Fmask, cannot discriminate them. Compared to the state-of-the-art methods, our algorithms have acquired higher average cloud and snow/ice kappa values for different spatial resolutions.
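A minimal sketch of the feature-level fusion idea (FLFRF): stack spectral and textural features for each pixel and train a single Random Forest on the combined vector. The arrays, class labels and forest size below are placeholders for illustration, not the paper's data or tuning.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_pixels = 1000
    spectral = rng.random((n_pixels, 6))        # visible / IR / thermal values per pixel
    textural = rng.random((n_pixels, 8))        # e.g. GLCM and RELBP_CI measures per pixel
    labels = rng.integers(0, 3, n_pixels)       # 0 = background, 1 = cloud, 2 = snow/ice

    X = np.hstack([spectral, textural])         # feature-level fusion before training
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    pred = forest.predict(X)                    # per-pixel class prediction

The decision-level variant (DLFRF) would instead train one forest per feature group and combine their per-pixel votes by majority.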
NASA Astrophysics Data System (ADS)
Ye, L.; Wu, B.
2017-09-01
High-resolution imagery is an attractive option for surveying and mapping applications due to the advantages of high quality imaging, short revisit time, and lower cost. Automated reliable and dense image matching is essential for photogrammetric 3D data derivation. Such matching, in urban areas, however, is extremely difficult, owing to the complexity of urban textures and severe occlusion problems on the images caused by tall buildings. Aimed at exploiting high-resolution imagery for 3D urban modelling applications, this paper presents an integrated image matching and segmentation approach for reliable dense matching of high-resolution imagery in urban areas. The approach is based on the framework of our existing self-adaptive triangulation constrained image matching (SATM), but incorporates three novel aspects to tackle the image matching difficulties in urban areas: 1) occlusion filtering based on image segmentation, 2) segment-adaptive similarity correlation to reduce the similarity ambiguity, 3) improved dense matching propagation to provide more reliable matches in urban areas. Experimental analyses were conducted using aerial images of Vaihingen, Germany and high-resolution satellite images in Hong Kong. The photogrammetric point clouds were generated, from which digital surface models (DSMs) were derived. They were compared with the corresponding airborne laser scanning data and the DSMs generated from the Semi-Global matching (SGM) method. The experimental results show that the proposed approach is able to produce dense and reliable matches comparable to SGM in flat areas, while for densely built-up areas, the proposed method performs better than SGM. The proposed method offers an alternative solution for 3D surface reconstruction in urban areas.
NASA Astrophysics Data System (ADS)
Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús
2011-09-01
This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage to traditional methods, the approach does not rely on thermal bands and it is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between high resolution images and cloud free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results identifying and mapping clouds of different type and size over various land surfaces including natural vegetation, agriculture land, built-up areas, water bodies and snow.
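The coupling of pixel-based seeding and object-based region growing described above can be sketched with simple thresholds and morphological dilation; the offsets, iteration cap and the use of a resampled cloud-free composite as the reference are illustrative assumptions rather than the paper's calibrated settings.

    import numpy as np
    from scipy import ndimage

    def grow_cloud_mask(high_res, reference, seed_offset=0.25, grow_offset=0.10, max_iter=100):
        """Seed pixels where the high-resolution reflectance exceeds the cloud-free
        reference by a strong margin, then grow the mask into neighbouring pixels
        that exceed a weaker margin (object-based region growing)."""
        seeds = high_res > reference + seed_offset
        candidates = high_res > reference + grow_offset
        mask = seeds.copy()
        for _ in range(max_iter):
            grown = ndimage.binary_dilation(mask) & candidates
            if np.array_equal(grown, mask):
                break
            mask = grown
        return mask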
Eleven quick tips for architecting biomedical informatics workflows with cloud computing.
Cole, Brian S; Moore, Jason H
2018-03-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Applied high-speed imaging for the icing research program at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slater, Howard; Owens, Jay; Shin, Jaiwon
1992-01-01
The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists as well as to research personnel about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.
Ellis, Shane R; Soltwisch, Jens; Heeren, Ron M A
2014-05-01
In this study, we describe the implementation of a position- and time-sensitive detection system (Timepix detector) to directly visualize the spatial distributions of the matrix-assisted laser desorption ionization ion cloud in a linear-time-of-flight (MALDI linear-ToF) as it is projected onto the detector surface. These time-resolved images allow direct visualization of m/z-dependent ion focusing effects that occur within the ion source of the instrument. The influence of key parameters, namely extraction voltage (E(V)), pulsed-ion extraction (PIE) delay, and even the matrix-dependent initial ion velocity was investigated and were found to alter the focusing properties of the ion-optical system. Under certain conditions where the spatial focal plane coincides with the detector plane, so-called x-y space focusing could be observed (i.e., the focusing of the ion cloud to a small, well-defined spot on the detector). Such conditions allow for the stigmatic ion imaging of intact proteins for the first time on a commercial linear ToF-MS system. In combination with the ion-optical magnification of the system (~100×), a spatial resolving power of 11–16 μm with a pixel size of 550 nm was recorded within a laser spot diameter of ~125 μm. This study demonstrates both the diagnostic and analytical advantages offered by the Timepix detector in ToF-MS.
NASA Technical Reports Server (NTRS)
Lampton, M.; Malina, R. F.
1976-01-01
A position-sensitive event-counting electronic readout system for microchannel plates (MCPs) is described that offers the advantages of high spatial resolution and fast time resolution. The technique relies upon a four-quadrant electron-collecting anode located behind the output face of the microchannel plate, so that the electron cloud from each detected event is partly intercepted by each of the four quadrants. The relative amounts of charge collected by each quadrant depend on event position, permitting each event to be localized with two ratio circuits. A prototype quadrant anode system for ion, electron, and extreme ultraviolet imaging is described. The spatial resolution achieved, about 10 microns, allows individual MCP channels to be distinguished.
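The ratio computation at the heart of the quadrant-anode readout can be written in a few lines; the quadrant labelling and sign convention below are the textbook ones for a quadrant detector and are assumptions, since the prototype's circuit details are defined in the paper itself.

    def quadrant_position(q_a, q_b, q_c, q_d):
        """Event centroid from the charge collected on four quadrants
        (A upper-left, B upper-right, C lower-right, D lower-left)."""
        total = q_a + q_b + q_c + q_d
        x = ((q_b + q_c) - (q_a + q_d)) / total   # right minus left
        y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
        return x, y

    # An event whose charge cloud lands mostly on the right-hand quadrants:
    print(quadrant_position(0.1, 0.4, 0.4, 0.1))  # x > 0, y = 0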
Cloud-Induced Uncertainty for Visual Navigation
2014-12-26
images at the pixel level. The result is a method that can overlay clouds with various structures on top of any desired image to produce realistic...cloud-shaped structures. The primary contribution of this research, however, is to investigate and quantify the errors in features due to clouds. The...of cloud types, this method does not emulate the true structure of clouds. An alternative popular modern method of creating synthetic clouds is known
3D cloud detection and tracking system for solar forecast using multiple sky imagers
Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...
2015-06-23
We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds and close-range images is central to high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in this field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, pairwise registration of the two data types is achieved by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud; this process not only greatly reduces efficiency but also degrades registration accuracy and causes seams in the coloured point cloud texture. To solve these problems, this paper takes the whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the central-projection reflection-intensity image of the point cloud and the optical image is applied to find corresponding feature points automatically, and the Rodrigues-matrix spatial similarity transformation model together with iterative weight selection is used to achieve automatic, high-accuracy registration of the two data types. The method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relics, and has both scientific and practical value.
Lidar measurements of boundary layers, aerosol scattering and clouds during project FIFE
NASA Technical Reports Server (NTRS)
Eloranta, Edwin W. (Principal Investigator)
1995-01-01
A detailed account of progress achieved under this grant funding is contained in five journal papers. The titles of these papers are: The calculation of area-averaged vertical profiles of the horizontal wind velocity using volume imaging lidar data; Volume imaging lidar observation of the convective structure surrounding the flight path of an instrumented aircraft; Convective boundary layer mean depths, cloud base altitudes, cloud top altitudes, cloud coverages, and cloud shadows obtained from Volume Imaging Lidar data; An accuracy analysis of the wind profiles calculated from Volume Imaging Lidar data; and Calculation of divergence and vertical motion from volume-imaging lidar data. Copies of these papers form the body of this report.
NASA Technical Reports Server (NTRS)
2002-01-01
The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum, the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38 μm (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38 μm channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and allows access to biomedical image processing and analysis services to researchers via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
NASA Astrophysics Data System (ADS)
Sus, Oliver; Stengel, Martin; Stapelberg, Stefan; McGarragh, Gregory; Poulsen, Caroline; Povey, Adam C.; Schlundt, Cornelia; Thomas, Gareth; Christensen, Matthew; Proud, Simon; Jerg, Matthias; Grainger, Roy; Hollmann, Rainer
2018-06-01
We present here the key features of the Community Cloud retrieval for CLimate (CC4CL) processing algorithm. We focus on the novel features of the framework: the optimal estimation approach in general, explicit uncertainty quantification through rigorous propagation of all known error sources into the final product, and the consistency of our long-term, multi-platform time series provided at various resolutions, from 0.5 to 0.02°. By describing all key input data and processing steps, we aim to inform the user about important features of this new retrieval framework and its potential applicability to climate studies. We provide an overview of the retrieved and derived output variables. These are analysed for four, partly very challenging, scenes collocated with CALIOP (Cloud-Aerosol lidar with Orthogonal Polarization) observations in the high latitudes and over the Gulf of Guinea-West Africa. The results show that CC4CL provides very realistic estimates of cloud top height and cover for optically thick clouds but, where optically thin clouds overlap, returns a height between the two layers. CC4CL is a unique, coherent, multi-instrument cloud property retrieval framework applicable to passive sensor data of several EO missions. Through its flexibility, CC4CL offers the opportunity for combining a variety of historic and current EO missions into one dataset, which, compared to single sensor retrievals, is improved in terms of accuracy and temporal sampling.
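For readers unfamiliar with the optimal estimation formalism underlying CC4CL, the standard cost function and posterior covariance (in the usual Rodgers notation) are sketched below; the abstract does not spell these out, so the exact state vector and error covariances used in CC4CL are not implied here.

    J(\mathbf{x}) = \left[\mathbf{y} - F(\mathbf{x})\right]^{\mathsf{T}} \mathbf{S}_y^{-1} \left[\mathbf{y} - F(\mathbf{x})\right]
                  + \left[\mathbf{x} - \mathbf{x}_a\right]^{\mathsf{T}} \mathbf{S}_a^{-1} \left[\mathbf{x} - \mathbf{x}_a\right],
    \qquad
    \hat{\mathbf{S}} = \left(\mathbf{K}^{\mathsf{T}} \mathbf{S}_y^{-1} \mathbf{K} + \mathbf{S}_a^{-1}\right)^{-1}

Here y is the measurement vector, F the forward model, x_a and S_a the prior state and its covariance, S_y the measurement-plus-forward-model error covariance, and K the Jacobian; the diagonal of the posterior covariance is what provides the per-pixel uncertainty propagated into the final product.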
Remote sensing image segmentation based on Hadoop cloud platform
NASA Astrophysics Data System (ADS)
Li, Jie; Zhu, Lingling; Cao, Fubin
2018-01-01
To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a segmentation method based on the Hadoop platform. After analysing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, we propose an image segmentation method combining OpenCV with the Hadoop cloud platform. First, the MapReduce image processing model for the Hadoop cloud platform is designed: image input and output are customized and the split method for the data file is rewritten. The Mean Shift image segmentation algorithm is then implemented. Finally, a segmentation experiment is performed on a remote sensing image and compared with the same experiment implemented in MATLAB with the Mean Shift algorithm. The experimental results show that, while maintaining good segmentation quality, the segmentation rate on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation, and the effectiveness of image segmentation is also much improved.
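As a small illustration of the per-tile processing that could run inside a map task, the snippet below applies OpenCV's Mean Shift filtering to one image tile; the tile itself is a random placeholder, and the Hadoop I/O plumbing (custom InputFormat, HDFS reads) is omitted.

    import cv2
    import numpy as np

    # Placeholder for one remote sensing tile handed to a map task as an 8-bit BGR array.
    tile = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)

    def map_task(tile_bgr):
        """Mean Shift segmentation of a single tile (spatial radius 21 px, colour radius 30)."""
        return cv2.pyrMeanShiftFiltering(tile_bgr, sp=21, sr=30)

    segmented = map_task(tile)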
Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories
NASA Astrophysics Data System (ADS)
AlMomani, Abd AlRahman R.; Bollt, Erik
2018-06-01
Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model underlying flow fields, related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and hence to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than traditional methods of finding coherent sets, but rather as an approach that works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter, and a lake-effect snow event local to Potsdam, NY, on Earth, as well as the benchmark double-gyre system.
Digital all-sky polarization imaging of partly cloudy skies.
Pust, Nathan J; Shaw, Joseph A
2008-12-01
Clouds reduce the degree of linear polarization (DOLP) of skylight relative to that of a clear sky. Even thin subvisual clouds in the "twilight zone" between clouds and aerosols produce a drop in skylight DOLP long before clouds become visible in the sky. In contrast, the angle of polarization (AOP) of light scattered by a cloud in a partly cloudy sky remains the same as in the clear sky for most cases. In unique instances, though, select clouds display AOP signatures that are oriented 90 degrees from the clear-sky AOP. For these clouds, scattered light oriented parallel to the scattering plane dominates the perpendicularly polarized Rayleigh-scattered light between the instrument and the cloud. For liquid clouds, this effect may assist cloud particle size identification because it occurs only over a relatively limited range of particle radii that will scatter parallel polarized light. Images are shown from a digital all-sky-polarization imager to illustrate these effects. Images are also shown that provide validation of previously published theories for weak (approximately 2%) polarization parallel to the scattering plane for a 22 degrees halo.
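The two quantities discussed above follow directly from the Stokes parameters measured by such an imager; the sketch below uses the standard definitions and is not specific to this instrument's calibration.

    import numpy as np

    def dolp_aop(I, Q, U):
        """Degree of linear polarization and angle of polarization from Stokes images."""
        dolp = np.sqrt(Q**2 + U**2) / I
        aop = 0.5 * np.degrees(np.arctan2(U, Q))   # angle of polarization in degrees
        return dolp, aop

    # A clear-sky pixel (strong perpendicular polarization) next to a cloudy pixel:
    print(dolp_aop(np.array([1.0, 1.0]), np.array([-0.5, -0.02]), np.array([0.0, 0.0])))

The toy values mimic the behaviour described in the abstract: the cloudy pixel's DOLP collapses while its AOP stays aligned with that of the clear sky.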
Cloud Imagers Offer New Details on Earth's Health
NASA Technical Reports Server (NTRS)
2009-01-01
A stunning red sunset or purple sunrise is an aesthetic treat with a scientific explanation: The colors are a direct result of the absorption or reflectance of solar radiation by atmospheric aerosols, minute particles (either solid or liquid) in the Earth's atmosphere that occur both naturally and because of human activity. At the beginning or end of the day, the Sun's rays travel farther through the atmosphere to reach an observer's eyes and more green and yellow light is scattered, making the Sun appear red. Sunset and sunrise are especially colorful when the concentration of atmospheric particles is high. This ability of aerosols to absorb and reflect sunlight is not just pretty; it also determines the amount of radiation and heat that reaches the Earth's surface, and can profoundly affect climate. In the atmosphere, aerosols are also important as nuclei for the condensation of water droplets and ice crystals. Clouds with fewer aerosols cannot form as many water droplets (called cloud particles), and consequently, do not scatter light well. In this case, more sunlight reaches the Earth's surface. When aerosol levels in clouds are high, however, more nucleation points can form small liquid water droplets. These smaller cloud particles can reflect up to 90 percent of visible radiation to space, keeping the heat from ever reaching Earth's surface. The tendency for these particles to absorb or reflect the Sun's energy - called extinction by astronomers - depends on a number of factors, including chemical composition and the humidity and temperature in the surrounding air; because cloud particles are so small, they are affected quickly by minute changes in the atmosphere. Because of this sensitivity, atmospheric scientists study cloud particles to anticipate patterns and shifts in climate. Until recently, NASA's study of atmospheric aerosols and cloud particles has been focused primarily on satellite images, which, while granting large-scale atmospheric analysis, limited scientists' ability to acquire detailed information about individual particles. Now, experiments with specialized equipment can be flown on standard jets, making it possible for researchers to monitor and more accurately anticipate changes in Earth's atmosphere and weather patterns.
Clouds above the Martian Limb: Viking observations
NASA Technical Reports Server (NTRS)
Martin, L. J.; Baum, W. A.; Wasserman, L. H.; Kreidl, T. J.
1984-01-01
Whenever Viking Orbiter images included the limb of Mars, they recorded one or more layers of clouds above the limb. The height above the limb and the brightness (reflectivity) of these clouds were determined in a selected group of these images. Normalized individual brightness profiles of three separate traverses across the limb of each image are shown. The most notable finding is that some of these clouds can be very high. Many reach heights of over 60 km, and several are over 70 km above the limb. Statistically, the reflectivity of the clouds increases with phase angle. Reflectivity and height both appear to vary with season, but the selected images spanned only one Martian year, so the role of seasons cannot be isolated. Limb clouds in red-filter images tend to be brighter than violet-filter images, but both season and phase appear to be more dominant factors. Due to the limited sample available, the possible influences of latitude and longitude are less clear. The layering of these clouds ranges from a single layer to five or more layers. Reflectivity gradients range from smooth and gentle to steep and irregular.
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Xiu; Wang, Yueyun
2018-03-01
Landsat data are widely used in various earth observations, but clouds interfere with the application of the images. This paper proposes a weighted variational gradient-based fusion (WVGBF) method for high-fidelity thin cloud removal from Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates gradient information from a reference band into the visible bands of the cloudy image to recover spatial detail and remove thin clouds, but because it applies the same gradient constraint to the entire image, it causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to preserve image fidelity. The distribution of the weight coefficient is derived from a cloud thickness map, which is built with Independent Component Analysis (ICA) applied to multi-temporal Landsat images. Quantitatively, we use the R value to evaluate fidelity in cloudless regions and the metric Q to evaluate clarity in cloudy areas. The experimental results indicate that the proposed method is better able to remove thin cloud while achieving high fidelity.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage, free movement, benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal distortion-free camera.
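A brute-force matching step of the kind named in the title could look like the sketch below; ORB is chosen purely for illustration (the paper does not commit to a particular detector), and the two inputs are assumed to be 8-bit grayscale arrays for the real and synthetic images.

    import cv2

    def match_real_to_synthetic(real_gray, synthetic_gray, keep=100):
        """Detect ORB features in both images and brute-force match their binary
        descriptors with cross-checking, returning the strongest correspondences."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(real_gray, None)
        kp2, des2 = orb.detectAndCompute(synthetic_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        return kp1, kp2, matches[:keep]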
CloudSat Overflight of Hurricane Bud
2006-07-13
The image at the top of figure 1 is from a geostationary imager. The colors relate to the temperature of the clouds. The higher the clouds, the lower the temperature. The highest, coldest clouds are located near the center of the hurricane.
NASA Astrophysics Data System (ADS)
Smith, William L., Jr.
The threat for aircraft icing in clouds is a significant hazard that routinely impacts aviation operations. Accurate diagnoses and forecasts of aircraft icing conditions requires identifying the location and vertical distribution of clouds with super-cooled liquid water (SLW) droplets, as well as the characteristics of the droplet size distribution. Traditional forecasting methods rely on guidance from numerical models and conventional observations, neither of which currently resolve cloud properties adequately on the optimal scales needed for aviation. Satellite imagers provide measurements over large areas with high spatial resolution that can be interpreted to identify the locations and characteristics of clouds, including features associated with adverse weather and storms. This thesis develops new techniques for interpreting cloud products derived from satellite data to infer the flight icing threat to aircraft in a wide range of cloud conditions. For unobscured low clouds, the icing threat is determined using empirical relationships developed from correlations between satellite imager retrievals of liquid water path and droplet size with icing conditions reported by pilots (PIREPS). For deep ice over water cloud systems, ice and liquid water content profiles are derived by using the imager cloud properties to constrain climatological information on cloud vertical structure and water phase obtained apriori from radar and lidar observations, and from cloud model analyses. Retrievals of the SLW content embedded within overlapping clouds are mapped to the icing threat using guidance from an airfoil modeling study. Compared to PIREPS, the satellite icing detection and intensity accuracies are found to be about 90% and 70%, respectively. Mean differences between the imager IWC retrievals with those from CloudSat and Calipso are less than 30%. This level of closure in the cloud water budget can only be achieved by correcting for errors in the imager retrievals due to the simplifying but poor assumption that deep optically thick clouds are single-phase and vertically homogeneous. When applied to geostationary satellite data, the profiling method provides a real-time characterization of clouds in 4-D. This research should improve the utility of satellite imager data for quantitatively diagnosing and predicting clouds and their effects in weather and climate applications.
A method for quantifying cloud immersion in a tropical mountain forest using time-lapse photography
Bassiouni, Maoya; Scholl, Martha A.; Torres-Sanchez, Angel J.; Murphy, Sheila F.
2017-01-01
Quantifying the frequency, duration, and elevation range of fog or cloud immersion is essential to estimate cloud water deposition in water budgets and to understand the ecohydrology of cloud forests. The goal of this study was to develop a low-cost and high spatial-coverage method to detect occurrence of cloud immersion within a mountain cloud forest by using time-lapse photography. Trail cameras and temperature/relative humidity sensors were deployed at five sites covering the elevation range from the assumed lifting condensation level to the mountain peaks in the Luquillo Mountains of Puerto Rico. Cloud-sensitive image characteristics (contrast, the coefficient of variation and the entropy of pixel luminance, and image colorfulness) were used with a k-means clustering approach to accurately detect cloud-immersed conditions in a time series of images from March 2014 to May 2016. Images provided hydrologically meaningful cloud-immersion information while temperature-relative humidity data were used to refine the image analysis using dew point information and provided temperature gradients along the elevation transect. Validation of the image processing method with human-judgment based classification generally indicated greater than 90% accuracy. Cloud-immersion frequency averaged 80% at sites above 900 m during nighttime hours and 49% during daytime hours, and was consistent with diurnal patterns of cloud immersion measured in a previous study. Results for the 617 m site demonstrated that cloud immersion in the Luquillo Mountains rarely occurs at the previously-reported cloud base elevation of about 600 m (11% during nighttime hours and 5% during daytime hours). The framework presented in this paper will be used to monitor at a low cost and high spatial resolution the long-term variability of cloud-immersion patterns in the Luquillo Mountains, and can be applied to ecohydrology research at other cloud-forest sites or in coastal ecosystems with advective sea fog.
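A sketch of how the feature-plus-clustering step described above could look, assuming each time-lapse frame is an RGB array scaled to [0, 1]. The exact feature definitions below (contrast, coefficient of variation, luminance entropy, colorfulness) are reasonable stand-ins for those named in the study, and scikit-learn's KMeans plays the role of the k-means step; none of this is the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def image_features(rgb):
    """Cloud-sensitive features from one frame (H x W x 3, floats in [0, 1]).
    Definitions here are illustrative assumptions."""
    lum = rgb.mean(axis=2)
    contrast = lum.max() - lum.min()
    cv = lum.std() / (lum.mean() + 1e-9)              # coefficient of variation
    hist, _ = np.histogram(lum, bins=64, range=(0, 1))
    p = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy of pixel luminance
    rg = rgb[..., 0] - rgb[..., 1]                    # simple colorfulness metric
    yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return np.array([contrast, cv, entropy, colorfulness])

def classify_immersion(frames):
    """Two-cluster k-means over per-image features; the low-contrast cluster
    is interpreted as cloud-immersed."""
    X = np.array([image_features(f) for f in frames])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    cloudy = np.argmin([X[labels == k][:, 0].mean() for k in (0, 1)])
    return labels == cloudy
```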
NASA Astrophysics Data System (ADS)
Champion, N.
2012-08-01
Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French Mapping Agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then finely delineated using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images, acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
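A minimal sketch of the two-step idea described above: seeds come from a pixel-wise comparison against a temporal reference (clouds raise reflectance), and a simple region-growing step refines the outlines. The reference (temporal minimum) and both threshold values are illustrative assumptions, not the agency's operational settings.

```python
import numpy as np
from scipy import ndimage

def detect_clouds(series, t_seed=0.25, t_grow=0.10):
    """series : (T, H, W) array of co-registered reflectance images.
    Returns a (T, H, W) boolean cloud mask. Thresholds are illustrative only."""
    reference = series.min(axis=0)                # crude cloud-free estimate per pixel
    masks = []
    for img in series:
        diff = img - reference
        seeds = diff > t_seed                     # strong reflectance increase: cloud seed
        candidates = diff > t_grow                # weaker increase: possible cloud edge
        labels, _ = ndimage.label(candidates)     # connected candidate regions
        keep = np.unique(labels[seeds])           # grow only regions containing a seed
        masks.append(np.isin(labels, keep[keep > 0]))
    return np.stack(masks)
```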
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and its inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
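As a point of reference for strategy (iv), here is a minimal point-to-point ICP in NumPy/SciPy that aligns one dense cloud to the other; it is a standard Kabsch-based ICP sketch and does not reproduce the paper's adapted plane-based variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Align `source` (N x 3) to `target` (M x 3); returns rotation R and
    translation t such that source @ R.T + t approximates target."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest target point per source point
        corr = target[idx]
        mu_s, mu_t = src.mean(axis=0), corr.mean(axis=0)
        H = (src - mu_s).T @ (corr - mu_t)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                            # Kabsch rotation
        if np.linalg.det(R) < 0:                  # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```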
NASA Astrophysics Data System (ADS)
Lawson, P.; Stamnes, K.; Stamnes, J.; Zmarzly, P.; O'Connor, D.; Koskulics, J.; Hamre, B.
2008-12-01
A tethered balloon system specifically designed to collect microphysical data in mixed-phase clouds was deployed in Arctic stratus clouds during May 2008 near Ny-Ålesund, Svalbard, at 79 degrees North Latitude. This is the first time a tethered balloon system with a cloud particle imager (CPI) that records high-resolution digital images of cloud drops and ice particles has been operated in cloud. The custom tether supplies electrical power to the instrument package, which in addition to the CPI houses a 4-pi short-wavelength radiometer and a met package that measures temperature, humidity, pressure, GPS position, wind speed and direction. The instrument package was profiled vertically through cloud up to altitudes of 1.6 km. Since power was supplied to the instrument package from the ground, it was possible to keep the balloon package aloft for extended periods of time, up to 9 hours at Ny-Ålesund, which was limited only by crew fatigue. CPI images of cloud drops and the sizes, shapes and degree of riming of ice particles are shown throughout vertical profiles of Arctic stratus clouds. The images show large regions of mixed-phase cloud from -8 to -2 C. The predominant ice crystal habits in these regions are needles and aggregates of needles. The amount of ice in the mixed-phase clouds varied considerably and did not appear to be a function of temperature. On some occasions, ice was observed near cloud base at -2 C with supercooled cloud above to -8 C that was devoid of ice. Measurements of shortwave radiation are also presented. Correlations between particle distributions and radiative measurements will be analyzed to determine the effect of these Arctic stratus clouds on radiative forcing.
Cloud cameras at the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Winnick, Michael G.
2010-06-01
This thesis presents the results of measurements made by infrared cloud cameras installed at the Pierre Auger Observatory in Argentina. These cameras were used to record cloud conditions during operation of the observatory's fluorescence detectors. As cloud may affect the measurement of fluorescence from cosmic ray extensive air showers, the cloud cameras provide a record of which measurements have been interfered with by cloud. Several image processing algorithms were developed, along with a methodology for the detection of cloud within infrared images taken by the cloud cameras. A graphical user interface (GUI) was developed to expedite this, as a large number of images need to be checked for cloud. A cross-check between images recorded by three of the observatory's cloud cameras is presented, along with a comparison with independent cloud measurements made by LIDAR. Despite the cloud cameras and LIDAR observing different areas of the sky, a good agreement is observed in the measured cloud fraction between the two instruments, particularly on very clear and overcast nights. Cloud information recorded by the cloud cameras, with cloud height information measured by the LIDAR, was used to identify those extensive air showers that were obscured by cloud. These events were used to study the effectiveness of standard quality cuts at removing cloud-afflicted events. Of all of the standard quality cuts studied in this thesis, the LIDAR cloud fraction cut was the most effective at preferentially removing cloud-obscured events. A 'cloudy pixel' veto is also presented, whereby cloud-obscured measurements are excluded during the standard hybrid analysis and new extensive air shower reconstruction parameters are determined. The application of such a veto would provide a slight increase to the number of events available for higher level analysis.
A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.
Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo
2015-01-01
The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images.
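A sketch of the anonymization step in the spirit of the poster, assuming the pydicom toolkit is available: patient identifiers are blanked, the de-identified metadata is kept as a document for a NoSQL store, and the pixel data is returned for upload to cloud storage. The tag list and document keys are illustrative and are not a complete de-identification profile.

```python
import uuid
import pydicom  # assumed available; any DICOM toolkit would serve the same purpose

def anonymize_for_cloud(path):
    """Return (metadata_document, pixel_bytes) for one DICOM file."""
    ds = pydicom.dcmread(path)
    pseudo_id = str(uuid.uuid4())                          # replaces the real identity
    for tag in ("PatientName", "PatientID", "PatientBirthDate", "PatientAddress"):
        if hasattr(ds, tag):                               # blank identifying tags (illustrative subset)
            setattr(ds, tag, "")
    document = {                                           # would be inserted into the NoSQL database
        "_id": ds.SOPInstanceUID,
        "pseudo_patient": pseudo_id,
        "modality": ds.get("Modality"),
        "study_uid": ds.get("StudyInstanceUID"),
        "series_uid": ds.get("SeriesInstanceUID"),
    }
    return document, ds.PixelData                          # metadata for the DB, bytes for the cloud
```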
Atmospheric Science Data Center
2013-04-22
article title: MISR Mystery Image Quiz #21: Actinoform Clouds. This mystery concerns a particular type of cloud, one example of which was imaged by the Multi-angle Imaging SpectroRadiometer (MISR).
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which avoids the abovementioned problems. Adopting a statistical model to detect clouds instead of a subjective thresholding-based method is the main idea of this study. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is therefore used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is therefore based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images; consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
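A minimal scikit-learn sketch of the threshold-free classification step: an RBF-kernel SVM trained on per-pixel feature vectors (ACCA/Fmask-style spectral tests, co-occurrence texture, temporal variance). The feature layout, hyperparameters, and label encoding are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_cloud_svm(features, labels):
    """features: (n_pixels, n_features) array of stacked ETM+-derived features.
    labels: integer classes, e.g. 0 = cloud, 1 = non-cloud, 2 = other."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(features, labels)
    return clf

# Usage sketch (hypothetical arrays):
#   model = train_cloud_svm(train_X, train_y)
#   predictions = model.predict(test_X)
```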
On the Cloud Observations in JAXA's Next Coming Satellite Missions
NASA Technical Reports Server (NTRS)
Nakajima, Takashi Y.; Nagao, Takashi M.; Letu, Husi; Ishida, Haruma; Suzuki, Kentaroh
2012-01-01
The use of JAXA's next generation satellites, the EarthCARE and the GCOM-C, for observing overall cloud systems on the Earth is discussed. The satellites will be launched in the mid-2010s and will contribute to observing aerosols and clouds for climate change, environmental monitoring, weather forecasting, and cloud evolution process studies. This paper describes the role of such satellites and how to use the observed data, showing concepts and some sample viewgraphs. Synergistic use of sensors is a key element of the study. Visible to infrared bands are used for discriminating cloudy from clear scenes in passively obtained satellite images. Cloud properties such as the cloud optical thickness, the effective particle radius, and the cloud-top temperature will be retrieved from visible to infrared wavelengths of the imagers. Additionally, we are going to combine cloud properties obtained from passive imagers with radar reflectivities obtained from an active radar in order to improve our understanding of the cloud evolution process. This is one of the new techniques of satellite data analysis for cloud science in the next decade. Since climate change research and cloud process studies have a mutually beneficial relationship, a multispectral wide-swath imager like the GCOM-C SGLI and a comprehensive cloud and aerosol observation package like EarthCARE are both necessary.
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
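Once the mesh geometry is fixed, the reconstruction described above is the familiar ML-EM iteration with a mesh-based system matrix A (rows = projection bins, columns = mesh nodes). A dense NumPy sketch of that update follows; a practical implementation would use a sparse A and the paper's exact system matrix, which are not reproduced here.

```python
import numpy as np

def mlem(system_matrix, projections, iterations=50):
    """ML-EM estimate of node intensities x from projection data y = A x + noise."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(projections, dtype=float)
    sensitivity = A.sum(axis=0)                    # per-node normalisation (column sums)
    x = np.ones(A.shape[1])                        # uniform initial node intensities
    for _ in range(iterations):
        forward = A @ x                            # expected counts in each projection bin
        ratio = y / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```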
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service
Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha
2017-01-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage. PMID:28884169
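To make the row-key idea concrete, here is an illustrative layout following the hierarchy named in the abstract: encoding project, subject, session, scan, and slice from most to least significant so that HBase's lexicographic row ordering keeps related slices collocated. Field widths, separators, and example values are assumptions, not the authors' exact scheme.

```python
def imaging_row_key(project, subject, session, scan, slice_index):
    """Build a hierarchy-preserving HBase row key (illustrative layout only)."""
    # Zero-padding the slice index keeps numeric and lexicographic order aligned.
    return "{}|{}|{}|{}|{:06d}".format(project, subject, session, scan, slice_index)

# Rows for one scan then sort contiguously, e.g. (hypothetical identifiers):
#   "ProjA|sub-0012|ses-01|T1w|000001"
#   "ProjA|sub-0012|ses-01|T1w|000002"
```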
Automated detection of Martian water ice clouds: the Valles Marineris
NASA Astrophysics Data System (ADS)
Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu
2016-10-01
We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations of water ice cloud occurrence and to meteorologically understand the climatology of water ice clouds. However, the visible images observed by Mars orbiters over several years are too many to inspect visually, even when the inspection is limited to one region. Therefore, an automated detection algorithm for Martian water ice clouds is necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated, and the same statistics were computed for the cross-correlation distribution. These eight statistics were used as feature vectors for training a Support Vector Machine, and its generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.
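A sketch of the eight-statistic feature vector described above, assuming two co-registered blue-band reflectance images as input. The FFT-based circular correlation surface used here is a crude stand-in for the paper's cross-correlation distribution; statistic choices follow the abstract, everything else is illustrative.

```python
import numpy as np
from scipy import stats

def cloud_features(image_a, image_b):
    """Return [max-mean, variance, kurtosis, skewness] for the difference image
    and for a simplified cross-correlation surface (eight values total)."""
    diff = image_a - image_b
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(image_a) * np.conj(np.fft.fft2(image_b))))
    feats = []
    for field in (diff, xcorr):
        v = field.ravel()
        feats += [v.max() - v.mean(), v.var(), stats.kurtosis(v), stats.skew(v)]
    return np.array(feats)   # feature vector for an SVM with 10-fold cross-validation
```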
RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.
Varghese, Blesson; Patel, Ishan; Barker, Adam
2015-01-01
Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as 'RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.
a Cloud Boundary Detection Scheme Combined with Aslic and Cnn Using ZY-3, GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Remote sensing optical image cloud detection is one of the most important problems in remote sensing data processing. To address the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to extract a multi-level feature generation model of clouds from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability of each superpixel belonging to the cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected to carry out the cloud detection test, and the results were compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection is increased by more than 5 %, and that both thin and thick clouds and the complete cloud boundary are detected well on different imaging platforms.
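A sketch of the superpixel aggregation step: scikit-image's standard SLIC stands in for the paper's adaptive SLIC, and the per-pixel cloud probabilities are assumed to come from an already-trained CNN. Parameter values are illustrative.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_cloud_map(image, pixel_prob, n_segments=2000):
    """image: (H, W, C) float array; pixel_prob: (H, W) CNN cloud probabilities.
    Returns a cloud probability map that is constant within each superpixel,
    which sharpens cloud boundaries relative to the raw per-pixel map."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    prob_map = np.zeros_like(pixel_prob)
    for label in np.unique(segments):
        mask = segments == label
        prob_map[mask] = pixel_prob[mask].mean()   # one probability per superpixel
    return prob_map
```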
Antarctica Cloud Cover for October 2003 from GLAS Satellite Lidar Profiling
NASA Technical Reports Server (NTRS)
Spinhirne, J. D.; Palm, S. P.; Hart, W. D.
2005-01-01
Seeing clouds in polar regions has been a problem for the imagers used on satellites. Both clouds and snow and ice are white, which makes clouds over snow hard to see, and in thermal infrared imaging both the surface and the clouds are cold. The Geoscience Laser Altimeter System (GLAS), launched in 2003, gives an entirely new way to see clouds from space. Pulses of laser light scatter from clouds, giving a signal that is separated in time from the signal from the surface. The scattering from clouds is thus a sensitive and direct measure of the presence and height of clouds. The GLAS instrument orbits over Antarctica 16 times a day. All of the cloud observations for October 2003 were summarized and compared to the results from the MODIS imager for the same month. There are two basic cloud types observed: low stratus with tops below 3 km, and high cirrus-form clouds with cloud-top altitude and thickness tending toward 12 km and 1.3 km, respectively. The average cloud cover varies from over 93% for ocean and coastal regions to an average of 40% over the East Antarctic plateau and 60-90% over West Antarctica. When the GLAS monthly average cloud fractions are compared to the MODIS cloud fraction data product, differences in the amount of cloud cover are as much as 40% over the continent. The results will be used to improve the way clouds are detected from the imager observations. These measurements give a much improved understanding of the distribution of clouds over Antarctica and may show how they are changing as a result of global warming.
2016-10-18
Pluto's present, hazy atmosphere is almost entirely free of clouds, though scientists from NASA's New Horizons mission have identified some cloud candidates after examining images taken by the New Horizons Long Range Reconnaissance Imager and Multispectral Visible Imaging Camera, during the spacecraft's July 2015 flight through the Pluto system. All are low-lying, isolated small features -- no broad cloud decks or fields -- and while none of the features can be confirmed with stereo imaging, scientists say they are suggestive of possible, rare condensation clouds. http://photojournal.jpl.nasa.gov/catalog/PIA21127
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.
HoloGondel: in situ cloud observations on a cable car in the Swiss Alps using a holographic imager
NASA Astrophysics Data System (ADS)
Beck, Alexander; Henneberger, Jan; Schöpfer, Sarah; Fugal, Jacob; Lohmann, Ulrike
2017-02-01
In situ observations of cloud properties in complex alpine terrain, where research aircraft cannot sample, are commonly conducted at mountain-top research stations and limited to single-point measurements. The HoloGondel platform overcomes this limitation by using a cable car to obtain vertical profiles of the microphysical and meteorological cloud parameters. The main component of the HoloGondel platform is the HOLographic Imager for Microscopic Objects (HOLIMO 3G), which uses digital in-line holography to image cloud particles. Based on the two-dimensional images, the microphysical cloud parameters for the size range from small cloud particles to large precipitation particles are obtained for the liquid and ice phase. The low traveling velocity of a cable car, on the order of 10 m s-1, allows measurements with high spatial resolution; however, at the same time it leads to an unstable air speed towards the HoloGondel platform. Holographic cloud imagers, which have a sample volume that is independent of the air speed, are therefore well suited for measurements on a cable car. Example measurements of the vertical profiles observed in a liquid cloud and a mixed-phase cloud at the Eggishorn in the Swiss Alps in the winters of 2015 and 2016 are presented. The HoloGondel platform reliably observes cloud droplets larger than 6.5 µm, partitions between cloud droplets and ice crystals for sizes larger than 25 µm, and obtains a statistically significant size distribution for every 5 m of vertical ascent.
Cloud Infrastructures for In Silico Drug Discovery: Economic and Practical Aspects
Clematis, Andrea; Quarati, Alfonso; Cesini, Daniele; Milanesi, Luciano; Merelli, Ivan
2013-01-01
Cloud computing opens new perspectives for small-medium biotechnology laboratories that need to perform bioinformatics analysis in a flexible and effective way. This seems particularly true for hybrid clouds that couple the scalability offered by general-purpose public clouds with the greater control and ad hoc customizations supplied by the private ones. A hybrid cloud broker, acting as an intermediary between users and public providers, can support customers in the selection of the most suitable offers, optionally adding the provisioning of dedicated services with higher levels of quality. This paper analyses some economic and practical aspects of exploiting cloud computing in a real research scenario for the in silico drug discovery in terms of requirements, costs, and computational load based on the number of expected users. In particular, our work is aimed at supporting both the researchers and the cloud broker delivering an IaaS cloud infrastructure for biotechnology laboratories exposing different levels of nonfunctional requirements. PMID:24106693
2017-12-08
A vigorous summer fire season continued through July, 2013 as many large wildfires continued to burn in the forests of northern Canada. The high fire activity not only laid waste to thousands of hectares of boreal forest, but sent thick smoke billowing high into the atmosphere, where it was carried far across the Atlantic Ocean. On July 30, the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA’s Aqua satellite captured this true-color image of a river of smoke spreading south across the Hudson Bay. The blue background is formed by the waters of Hudson Bay. In the southeast the green, forest-covered land of Quebec province peeks from under a large cloud bank. Another large bank of white cloud covers the water in the southwest, and a smaller cloud bank covers the territory of Nunavut in the northwest. A bit of Baffin Island can be seen near the top center of the image. Looking closely at the image, it appears that the gray smoke mixes with whiter cloud in the south, suggesting they may be at the same level in the atmosphere. In the northeast corner of the image, a ribbon of smoke appears to blow over a bank of popcorn clouds as well as over a few lower-lying clouds, causing some of the clouds to appear gray beneath the smoky veil. Where cloud meets smoke in the northeast, however, the line of the cloud bank remains sharp, while the smoke appears to continue traveling under the edge. Although these interpretations are somewhat subjective in this true-color image, the false-color image of the same scene (not shown here) lends strength to the interpretation. Data from other NASA instruments, designed to measure cloud height and characteristics, agree that clouds vary in height, and that smoke mingles with cloud in the south. Credit: NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team
NASA Astrophysics Data System (ADS)
Lantz, K. O.; Long, C. S.; Buller, D.; Berwick, M.; Buller, M.; Kane, I.; Shane, J.
2012-12-01
The UV Index (UVI) is a measure of the skin-damaging UV radiation levels at the Earth's surface. Clouds, haze, air pollution, total ozone, surface elevation, and ground reflectivity affect the levels of UV radiation reaching the ground. The global UV Index was developed as a simple tool to educate the public about taking precautions when exposed to UV radiation to avoid sunburn, which has been linked to the development of skin cancer. The purpose of this study was to validate an algorithm to modify a cloud-free UV Index forecast for cloud conditions as observed by adults in real time. The cloud attenuation algorithm is used in a smart-phone application to modify a clear-sky UV Index forecast. In the United States, the Climate Prediction Center of the National Oceanic and Atmospheric Administration (NOAA) issues a daily UV Index forecast. The NOAA UV Index is an hourly forecast for a 0.5 x 0.5 degree area and thus has a degree of uncertainty. Cloud cover varies temporally and spatially over short times and distances as weather conditions change and can have a large impact on the UV radiation. The smart-phone application uses the cloud-based UV Index forecast as the default but allows the user to modify a cloud-free UV Index forecast when the predicted sky conditions do not match observed conditions. Eighty-four (n=84) adults were recruited to participate in the study through advertisements posted online and in a university e-newsletter. Adults were screened for eligibility (i.e., 18 or older, capable of traveling to the test site, and having a smart phone with a data plan to access the online observation form). A sky observation measure was created to assess cloud fraction. The adult volunteers selected from among four photographs the image that best matched the cloud conditions they observed. Images depicted no clouds (clear sky), thin high clouds, partly cloudy sky, and thick clouds (sky completely overcast). When thin high clouds or partly cloudy images were selected, adults estimated the percentage of the sky covered by clouds. Cloud fraction was calculated by assigning 0% if the clear-sky image was selected, 100% if the overcast thick cloud image was selected, and 10% to 90% as indicated by adults if high thin clouds or partly cloudy images were selected. The observed cloud fraction from the adult volunteers was compared to the cloud fraction determined by a Total Sky Imager. A cloud modification factor based on the observed cloud fraction was applied to the cloud-free UV Index forecast. This result was compared to the NOAA cloudy sky UV Index forecast and to the concurrent UV Index measurements from three broadband UV radiometers and a Brewer spectrophotometer calibrated using NIST traceable standards.
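A minimal sketch of the adjustment step: the mapping from the user's photograph choice to cloud fraction follows the study's description, while the linear cloud modification factor below is purely a placeholder and does not reproduce the validated attenuation algorithm.

```python
def modified_uv_index(clear_sky_uvi, observation, partial_fraction=None):
    """observation: 'clear', 'thin_high', 'partly_cloudy', or 'overcast'.
    partial_fraction: 0.1-0.9 supplied by the user for the two partial cases."""
    if observation == "clear":
        cloud_fraction = 0.0
    elif observation == "overcast":
        cloud_fraction = 1.0
    else:
        cloud_fraction = partial_fraction          # user-estimated 10%-90% coverage
    cmf = 1.0 - 0.5 * cloud_fraction               # illustrative attenuation only
    return clear_sky_uvi * cmf

# Example: under this toy CMF, an overcast observation halves a forecast UVI of 8 to 4.
```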
Smoke From Canadian Wildfires Trapped in Clouds
2017-12-08
NASA's Aqua satellite captured this image of the clouds over Canada. Entwined within the clouds is the smoke billowing up from the wildfires that are currently burning across a large expanse of the country. The smoke has become entrained within the clouds causing it to twist within the circular motion of the clouds and wind. This image was taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on the Aqua satellite on May 9, 2016. Image Credit: NASA image courtesy Jeff Schmaltz LANCE/EOSDIS MODIS Rapid Response Team, GSFC
NASA Astrophysics Data System (ADS)
Coddington, Odele; Platnick, Steven; Pilewskie, Peter; Schmidt, Sebastian
2016-04-01
The NASA Pre-Aerosol, Cloud and ocean Ecosystem (PACE) Science Definition Team (SDT) report released in 2012 defined imager stability requirements for the Ocean Color Instrument (OCI) at the sub-percent level. While the instrument suite and measurement requirements are currently being determined, the PACE SDT report provided details on imager options and spectral specifications. The options for a threshold instrument included a hyperspectral imager from 350-800 nm, two near-infrared (NIR) channels, and three short wave infrared (SWIR) channels at 1240, 1640, and 2130 nm. Other instrument options include a variation of the threshold instrument with 3 additional spectral channels at 940, 1378, and 2250 nm and the inclusion of a spectral polarimeter. In this work, we present cloud retrieval information content studies of optical thickness, droplet effective radius, and thermodynamic phase to quantify the potential for continuing the low cloud climate data record established by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) missions with the PACE OCI instrument (i.e., non-polarized cloud reflectances and in the absence of midwave and longwave infrared channels). The information content analysis is performed using the GEneralized Nonlinear Retrieval Analysis (GENRA) methodology and the Collection 6 simulated cloud reflectance data for the common MODIS/VIIRS algorithm (MODAWG) for Cloud Mask, Cloud-Top, and Optical Properties. We show that using both channels near 2 microns improves the probability of cloud phase discrimination with shortwave-only cloud reflectance retrievals. Ongoing work will extend the information content analysis, currently performed for dark ocean surfaces, to different land surface types.
Cloud computing applications for biomedical science: A perspective.
Navale, Vivek; Bourne, Philip E
2018-06-01
Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.
ERIC Educational Resources Information Center
Johnson, Doug
2010-01-01
Web-based applications offer teachers, students, and school districts a convenient way to accomplish a wide range of tasks, from accounting to word processing, for free. Cloud computing has the potential to offer staff and students better services at a lower cost than the technology deployment models they're using now. Saving money and improving…
1997-09-08
Scientists have spotted what appear to be thunderheads on Jupiter - bright white cumulus clouds similar to those that bring thunderstorms on Earth - at the outer edges of Jupiter's Great Red Spot. Images from NASA's Galileo spacecraft now in orbit around Jupiter are providing new evidence that thunderstorms may be an important source of energy for Jupiter's winds that blow at more than 500 kilometers per hour (about 300 miles per hour). The photos were taken by Galileo's solid state imager camera on June 26, 1996 at a range of about 1.4 million kilometers (about 860,000 miles). The image at top is a mosaic of multiple images taken through near-infrared filters. False coloring in the image reveals cloud-top heights. High, thick clouds are white and high, thin clouds are pink. Low-altitude clouds are blue. The two black-and-white images at bottom are enlargements of the boxed area; the one on the right was taken 70 minutes after the image on the left. The arrows show where clouds have formed or dissipated in the short time between the images. The smallest clouds are tens of kilometers across. On Earth, moist convection in thunderstorms is a pathway through which solar energy, deposited at the surface, is transported and delivered to the atmosphere. Scientists at the California Institute of Technology analyzing data from Galileo believe that water, the most likely candidate for what composes these clouds on Jupiter, may be more abundant at the site seen here than at the Galileo Probe entry site, which was found to be unexpectedly dry. http://photojournal.jpl.nasa.gov/catalog/PIA00506
NASA Astrophysics Data System (ADS)
Spanò, A.; Chiabrando, F.; Sammartano, G.; Teppati Losè, L.
2018-05-01
The paper explores the suitability and applicability of advanced integrated surveying techniques, mainly image-based approaches compared with and integrated into range-based ones, developed with cutting-edge solutions tested in the field. The investigated techniques integrate both technological devices for 3D data acquisition and the editing and management systems needed to handle metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during archaeological excavation activities. These approaches were tested at the outstanding site of the ancient city of Hierapolis of Phrygia (Turkey), following the 2017 surveying missions, in order to produce high-scale metric deliverables in the form of highly detailed Digital Surface Models (DSM), 3D continuous surface models and high-resolution orthoimage products. In particular, the potential of UAV platforms for low-altitude acquisitions in an aerial photogrammetric approach, together with terrestrial panoramic acquisitions (Trimble V10 imaging rover), has been investigated through a comparison with consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by the technologies used independently and in integrated approaches. A section of the study, in fact, is specifically dedicated to experimenting with the union of dense clouds from different sensors: dense clouds derived from UAV imagery have been integrated with terrestrial LiDAR clouds to evaluate their fusion. Different test cases have been considered, representing typical situations that can be encountered in archaeological sites.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2 meter GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of an image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel re-examination, and a cross-band filter method are implemented in sequence for cloud statistic determination. For the post-processing analysis, a box-counting fractal method is implemented. In other words, the cloud statistics are first determined via the pre-processing analysis, and their correctness across different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's method, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE), for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
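Since Otsu's method is the thresholding step the paper found most effective, here is a plain NumPy sketch of the standard algorithm (choose the grey level maximizing between-class variance of the band histogram) together with the resulting cloud percentage. The bin count and the cloud statistic definition are illustrative, not the paper's exact configuration.

```python
import numpy as np

def otsu_threshold(band, bins=256):
    """Standard Otsu threshold on a single reflectance band."""
    hist, edges = np.histogram(band, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                          # class probability below the threshold
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[np.argmax(between)]

def cloud_percentage(band, threshold):
    """Cloud statistic as the fraction of pixels brighter than the threshold."""
    return 100.0 * float((band > threshold).mean())
```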
1999-12-01
A panoramic view of a vast, sculpted area of gas and dust where thousands of stars are being born has been captured by NASA's Hubble Space Telescope. The image, taken by Hubble's Wide Field and Planetary Camera 2, is online at http://hubblesite.org/newscenter/archive/releases/2001/21/image/a/. The camera was designed and built by NASA's Jet Propulsion Laboratory, Pasadena, Calif. The photo offers an unprecedented, detailed view of the entire inner region of the fertile, star-forming 30 Doradus Nebula. The mosaic picture shows that ultraviolet radiation and high-speed material unleashed by the stars in the cluster, called R136 (the large blue blob left of center), are weaving a tapestry of creation and destruction, triggering the collapse of looming gas and dust clouds and forming pillar-like structures that incubate newborn stars. The 30 Doradus Nebula is in the Large Magellanic Cloud, a satellite galaxy of the Milky Way located 170,000 light-years from Earth. Nebulas like 30 Doradus are signposts of recent star birth. High-energy ultraviolet radiation from young, hot, massive stars in R136 causes surrounding gaseous material to glow. Previous Hubble telescope observations showed that R136 contains several dozen of the most massive stars known, each about 100 times the mass of the Sun and about 10 times as hot. These stellar behemoths formed about 2 million years ago. The stars in R136 produce intense "stellar winds," streams of material traveling at several million miles an hour. These winds push the gas away from the cluster and compress the inner regions of the surrounding gas and dust clouds (seen in the image as the pinkish material). The intense pressure triggers the collapse of parts of the clouds, producing a new star formation around the central cluster. Most stars in the nursery are not visible because they are still encased in cocoons of gas and dust. This mosaic image of 30 Doradus consists of five overlapping pictures taken between January 1994 and September 2000 by the Wide Field and Planetary Camera 2. Several color filters enhance important details in the stars and the nebula. Blue corresponds to the hot stars. The greenish color denotes hot gas energized by the central cluster of stars. Pink depicts the glowing edges of the gas and dust clouds facing the cluster, which are being bombarded by winds and radiation. Reddish-brown represents the cooler surfaces of the clouds, which are not receiving direct radiation from the central cluster. http://photojournal.jpl.nasa.gov/catalog/PIA04200
Jupiter's Colorful Cloud Belts
2018-01-12
Colorful swirling cloud belts dominate Jupiter's southern hemisphere in this image captured by NASA's Juno spacecraft. Jupiter appears in this color-enhanced image as a tapestry of vibrant cloud bands and storms. The dark region in the far left is called the South Temperate Belt. Intersecting the belt is a ghost-like feature of slithering white clouds. This is the largest feature in Jupiter's low latitudes that's a cyclone (rotating with clockwise motion). This image was taken on Dec. 16, 2017 at 10:12 PST (1:12 p.m. EST), as Juno performed its tenth close flyby of Jupiter. At the time the image was taken, the spacecraft was about 8,453 miles (13,604 kilometers) from the tops of the clouds of the planet at a latitude of 27.9 degrees south. The spatial scale in this image is 5.6 miles/pixel (9.1 kilometers/pixel). Citizen scientist Kevin M. Gill processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21974
Probing Storm Activity on Jupiter
NASA Technical Reports Server (NTRS)
2007-01-01
Scientists assume Jupiter's clouds are composed primarily of ammonia, but only about 1% of the cloud area displays the characteristic spectral fingerprint of ammonia. This composite of infrared images taken by the New Horizons Linear Etalon Infrared Spectral Imager (LEISA) captures several eruptions of this relatively rare breed of ammonia cloud and follows the evolution of the clouds over two Jovian days. (One day on Jupiter is approximately 10 hours, which is how long it takes Jupiter to make one complete rotation about its axis.) The New Horizons spacecraft was still closing in on the giant planet when it made these observations: Jupiter was 3.4 million kilometers (2.1 million miles) from the New Horizons spacecraft for the LEISA image taken at 19:35 Universal Time on February 26, 2007, and the distance decreased to 2.5 million kilometers (1.6 million miles) for the last image shown. LEISA's spatial resolution scale varied from approximately 210 kilometers (130 miles) for the first image to 160 kilometers (100 miles) for the last one. New Horizons scientists originally targeted the region slightly northwest (up and to the left) of the Great Red Spot to search for these special ammonia clouds because that's where they were most easily seen during infrared spectral observations made by the Galileo spacecraft. But unlike the churning, turbulent cloud structures seen near the Great Red Spot during the Galileo era, this region has been quieting down during the past several months and was unusually tranquil when New Horizons passed by. Nevertheless, LEISA managed to find other regions of fresh, upwelling ammonia clouds, and the temporal evolution of one such region is displayed in this figure. In the first image, a fresh ammonia cloud (the blue region) sprouts from between white clouds and a dark elongated region. This blue cloud subsequently stretches along the white-dark border in the next two images. These fresh ammonia clouds trace the strong upwelling of gases from the largely hidden depths of Jupiter to higher altitudes. Presumably, water is also being dragged up from below, and the subsequent condensation of that water, which is far more abundant than ammonia in Jupiter's atmosphere, into cloud droplets energizes the lower troposphere. LEISA produces images at infrared wavelengths, which is heat radiation that cannot be sensed by the human eye. These 'false color' images were produced by putting images of Jupiter at wavelengths of 1.99 micrometers, 1.94 micrometers and 2.04 micrometers into the red, green and blue channels, respectively, of the image display. Ammonia has an absorption feature at 1.99 microns, and when the colors are combined in this way the fresh ammonia clouds take on a bluish hue.
2017-11-16
This color-enhanced image of a massive, raging storm in Jupiter's northern hemisphere was captured by NASA's Juno spacecraft during its ninth close flyby of the gas giant planet. The image was taken on Oct. 24, 2017 at 10:32 a.m. PDT (1:32 p.m. EDT). At the time the image was taken, the spacecraft was about 6,281 miles (10,108 kilometers) from the tops of the clouds of Jupiter at a latitude of 41.84 degrees north. The spatial scale in this image is 4.2 miles/pixel (6.7 kilometers/pixel). The storm is rotating counter-clockwise with a wide range of cloud altitudes. The darker clouds are expected to be deeper in the atmosphere than the brightest clouds. Within some of the bright "arms" of this storm, smaller clouds and banks of clouds can be seen, some of which are casting shadows to the right side of this picture (sunlight is coming from the left). The bright clouds and their shadows range from approximately 4 to 8 miles (7 to 12 kilometers) in both width and length. These appear similar to the small clouds in other bright regions Juno has detected and are expected to be updrafts of ammonia ice crystals possibly mixed with water ice. Citizen scientists Gerald Eichstädt and Seán Doran processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21971
NASA Technical Reports Server (NTRS)
1997-01-01
Clouds and hazes at various altitudes within the dynamic Jovian atmosphere are revealed in multi-color images taken by the Near-Infrared Mapping Spectrometer (NIMS) onboard the Galileo spacecraft. These images were taken during the second orbit (G2) on September 5, 1996 from an early-morning vantage point 2.1 million kilometers (1.3 million miles) above Jupiter. They show the planet's appearance as viewed at various near-infrared wavelengths, with distinct differences due primarily to variations in the altitudes and opacities of the cloud systems. The top left and right images, taken at 1.61 microns and 2.73 microns respectively, show relatively clear views of the deep atmosphere, with clouds visible down to a level where the pressure is about three times that at the Earth's surface.
By contrast, the middle image in the top row, taken at 2.17 microns, shows only the highest-altitude clouds and hazes. This wavelength is severely affected by the absorption of light by hydrogen gas, the main constituent of Jupiter's atmosphere. Therefore, only the Great Red Spot, the highest equatorial clouds, a small feature at mid-northern latitudes, and thin, high photochemical polar hazes can be seen. In the lower left image, at 3.01 microns, deeper clouds can be seen dimly against gaseous ammonia and methane absorption. In the lower middle image, at 4.99 microns, the light observed is the planet's own indigenous heat from the deep, warm atmosphere. The false color image (lower right) succinctly shows the various cloud and haze levels seen in the Jovian atmosphere, indicating the temperature and altitude at which the observed light is produced. Thermally-rich red areas denote high temperatures from photons in the deep atmosphere leaking through minimal cloud cover; green denotes the cool temperatures of the tropospheric clouds; blue denotes the cold of the upper troposphere and lower stratosphere. The polar regions appear purplish because small-particle hazes allow leakage and reflectivity, while yellowish regions at temperate latitudes may indicate tropospheric clouds with small particles which also allow leakage. A mix of high- and low-altitude aerosols causes the aqua appearance of the Great Red Spot and equatorial region. The Jet Propulsion Laboratory manages the Galileo mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web Galileo mission home page at http://galileo.jpl.nasa.gov.
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are eventually delineated using a region-growing method based on radiometric and homogeneity criteria. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker than in the other images of the time series. The detection is basically composed of three steps. First, we compute a synthetic ortho-image covering the whole study area; its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
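The thresholding logic at the core of this detection scheme can be sketched in a few lines of Python. This is a simplified illustration, not IGN's implementation: it assumes the ortho-images are already co-registered into a single reflectance stack, approximates the pairwise overlap test with a temporal median, omits the region-growing refinement, and uses invented band indices and thresholds.

```python
import numpy as np

def detect_clouds_and_shadows(stack, blue=0, nir=3,
                              cloud_thresh=0.15, shadow_thresh=-0.08):
    """Flag cloud and shadow seeds in a co-registered reflectance time series.

    stack : float array of shape (n_images, height, width, n_bands)
            holding surface reflectance ortho-images of the same area.
    Returns two boolean arrays of shape (n_images, height, width).
    """
    # Cloud seeds: pixels much brighter in the blue band than the
    # temporal median of the acquisitions.
    blue_stack = stack[..., blue]
    blue_median = np.median(blue_stack, axis=0)
    clouds = (blue_stack - blue_median) > cloud_thresh

    # Synthetic "cloud-free" composite: per-pixel median of the NIR band,
    # ignoring pixels already flagged as cloud.
    nir_stack = np.where(clouds, np.nan, stack[..., nir])
    nir_median = np.nanmedian(nir_stack, axis=0)

    # Shadow seeds: pixels much darker in the NIR band than the composite.
    shadows = (stack[..., nir] - nir_median) < shadow_thresh
    return clouds, shadows
```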
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Fei; Zhen, Zhao; Liu, Chun
2017-12-18
Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. Linear extrapolation-based ultra-short-term solar PV power forecasting approaches therefore rely on obtaining the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method from the frequency-domain information of the sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) under common changes in cloud speed. Sometimes there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculations is extremely complex and difficult because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-timescale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive image pairs obtained through different synchronous rotation angles relative to the original images using the FPCT method. Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on their density and distance distribution. Third, the influence of different rotation angle resolutions on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick and thin cloud conditions, indicate that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.
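The FPCT core of the method, recovering a displacement from the peak of the inverse-transformed cross-power spectrum, can be sketched as follows. This is a generic phase-correlation routine assuming two grayscale sky frames of equal size; the IPSI extension described in the abstract would call it repeatedly on rotated copies of the image pair and combine the resulting CMDVs with a centroid iteration.

```python
import numpy as np

def phase_correlation_cmdv(img1, img2):
    """Estimate the cloud motion displacement vector (dy, dx) between two
    consecutive grayscale sky images via Fourier phase correlation."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    # Normalised cross-power spectrum; its inverse FFT is a pulse whose
    # location gives the translation between the two images.
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap displacements larger than half the image size to negative shifts.
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dy, dx
```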
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
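As a rough illustration of the kind of pipeline involved (not the authors' HDRCloudSeg algorithm), the sketch below merges a bracketed exposure sequence into an HDR radiance map with OpenCV's Debevec routines and then applies a simple red/blue ratio threshold, a common heuristic for separating cloud from clear sky. The threshold value and band ordering are assumptions.

```python
import cv2
import numpy as np

def hdr_cloud_mask(images, exposure_times, ratio_thresh=0.65):
    """Build an HDR radiance map from multi-exposure sky images and derive
    a rough cloud mask from it.

    images         : list of 8-bit BGR frames of the same scene
    exposure_times : exposure times in seconds, one per frame
    """
    times = np.asarray(exposure_times, dtype=np.float32)
    # Recover the camera response curve and merge into an HDR radiance map.
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(images, times)
    merge = cv2.createMergeDebevec()
    hdr = merge.process(images, times, response)  # float32 radiance, BGR

    # Clouds scatter red and blue roughly equally, clear sky is much bluer,
    # so a red/blue ratio threshold gives a crude segmentation.
    blue, red = hdr[..., 0], hdr[..., 2]
    ratio = red / (blue + 1e-6)
    return ratio > ratio_thresh
```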
Marine Layer Clouds off the California Coast
2017-12-08
On September 27, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this nighttime view of low-lying marine layer clouds along the coast of California. The image was captured by the VIIRS "day-night band," which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as gas flares, auroras, wildfires, city lights, and reflected moonlight. An irregularly-shaped patch of high clouds hovers off the coast of California, and moonlight caused the high clouds to cast distinct shadows on the marine layer clouds below. VIIRS acquired the image when the Moon was in its waxing gibbous phase, meaning it was more than half-lit, but less than full. Low clouds pose serious hazards for air and ship traffic, but satellites have had difficulty detecting them in the past. To illustrate this, the second image shows the same scene in thermal infrared, the band that meteorologists generally use to monitor clouds at night. Only high clouds are visible; the low clouds do not show up at all because they are roughly the same temperature as the ground. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Adam Voiland. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory
Tropical Depression 6 (Florence) in the Atlantic
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed for brevity; see original site for the microwave and visible-light images.]
These infrared, microwave, and visible images were created with data retrieved by the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite. Infrared Image Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or, in cloud-free regions, the temperature of the surface of the Earth. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument will receive the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). Microwave Image AIRS data used to create the microwave images come from the microwave radiation emitted by Earth's atmosphere which is then received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through. Vis/NIR Image The AIRS instrument suite contains a sensor that captures light in the visible/near-infrared portion of the electromagnetic spectrum. These 'visible' images are similar to a snapshot taken with your camera. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Study on ice cloud optical thickness retrieval with MODIS IR spectral bands
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Jun
2005-01-01
The operational Moderate-Resolution Imaging Spectroradiometer (MODIS) products for cloud properties such as cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), cloud optical thickness (COT), and cloud phase (CP) have been available for users globally. An approach to retrieve COT is investigated using MODIS infrared (IR) window spectral bands (8.5 μm, 11 μm, and 12 μm). The COT retrieval from MODIS IR bands has the potential to provide microphysical properties with high spatial resolution at night. The results are compared with those from operational MODIS products derived from the visible (VIS) and near-infrared (NIR) bands during day. The sensitivity of COT to MODIS spectral brightness temperature (BT) and BT difference (BTD) values is studied. A look-up table for the cloud microphysical property retrieval is created from a cloudy radiative transfer model that accounts for cloud absorption and scattering. The potential applications and limitations are also discussed. This algorithm can be applied to future imager systems such as the Visible/Infrared Imager/Radiometer Suite (VIIRS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellite (GOES)-R.
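A minimal sketch of the look-up-table inversion step is shown below. It is purely illustrative: the table values are invented, and it inverts the 11-μm brightness temperature alone, whereas the retrieval described here uses BT and BTD jointly against tables stratified by cloud height, particle size, and viewing geometry.

```python
import numpy as np

# Illustrative single-variable look-up table: 11-micron brightness
# temperature (K) simulated by a cloudy radiative transfer model for a
# fixed cloud height, particle size and viewing geometry.  Values are
# invented for demonstration only.
cot_grid = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 4.0, 8.0])
bt11_grid = np.array([268.0, 262.0, 256.0, 245.0, 232.0, 222.0, 218.0])

def retrieve_cot(observed_bt11):
    """Invert an observed 11-micron BT to COT by interpolating the LUT.
    bt11_grid decreases with COT, so both arrays are flipped for np.interp."""
    return np.interp(observed_bt11, bt11_grid[::-1], cot_grid[::-1])

print(retrieve_cot(250.0))  # roughly 0.8 with this illustrative table
```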
Mean winds at the cloud top of Venus obtained from two-wavelength UV imaging by Akatsuki
NASA Astrophysics Data System (ADS)
Horinouchi, Takeshi; Kouyama, Toru; Lee, Yeon Joo; Murakami, Shin-ya; Ogohara, Kazunori; Takagi, Masahiro; Imamura, Takeshi; Nakajima, Kensuke; Peralta, Javier; Yamazaki, Atsushi; Yamada, Manabu; Watanabe, Shigeto
2018-01-01
Venus is covered with thick clouds. Ultraviolet (UV) images at 0.3-0.4 microns show detailed cloud features at the cloud-top level at about 70 km, which are created by an unknown UV-absorbing substance. Images acquired in this wavelength range have traditionally been used to measure winds at the cloud top. In this study, we report low-latitude winds obtained from the images taken by the UV imager, UVI, onboard the Akatsuki orbiter from December 2015 to March 2017. UVI provides images with two filters centered at 365 and 283 nm. While the 365-nm images enable continuation of traditional Venus observations, the 283-nm images visualize cloud features at an SO2 absorption band, which is novel. We used a sophisticated automated cloud-tracking method and thorough quality control to estimate winds with high precision. Horizontal winds obtained from the 283-nm images are generally similar to those from the 365-nm images, but in many cases, westward winds from the former are faster than those from the latter by a few m/s. From previous studies, one can argue that the 283-nm images likely reflect cloud features at higher altitude than the 365-nm images. If this is the case, the superrotation of the Venusian atmosphere generally increases with height at the cloud-top level, where it has been thought to roughly peak. The mean winds obtained from the 365-nm images exhibit local time dependence consistent with known tidal features. Mean zonal winds exhibit asymmetry with respect to the equator in the latter half of the analysis period, significantly at 365 nm and weakly at 283 nm. This contrast indicates that the relative altitude may vary with time and latitude, and so may the observed altitudes. In contrast, mean meridional winds do not exhibit much long-term variability. A previous study suggested that the geographic distribution of temporal mean zonal winds obtained from UV images from the Venus Express orbiter during 2006-2012 can be interpreted as forced by topographically induced stationary gravity waves. However, the geographic distribution of temporal mean zonal winds we obtained is not consistent with that distribution, which suggests that the distribution may not be persistent.
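Cloud tracking of this kind reduces to locating a feature from one image in a later image and converting the pixel displacement into a velocity. The sketch below uses plain normalized cross-correlation template matching rather than the authors' tracking and quality-control scheme; the pixel scale, time separation, and window sizes are placeholder values, and the images are assumed to be map-projected with north up and east to the right.

```python
import cv2
import numpy as np

def track_cloud_winds(img_t0, img_t1, box=64, search=32,
                      km_per_pixel=15.0, dt_seconds=7200.0):
    """Estimate zonal/meridional wind components (m/s) by tracking a cloud
    feature between two images separated by dt_seconds.

    A template centred in img_t0 is located inside a larger search window
    of img_t1 using normalised cross-correlation."""
    h, w = img_t0.shape
    cy, cx = h // 2, w // 2  # assumes the feature sits away from the edges
    template = img_t0[cy - box // 2: cy + box // 2,
                      cx - box // 2: cx + box // 2]
    window = img_t1[cy - box // 2 - search: cy + box // 2 + search,
                    cx - box // 2 - search: cx + box // 2 + search]

    scores = cv2.matchTemplate(window.astype(np.float32),
                               template.astype(np.float32),
                               cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(scores)
    dx = px - search          # eastward displacement in pixels
    dy = py - search          # southward displacement in pixels

    u = dx * km_per_pixel * 1000.0 / dt_seconds   # zonal wind (m/s)
    v = -dy * km_per_pixel * 1000.0 / dt_seconds  # meridional wind (m/s)
    return u, v
```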
Validation of Cloud Properties From Multiple Satellites Using CALIOP Data
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Bedka, Kristopher M.; Heck, Patrick W.; Palikonda, Rabindra; Sun-Mack, Sunny; Trepte, Qing
2016-01-01
The NASA Langley Satellite ClOud and Radiative Property retrieval System (SatCORPS) is routinely applied to multispectral imagery from several geostationary and polar-orbiting imagers to retrieve cloud properties for weather and climate applications. Validation of the retrievals with independent datasets is continuously ongoing in order to understand differences caused by calibration, spatial resolution, viewing geometry, and other factors. The CALIOP instrument provides a decade of detailed cloud observations which can be used to evaluate passive imager retrievals of cloud boundaries, thermodynamic phase, cloud optical depth, and water path on a global scale. This paper focuses on comparisons of CALIOP retrievals to retrievals from MODIS, VIIRS, AVHRR, GOES, SEVIRI, and MTSAT. CALIOP is particularly skilled at detecting weakly-scattering cirrus clouds with optical depths less than approx. 0.5. These clouds are often undetected by passive imagers and the effect this has on the property retrievals is discussed.
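A comparison of this sort is typically reduced to a few summary statistics over collocated footprints. The sketch below is a generic example, not the SatCORPS validation code; the inputs are assumed to be already matched footprint by footprint.

```python
import numpy as np

def validate_cloud_mask(imager_cloudy, caliop_cloudy, imager_cth, caliop_cth):
    """Compare collocated passive-imager retrievals against CALIOP.

    All inputs are 1-D arrays over matched footprints: boolean cloud flags
    and cloud-top heights (km, NaN where clear or not retrieved)."""
    hits = np.sum(imager_cloudy & caliop_cloudy)
    misses = np.sum(~imager_cloudy & caliop_cloudy)
    false_alarms = np.sum(imager_cloudy & ~caliop_cloudy)

    detection_rate = hits / max(hits + misses, 1)
    false_alarm_rate = false_alarms / max(hits + false_alarms, 1)

    # Cloud-top height bias where both instruments report a cloud.
    both = (imager_cloudy & caliop_cloudy
            & np.isfinite(imager_cth) & np.isfinite(caliop_cth))
    cth_bias = np.mean(imager_cth[both] - caliop_cth[both]) if both.any() else np.nan
    return detection_rate, false_alarm_rate, cth_bias
```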
Venus Cloud Patterns (colorized and filtered)
NASA Technical Reports Server (NTRS)
1990-01-01
This picture of Venus was taken by the Galileo spacecraft's Solid State Imaging System on February 14, 1990, at a range of almost 1.7 million miles from the planet. A highpass spatial filter has been applied in order to emphasize the smaller scale cloud features, and the rendition has been colorized to a bluish hue in order to emphasize the subtle contrasts in the cloud markings and to indicate that it was taken through a violet filter. The sulfuric acid clouds indicate considerable convective activity in the equatorial regions of the planet to the left and downwind of the subsolar point (afternoon on Venus). They are analogous to 'fair weather clouds' on Earth. The filamentary dark features visible in the colorized image are here revealed to be composed of several dark nodules, like beads on a string, each about 60 miles across. These images of the Venus clouds were taken by Galileo's Solid State Imaging System on February 13, 1990, at a range of about 1 million miles. The smallest detail visible is about 20 miles. The two right images show Venus in violet light, the top one at a time six hours later than the bottom one. They show the state of the clouds near the top of Venus's cloud deck. A right to left motion of the cloud features is evident and is consistent with westward winds of about 230 mph. The two left images show Venus in near infrared light, at the same times as the two right images. Sunlight penetrates through the clouds more deeply at the near infrared wavelengths, allowing a view near the bottom of the cloud deck. The westward motion of the clouds is slower (about 150 mph) at the lower altitude. The clouds are composed of sulfuric acid droplets and occupy a range of altitudes from 30 to 45 miles. The images have been spatially filtered to bring out small scale details and de-emphasize global shading. The filtering has introduced artifacts (wiggly lines running north/south) that are faintly visible in the infrared image. The Galileo Project is managed for NASA's Office of Space Science and Applications by the Jet Propulsion Laboratory; its mission is to study Jupiter and its satellites and magnetosphere after multiple gravity assist flybys at Venus and Earth.
Comparison of CERES Cloud Properties Derived from Aqua and Terra MODIS Data and TRMM VIRS Radiances
NASA Astrophysics Data System (ADS)
Minnis, P.; Young, D. F.; Sun-Mack, S.; Trepte, Q. Z.; Chen, Y.; Heck, P. W.; Wielicki, B. A.
2003-12-01
The Clouds and Earth's Radiant Energy System (CERES) Project is obtaining Earth radiation budget measurements of unprecedented accuracy as a result of improved instruments and an analysis system that combines simultaneous, high-resolution cloud property retrievals with the broadband radiance data. The cloud properties are derived from three different satellite imagers: the Visible Infrared Scanner (VIRS) on the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometers (MODIS) on the Aqua and Terra satellites. A single set of consistent algorithms using the 0.65, 1.6 or 2.1, 3.7, 10.8, and 12.0-µm channels is applied to all three imagers. The cloud properties include cloud coverage, height, thickness, temperature, optical depth, phase, effective particle size, and liquid or ice water path. Because each satellite is in a different orbit, the results provide information on the diurnal cycle of cloud properties. Initial intercalibrations show excellent consistency between the three imagers except for some differences of ~1 K between the 3.7-µm channel on Terra and those on VIRS and Aqua. The derived cloud properties are consistent with the known diurnal characteristics of clouds in different areas. These datasets should be valuable for exploring the role of clouds in the radiation budget and hydrological cycle.
Using Roving Cloud Observations from the S'COOL Project to Engage Citizen Scientists
NASA Astrophysics Data System (ADS)
Lewis, P. M.; Oostra, D.; Moore, S. W.; Rogerson, T. M.; Crecelius, S. A.; Chambers, L. H.
2011-12-01
Students' Clouds Observations On-Line (S'COOL) is a hands-on project, which supports NASA research on the Earth's climate. Through their observations, participants are engaged in identifying cloud types and levels and sending that information to NASA. The two main groups of S'COOL observers are permanent locations, such as regularly participating classrooms, and non-permanent locations, or Rovers. These non-permanent locations can be a field trip, vacation, or just an occasional observation from a backyard. S'COOL welcomes participation from any interested observers, especially from places where official weather observations are few and far between. This program is offered to citizen scientists all over the world. They are participating in climate research by reporting cloud types and levels within +/- 15 minutes of a satellite overpass and sending that information back to NASA. When a participant's cloud observation coincides with a satellite overpass, the project sends them an email with a MODIS image of the overpass location, and a comparison of the satellite's cloud data results next to their ground-based report. This allows the students and citizen scientists to participate in ground-truthing the CERES satellite data, to determine the level of agreement/disagreement. A new tool slated for future use in cloud identification, developed by the S'COOL team, is a mobile application. The application is entitled "Cloud Identification for Students" or "CITRUS". The mobile application utilizes a cloud dichotomous key with images to help with cloud identification. Also included in the application is a link to the project's cloud-reporting page to help with data submission in the field. One of the project's recent and most unusual roving observers is a solo ocean rower who has traversed many of the world's ocean basins alone in a rowboat. While rowing across the oceans, she has recently been making cloud observations, which she sends back to us for analysis. In doing so, she is contributing difficult-to-collect ground-based data from points over the ocean, where there are typically no human inhabitants. As a result of the cloud reporting, we are able to better validate satellite data that give us a more complete picture of clouds in the atmosphere and their interactions with other parts of the integrated global Earth system. After making the cloud observations, students and citizen scientists are able to analyze the report they get back from NASA, improving their observation/data collection skills while keeping track of cloud patterns as they participate. Through the use of mobile technology, it will be possible to observe and immediately report the observation, allowing for a faster turnaround on satellite reports and ground-truth data analysis. This paper will provide an analysis of the non-permanent observations made by the roving observers. These observations will give us insight into their usefulness, as well as future steps for the program.
A study on strategic provisioning of cloud computing services.
Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah
2014-01-01
Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang
2018-03-01
Defects on the facades of a building have a profound impact on the building's life cycle. How to identify the defects is a crucial issue; destructive and non-destructive methods are usually employed to identify defects present in a building. Destructive methods always cause permanent damage to the examined objects; on the other hand, non-destructive testing (NDT) methods have been widely applied to detect defects in the exterior layers of a building. However, NDT methods alone cannot provide efficient and reliable information for identifying the defects because of the huge examination areas. Infrared thermography is often applied to quantitative energy performance measurements for building envelopes. Defects on the exterior layer of buildings may be caused by several factors: ventilation losses, conduction losses, thermal bridging, defective services, moisture condensation, moisture ingress, and structural defects. Analyzing the collected thermal images can be quite difficult when the spatial variations of surface temperature are small. In this paper the authors employ image segmentation to cluster pixels with similar surface temperatures such that the processed thermal images are composed of a limited number of groups. The surface temperature distribution in each segmented group is homogeneous. In doing so, the regional boundaries of the segmented regions can be identified and extracted. A terrestrial laser scanner (TLS) is widely used to collect the point clouds of a building, and those point clouds are applied to reconstruct the 3D model of the building. A mapping model is constructed such that the segmented thermal images can be projected onto the 2D image of the specified 3D building. In this paper, the administrative building on the Chaoyang University campus is used as an example. The experimental results not only provide the defect information but also offer the corresponding spatial locations of the defects in the 3D model.
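The segmentation step described here, grouping pixels of similar surface temperature into a limited number of homogeneous classes, can be approximated with a standard clustering routine. The sketch below uses k-means as a stand-in; the authors do not necessarily use this particular algorithm, and the number of groups is a free parameter.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_thermal_image(temperature, n_groups=4):
    """Cluster a thermal image into groups of similar surface temperature.

    temperature : 2-D array of surface temperatures (deg C) from the camera.
    Returns an integer label image; the boundaries between labels outline
    the candidate defect regions.
    """
    h, w = temperature.shape
    samples = temperature.reshape(-1, 1)          # one feature: temperature
    labels = KMeans(n_clusters=n_groups, n_init=10,
                    random_state=0).fit_predict(samples)
    return labels.reshape(h, w)
```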
Use of cloud computing in biomedicine.
Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil
2016-12-01
Nowadays, biomedicine is characterised by a growing need for processing of large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of using ICT. The aim of this paper is to explore the concept of cloud computing and the related use of this concept in the area of biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing paradigm of cloud computing aspects is analysed. Finally, the potential and current use of applications of this technology in scientific research in biomedicine is discussed.
2017-12-08
Like a ship carving its way through the sea, the South Georgia and South Sandwich Islands parted the clouds. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite acquired this natural-color image on February 2, 2017. The ripples in the clouds are known as gravity waves. NASA image by Jeff Schmaltz, LANCE/EOSDIS Rapid Response
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, for example from low-cost global positioning system and inertial measurement unit sensors.
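The point-cloud half of such a fusion scheme is usually some variant of iterative closest point (ICP). The sketch below shows a single point-to-point ICP iteration (nearest-neighbour correspondences plus a Kabsch rigid-transform solve); it is a generic baseline under simple assumptions, not the texel-image method itself, which couples this geometric step with image registration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: find nearest neighbours of the
    source points in the target cloud and compute the rigid transform
    (R, t) that best aligns source to target in a least-squares sense.

    source, target : (N, 3) and (M, 3) arrays of 3-D points.
    """
    # 1. Correspondences by nearest neighbour.
    _, idx = cKDTree(target).query(source)
    matched = target[idx]

    # 2. Kabsch/Procrustes solution for the rigid transform.
    src_c = source - source.mean(axis=0)
    tgt_c = matched - matched.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = matched.mean(axis=0) - R @ source.mean(axis=0)
    return R, t   # aligned cloud: (R @ source.T).T + t
```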
NASA Astrophysics Data System (ADS)
Nugent, P. W.; Shaw, J. A.; Piazzolla, S.
2013-02-01
The continuous demand for high data return in deep space and near-Earth satellite missions has led NASA and international institutions to consider alternative technologies for high-data-rate communications. One solution is the establishment of wide-bandwidth Earth-space optical communication links, which require (among other things) a nearly obstruction-free atmospheric path. Considering the atmospheric channel, the most common and most apparent impairments on Earth-space optical communication paths arise from clouds. Therefore, the characterization of the statistical behavior of cloud coverage for optical communication ground station candidate sites is of vital importance. In this article, we describe the development and deployment of a ground-based, long-wavelength infrared cloud imaging system able to monitor and characterize the cloud coverage. This system is based on a commercially available camera with a 62-deg diagonal field of view. A novel internal-shutter-based calibration technique allows radiometric calibration of the camera, which operates without a thermoelectric cooler. This cloud imaging system provides continuous day-night cloud detection with constant sensitivity. The cloud imaging system also includes data-processing algorithms that calculate and remove atmospheric emission to isolate cloud signatures, and enable classification of clouds according to their optical attenuation. Measurements of long-wavelength infrared cloud radiance are used to retrieve the optical attenuation (cloud optical depth due to absorption and scattering) in the wavelength range of interest from visible to near-infrared, where the cloud attenuation is quite constant. This article addresses the specifics of the operation, calibration, and data processing of the imaging system that was deployed at the NASA/JPL Table Mountain Facility (TMF) in California. Data are reported from July 2008 to July 2010. These data describe seasonal variability in cloud cover at the TMF site, with cloud amount (percentage of cloudy pixels) peaking at just over 51 percent during February, of which more than 60 percent had optical attenuation exceeding 12 dB at wavelengths in the range from the visible to the near-infrared. The lowest cloud amount was found during August, averaging 19.6 percent, and these clouds were mostly optically thin, with low attenuation.
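Once the atmospheric emission has been modelled and removed, classifying pixels by the residual cloud radiance is essentially a thresholding exercise. The sketch below illustrates only that final step, with invented threshold values; the radiometric calibration and clear-sky emission model described in the article are assumed to be available upstream.

```python
import numpy as np

def classify_cloud_opacity(radiance, clear_sky_radiance, thresholds=(0.5, 2.0, 12.0)):
    """Classify each pixel of a calibrated LWIR sky image by cloud opacity.

    radiance           : 2-D array of measured band radiance (W m-2 sr-1)
    clear_sky_radiance : modelled atmospheric emission for the same geometry
    thresholds         : residual-radiance bounds separating clear / thin /
                         moderate / thick cloud (illustrative values only)
    Returns 0 = clear, 1 = thin, 2 = moderate, 3 = thick/opaque.
    """
    residual = radiance - clear_sky_radiance   # emission attributable to cloud
    classes = np.zeros(radiance.shape, dtype=np.int8)
    classes[residual > thresholds[0]] = 1
    classes[residual > thresholds[1]] = 2
    classes[residual > thresholds[2]] = 3
    return classes
```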
2017-12-08
A low ceiling of broken clouds offers opportunities for researchers to sample clouds during part of the flight and clear air during other parts of the flight. --- The North Atlantic Aerosols and Marine Ecosystems Study (NAAMES) is a five-year investigation to resolve key processes controlling ocean system function, their influences on atmospheric aerosols and clouds and their implications for climate. Michael Starobin joined the NAAMES field campaign on behalf of Earth Expeditions and NASA Goddard Space Flight Center's Office of Communications. He presented stories about the important, multi-disciplinary research being conducted by the NAAMES team, with an eye towards future missions on the NASA drawing board. This is a NAAMES photo essay put together by Starobin, a collection of 49 photographs and captions. Photo and Caption Credit: Michael Starobin
IRAS images of nearby dark clouds
NASA Technical Reports Server (NTRS)
Wood, Douglas O. S.; Myers, Philip C.; Daugherty, Debra A.
1994-01-01
We have investigated approximately 100 nearby molecular clouds using the extensive, all-sky database of IRAS. The clouds in this study cover a wide range of physical properties including visual extinction, size, mass, degree of isolation, homogeneity and morphology. IRAS 100 and 60 micron co-added images were used to calculate the 100 micron optical depth of dust in the clouds. These images of dust optical depth compare very well with 12CO and 13CO observations, and can be related to H2 column density. From the optical depth images we locate the edges of dark clouds and the dense cores inside them. We have identified a total of 43 'IRAS clouds' (regions with A_V greater than 2) which contain a total of 255 'IRAS cores' (regions with A_V greater than 4) and we catalog their physical properties. We find that the clouds are remarkably filamentary, and that the cores within the clouds are often distributed along the filaments. The largest cores are usually connected to other large cores by filaments. We have developed selection criteria to search the IRAS Point Source Catalog for stars that are likely to be associated with the clouds and we catalog the IRAS sources in each cloud or core. Optically visible stars associated with the clouds have been identified from the Herbig and Bell catalog. From these data we characterize the physical properties of the clouds including their star-formation efficiency.
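For optically thin dust, the 60/100-micron intensity ratio fixes a colour temperature, and the 100-micron optical depth then follows from tau ~ I / B(T). The sketch below implements that standard estimate under simple assumptions (a single dust temperature along the line of sight, an emissivity index beta, intensities expressed in the same units as the Planck function, and an illustrative temperature bracket); it is not the authors' exact processing chain.

```python
import numpy as np
from scipy.optimize import brentq

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda(T) in W m-3 sr-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2 * H * C**2 / wavelength_m**5 / np.expm1(x)

def dust_tau100(i60, i100, beta=1.5):
    """Colour temperature and 100-micron optical depth from IRAS 60 and
    100 micron intensities (optically thin, emissivity ~ frequency**beta)."""
    w60, w100 = 60e-6, 100e-6
    ratio = i60 / i100

    def f(temp):
        model = (planck(w60, temp) / planck(w100, temp)) * (w100 / w60) ** beta
        return model - ratio

    t_dust = brentq(f, 10.0, 100.0)        # bracket chosen for typical clouds
    tau100 = i100 / planck(w100, t_dust)   # optically thin: I = tau * B(T)
    return t_dust, tau100
```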
Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data
NASA Astrophysics Data System (ADS)
Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie
2011-07-01
Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation work is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time-of-flight or phase-shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion-free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with much better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman Theatre of Mandeure (France), and the Medieval Fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.
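The dense-matching alternative to TLS can be illustrated with a standard stereo pipeline. The sketch below assumes a calibrated and rectified stereo pair and uses OpenCV's semi-global block matcher to produce a disparity map that is reprojected to a 3-D point cloud; the matcher parameters are illustrative, and Q is the disparity-to-depth matrix obtained from stereo calibration.

```python
import cv2
import numpy as np

def stereo_point_cloud(left_gray, right_gray, Q):
    """Generate a 3-D point cloud from a calibrated, rectified stereo pair
    of 8-bit grayscale images via semi-global block matching.

    Q : 4x4 disparity-to-depth reprojection matrix from stereo calibration.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,  # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return points[valid], left_gray[valid]   # 3-D points and their intensities
```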
The State of Cloud-Based Biospecimen and Biobank Data Management Tools.
Paul, Shonali; Gade, Aditi; Mallipeddi, Sumani
2017-04-01
Biobanks are critical for collecting and managing high-quality biospecimens from donors with appropriate clinical annotation. High-quality human biospecimens and associated data are required to better understand disease processes. Therefore, biobanks have become an important and essential resource for healthcare research and drug discovery. However, collecting and managing huge volumes of data (biospecimens and associated clinical data) necessitates that biobanks use appropriate data management solutions that can keep pace with the ever-changing requirements of research. To automate biobank data management, biobanks have been investing in traditional Laboratory Information Management Systems (LIMS). However, there are a myriad of challenges faced by biobanks in acquiring traditional LIMS. Traditional LIMS are cost-intensive and often lack the flexibility to accommodate changes in data sources and workflows. Cloud technology is emerging as an alternative that provides the opportunity for small and medium-sized biobanks to automate their operations in a cost-effective manner, even without IT personnel. Cloud-based solutions offer the advantage of heightened security, rapid scalability, and dynamic allocation of services, and can facilitate collaboration between different research groups by using a shared environment on a "pay-as-you-go" basis. The benefits offered by cloud technology have resulted in the development of cloud-based data management solutions as an alternative to traditional on-premise software. After evaluating the advantages offered by cloud technology, several biobanks have started adopting cloud-based tools. Cloud-based tools provide biobanks with easy access to biospecimen data for real-time sharing with clinicians. Another major benefit realized by biobanks implementing cloud-based applications is unlimited data storage on the cloud and automatic backups to protect against data loss in the face of natural calamities.
NASA Astrophysics Data System (ADS)
Hueso, Ricardo; Garate-Lopez, I.; Peralta, J.; Bandos, T.; Sánchez-Lavega, A.
2013-10-01
After more than 6 years orbiting Venus, the Venus Express mission has provided the largest database of observations of the Venus atmosphere at different cloud layers with the combination of the VMC and VIRTIS instruments. We present measurements of cloud motions in the southern hemisphere of Venus, analyzing images from the VIRTIS-M visible channel at different wavelengths sensitive to the upper cloud haze at 65-70 km height (dayside ultraviolet images) and the middle cloud deck (dayside visible and near-infrared images around 1 μm) about 5-8 km deeper in the atmosphere. We combine VIRTIS images at nearby wavelengths to increase the contrast of atmospheric details, and measurements were obtained with a semi-automatic cloud correlation algorithm. Both cloud layers are studied simultaneously to infer similarities and differences between these vertical levels in terms of cloud morphologies and winds. For both levels we present global mean zonal and meridional winds, the latitudinal distribution of winds with local time, and the wind shear between both altitudes. The upper branch of the Hadley cell circulation is well resolved in UV images, with an acceleration of the meridional circulation at mid-latitudes with increasing local time, peaking at 14-16h. This organized meridional circulation is almost absent in NIR images. Long-term variability of zonal winds is also found in UV images, with winds increasing over time during the VEX mission. This is in agreement with current analysis of VMC images (Khatuntsev et al. 2013). The possible long-term acceleration of zonal winds is also examined for NIR images. References: Khatuntsev et al., Icarus 226, 140-158 (2013)
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
2016-09-14
Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors by at least 30% relative to the ground-truth motions in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
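As a minimal illustration of the optical-flow side of such a framework (the block-matching component and the hybrid combination described above are omitted), the sketch below estimates a dense Farneback motion field between two sky frames and advects the current frame forward to produce a naive next-image forecast; the flow parameters are generic defaults.

```python
import cv2
import numpy as np

def forecast_next_sky_image(prev_gray, curr_gray):
    """Estimate a dense cloud motion field with Farneback optical flow and
    advect the current sky image one time step forward with it."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Backward warping: each output pixel is sampled from where the motion
    # field says it came from in the current image (persistence assumption).
    map_x = grid_x - flow[..., 0]
    map_y = grid_y - flow[..., 1]
    return cv2.remap(curr_gray, map_x, map_y, cv2.INTER_LINEAR)
```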
CloudSat Profiles Tropical Storm Andrea
2007-05-10
CloudSat's Cloud Profiling Radar captured a profile across Tropical Storm Andrea on Wednesday, May 9, 2007, near the South Carolina/Georgia/Florida Atlantic coast. The upper image shows an infrared view of Tropical Storm Andrea from the Moderate Resolution Imaging Spectroradiometer instrument on NASA's Aqua satellite, with CloudSat's ground track shown as a red line. The lower image is the vertical cross section of radar reflectivity along this path, where the colors indicate the intensity of the reflected radar energy. CloudSat orbits approximately one minute behind Aqua in a satellite formation known as the A-Train. http://photojournal.jpl.nasa.gov/catalog/PIA09379
Services for domain specific developments in the Cloud
NASA Astrophysics Data System (ADS)
Schwichtenberg, Horst; Gemuend, André
2015-04-01
We will discuss and demonstrate the possibilities of new Cloud services in which the complete development cycle of code, from programming to testing, takes place in the Cloud. This can also be combined with dedicated research-domain-specific services and hide the burden of accessing available infrastructures. As an example, we will show a service that is intended to complement the services of the VERCE project's infrastructure, a service that utilizes Cloud resources to offer simplified execution of data pre- and post-processing scripts. It offers users access to the ObsPy seismological toolbox for processing data with the Python programming language, executed on virtual Cloud resources in a secured sandbox. The solution encompasses a frontend with a modern graphical user interface, a messaging infrastructure, as well as Python worker nodes for background processing. All components are deployable in the Cloud and have been tested on different environments based on OpenStack and OpenNebula. Deployments on commercial, public Clouds will be tested in the future.
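A user script submitted to such a sandbox might look like the following ObsPy sketch, which loads ObsPy's bundled example waveform and applies typical pre-processing steps. In the actual service the data would instead come from the VERCE infrastructure, and the script would be executed on the Cloud worker nodes rather than locally.

```python
from obspy import read

# Load ObsPy's built-in example waveform (a real job would read data
# staged by the service instead).
stream = read()

# Typical pre-processing a submitted script might perform before
# further analysis or visualisation.
stream.detrend("linear")
stream.taper(max_percentage=0.05)
stream.filter("bandpass", freqmin=1.0, freqmax=10.0)

print(stream)
```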
NASA Astrophysics Data System (ADS)
Wang, Xi Vincent; Wang, Lihui
2017-08-01
Cloud computing is the new enabling technology that offers centralised computing, flexible data storage and scalable services. In the manufacturing context, it is possible to utilise Cloud technology to integrate and provide industrial resources and capabilities in terms of Cloud services. In this paper, a function block-based integration mechanism is developed to connect various types of production resources. A Cloud-based architecture is also deployed to offer a service pool which maintains these resources as production services. The proposed system provides a flexible and integrated information environment for the Cloud-based production system. As a specific type of manufacturing, Waste Electrical and Electronic Equipment (WEEE) remanufacturing experiences difficulties in system integration, information exchange and resource management. In this research, WEEE is selected as an example of the Internet of Things to demonstrate how the obstacles and bottlenecks are overcome with the help of the Cloud-based informatics approach. In the case studies, the WEEE recycle/recovery capabilities are also integrated and deployed as flexible Cloud services. Supporting mechanisms and technologies are presented and evaluated towards the end of the paper.
Tropical Depression Debbie in the Atlantic
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed for brevity; see original site for the microwave and visible-light images.]
Infrared Image These images show Tropical Depression Debbie in the Atlantic, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite on August 22, 2006. This AIRS image shows the temperature of the cloud tops or, in cloud-free regions, the temperature of the surface of the Earth. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. The infrared signal does not penetrate through clouds. Where there are no clouds the AIRS instrument reads the infrared signal from the surface of the Earth, revealing warmer temperatures (red). At the time the data from which these images were made were taken, the eye had not yet opened, but the storm is now well organized. The location of the future eye appears as a circle at 275 K brightness temperature in the microwave image just to the SE of the Azores. Microwave Image The microwave image is created from microwave radiation emitted by Earth's atmosphere and received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through. Vis/NIR Image Tropical Depression Debbie captured by the visible light/near-infrared sensor on the AIRS instrument. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
2016-12-21
This comparison of two views from NASA's Cassini spacecraft, taken fairly close together in time, illustrates a peculiar mystery: Why would clouds on Saturn's moon Titan be visible in some images, but not in others? In the top view, a near-infrared image from Cassini's imaging cameras, the skies above Saturn's moon Titan look relatively cloud free. But in the bottom view, at longer infrared wavelengths, Cassini sees a large field of bright clouds. Even though these views were taken at different wavelengths, researchers would expect at least a hint of the clouds to show up in the upper image. Thus they have been trying to understand what's behind the difference. As northern summer approaches on Titan, atmospheric models have predicted that clouds will become more common at high northern latitudes, similar to what was observed at high southern latitudes during Titan's late southern summer in 2004. Cassini's Imaging Science Subsystem (ISS) and Visual and Infrared Mapping Spectrometer (VIMS) teams have been observing Titan to document changes in weather patterns as the seasons change, and there is particular interest in following the onset of clouds in the north polar region where Titan's lakes and seas are concentrated. Cassini's "T120" and "T121" flybys of Titan, on June 7 and July 25, 2016, respectively, provided views of high northern latitudes over extended time periods -- more than 24 hours during both flybys. Intriguingly, the ISS and VIMS observations appear strikingly different from each other. In the ISS observations (monochrome image at top), surface features are easily identifiable and only a few small, isolated clouds were detected. In contrast, the VIMS observations (color image at bottom) suggest widespread cloud cover during both flybys. The observations were made over the same time period, so differences in illumination geometry or changes in the clouds themselves are unlikely to be the cause for the apparent discrepancy: VIMS shows persistent atmospheric features over the entire observation period and ISS consistently detects surface features with just a few localized clouds. The answer to what could be causing the discrepancy appears to lie with Titan's hazy atmosphere, which is much easier to see through at the longer infrared wavelengths that VIMS is sensitive to (up to 5 microns) than at the shorter, near-infrared wavelength used by ISS to image Titan's surface and lower atmosphere (0.94 microns). High, thin cirrus clouds that are optically thicker than the atmospheric haze at longer wavelengths, but optically thinner than the haze at the shorter wavelength of the ISS observations, could be detected by VIMS and simultaneously lost in the haze to ISS -- similar to trying to see a thin cloud layer on a hazy day on Earth. This phenomenon has not been seen again since July 2016, but Cassini has several more opportunities to observe Titan over the last months of the mission in 2017, and scientists will be watching to see if and how the weather changes. These two images were taken as part of the T120 flyby on June 7 (VIMS) and 8 (ISS), 2016. The distance to Titan was about 28,000 miles (45,000 kilometers) for the VIMS image and about 398,000 miles (640,000 kilometers) for the ISS image. The VIMS image has been processed to enhance the visibility of the clouds; in this false-color view, clouds appear nearly white, atmospheric haze is pink, and surface areas would appear green. http://photojournal.jpl.nasa.gov/catalog/PIA21054
Platform for High-Assurance Cloud Computing
2016-06-01
...to create today's standard cloud computing applications and services. Additionally, our SuperCloud (a related but distinct project under the same MRC funding) reduces vendor lock-in and permits applications to migrate, to follow... managing key-value storage with strong assurance properties. This first accomplishment allows us to climb the cloud technical stack, by offering...
Influence of Ice Cloud Microphysics on Imager-Based Estimates of Earth's Radiation Budget
NASA Astrophysics Data System (ADS)
Loeb, N. G.; Kato, S.; Minnis, P.; Yang, P.; Sun-Mack, S.; Rose, F. G.; Hong, G.; Ham, S. H.
2016-12-01
A central objective of the Clouds and the Earth's Radiant Energy System (CERES) is to produce a long-term global climate data record of Earth's radiation budget from the TOA down to the surface, along with the associated atmospheric and surface properties that influence it. CERES relies on a number of data sources, including broadband radiometers measuring incoming and reflected solar radiation and OLR, high-resolution spectral imagers, meteorological, aerosol and ozone assimilation data, and snow/sea-ice maps based on microwave radiometer data. While the TOA radiation budget is largely determined directly from accurate broadband radiometer measurements, the surface radiation budget is derived indirectly through radiative transfer model calculations initialized using imager-based cloud and aerosol retrievals and meteorological assimilation data. Because ice cloud particles exhibit a wide range of shapes, sizes and habits that cannot be independently retrieved a priori from passive visible/infrared imager measurements, assumptions about the scattering properties of ice clouds are necessary in order to retrieve ice cloud optical properties (e.g., optical depth) from imager radiances and to compute broadband radiative fluxes. This presentation will examine how the choice of an ice cloud particle model impacts computed shortwave (SW) radiative fluxes at the top-of-atmosphere (TOA) and surface. The ice cloud particle models considered correspond to those from prior, current and future CERES data product versions. During the CERES Edition 2 (and Edition 3) processing, ice cloud particles were assumed to be smooth hexagonal columns. In Edition 4, roughened hexagonal columns are assumed. The CERES team is now working on implementing, in a future version, a two-habit ice cloud particle model consisting of roughened hexagonal columns and aggregates of roughened columnar elements. In each case, we use the same ice particle model in both the imager-based cloud retrievals (inverse problem) and the computed radiative fluxes (forward calculation). In addition to comparing radiative fluxes using the different ice cloud particle models, we also compare instantaneous TOA flux calculations with those observed by the CERES instrument.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images have shortcomings: they depend strongly on light conditions, and the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the shortage of multispectral information, which remains a challenge in ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are applied for classification.
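The final thresholding step lends itself to a compact illustration. The sketch below classifies a synthetic multispectral point cloud using height and near-infrared thresholds; the band layout, threshold values and class names are assumptions made for the example, not values from the paper.

```python
# Illustrative sketch (not the authors' code): classify a multispectral
# point cloud by simple height and colour thresholds, as outlined above.
import numpy as np

# points: N x 6 array -> x, y, z, red, green, near-infrared
rng = np.random.default_rng(0)
points = rng.uniform(0, 255, size=(1000, 6))
points[:, 2] = rng.uniform(0.0, 12.0, size=1000)        # z in metres

HEIGHT_T = 2.0        # metres above local ground (assumed)
NIR_T = 120.0         # NIR digital-number threshold (assumed)

z, nir = points[:, 2], points[:, 5]
labels = np.full(len(points), "ground", dtype=object)
labels[(z > HEIGHT_T) & (nir > NIR_T)] = "vegetation"    # tall + NIR-bright
labels[(z > HEIGHT_T) & (nir <= NIR_T)] = "building"     # tall + NIR-dark

for cls in ("ground", "vegetation", "building"):
    print(cls, int((labels == cls).sum()))
```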
False Color Mosaic Great Red Spot
NASA Technical Reports Server (NTRS)
1996-01-01
False color representation of Jupiter's Great Red Spot (GRS) taken through three different near-infrared filters of the Galileo imaging system and processed to reveal cloud top height. Images taken through Galileo's near-infrared filters record sunlight beyond the visible range that penetrates to different depths in Jupiter's atmosphere before being reflected by clouds. The Great Red Spot appears pink and the surrounding region blue because of the particular color coding used in this representation. Light reflected by Jupiter at a wavelength (886 nm) where methane strongly absorbs is shown in red. Due to this absorption, only high clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (732 nm) where methane absorbs less strongly is shown in green. Lower clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (757 nm) where there are essentially no absorbers in the Jovian atmosphere is shown in blue: This light is reflected from the deepest clouds. Thus, the color of a cloud in this image indicates its height. Blue or black areas are deep clouds; pink areas are high, thin hazes; white areas are high, thick clouds. This image shows the Great Red Spot to be relatively high, as are some smaller clouds to the northeast and northwest that are surprisingly like towering thunderstorms found on Earth. The deepest clouds are in the collar surrounding the Great Red Spot, and also just to the northwest of the high (bright) cloud in the northwest corner of the image. Preliminary modeling shows these cloud heights vary over 30 km in altitude. This mosaic, of eighteen images (6 in each filter) taken over a 6 minute interval during the second GRS observing sequence on June 26, 1996, has been map-projected to a uniform grid of latitude and longitude. North is at the top.
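The three-filter false-colour compositing described above can be sketched in a few lines. The example below stacks three synthetic filter images (standing in for the 886, 732 and 757 nm mosaics) into the red, green and blue channels after a simple contrast stretch; it illustrates the colour coding only and is not the Galileo processing pipeline.

```python
# A minimal sketch of the false-colour compositing described above: three
# near-infrared filter images are scaled and stacked into the red, green
# and blue channels. The arrays here are synthetic stand-ins.
import numpy as np

def normalise(band: np.ndarray) -> np.ndarray:
    """Stretch a single band to the 0-1 range."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo + 1e-12)

rng = np.random.default_rng(1)
r_886 = rng.random((256, 256))   # strong methane absorption -> high clouds
g_732 = rng.random((256, 256))   # weaker absorption -> mid-level clouds
b_757 = rng.random((256, 256))   # continuum -> deepest clouds

false_colour = np.dstack([normalise(r_886),
                          normalise(g_732),
                          normalise(b_757)])   # H x W x 3, ready to display
print(false_colour.shape, false_colour.min(), false_colour.max())
```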
Launched in October 1989, Galileo entered orbit around Jupiter on December 7, 1995. The spacecraft's mission is to conduct detailed studies of the giant planet, its largest moons and the Jovian magnetic environment. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
A Multi-Year Data Set of Cloud Properties Derived for CERES from Aqua, Terra, and TRMM
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Sunny Sun-Mack; Trepte, Quinz Z.; Yan Chen; Brown, Richard R.; Gibson, Sharon C.; Heck, Michael L.; Dong, Xiquan; Xi, Baike
2007-01-01
The Clouds and Earth's Radiant Energy System (CERES) Project is producing a suite of cloud properties from high-resolution imagers on several satellites and matching them precisely with broadband radiance data to study the influence of clouds and radiation on climate. The cloud properties generally compare well with independent validation sources. Distinct differences are found between the CERES cloud properties and those derived with other algorithms from the same imager data. CERES products will be updated beginning in late 2006.
A holistic image segmentation framework for cloud detection and extraction
NASA Astrophysics Data System (ADS)
Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe
2013-05-01
Atmospheric clouds are commonly encountered phenomena affecting visual tracking from airborne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering game-theoretic, image segmentation-based approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionarily stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use boundary and shape features to refine the cloud segments; this step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, for which it provides supportive results.
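Of the three evolutionary dynamics mentioned, replicator dynamics is the easiest to sketch. The toy example below evolves a mixed strategy over a small, made-up affinity (payoff) matrix so that the surviving support indicates one cluster; it illustrates the "clustering game" idea in general terms and is not the authors' implementation.

```python
# Sketch of the "clustering game" idea using replicator dynamics on a
# pixel-similarity (payoff) matrix; a generic illustration only.
import numpy as np

def replicator_cluster(A: np.ndarray, iters: int = 500) -> np.ndarray:
    """Evolve a mixed strategy x under discrete replicator dynamics.

    A is a non-negative symmetric affinity matrix; the support of the
    converged x indicates one cluster (an equilibrium of the game).
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        payoff = A @ x
        x = x * payoff / (x @ payoff + 1e-12)   # replicator update step
    return x

# Toy affinity matrix: two mutually similar "cloud-like" pixels plus
# three weakly related background pixels.
A = np.array([[0, 9, 1, 1, 1],
              [9, 0, 1, 1, 1],
              [1, 1, 0, 2, 2],
              [1, 1, 2, 0, 2],
              [1, 1, 2, 2, 0]], dtype=float)

x = replicator_cluster(A)
print("cluster membership weights:", np.round(x, 3))
```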
Investigating the Use of Cloudbursts for High-Throughput Medical Image Registration
Kim, Hyunjoo; Parashar, Manish; Foran, David J.; Yang, Lin
2010-01-01
This paper investigates the use of clouds and autonomic cloudbursting to support medical image registration. The goal is to enable a virtual computational cloud that integrates local computational environments and public cloud services on-the-fly, and supports image registration requests from different distributed research groups with varied computational requirements and QoS constraints. The virtual cloud essentially implements shared and coordinated task-spaces, which coordinate the scheduling of jobs submitted by a dynamic set of research groups to their local job queues. A policy-driven scheduling agent uses the QoS constraints along with performance history and the state of the resources to determine the appropriate size and mix of the public and private cloud resources that should be allocated to a specific request. The virtual computational cloud and the medical image registration service have been developed using the CometCloud engine and have been deployed on a combination of private clouds at Rutgers University and the Cancer Institute of New Jersey and Amazon EC2. An experimental evaluation is presented and demonstrates the effectiveness of autonomic cloudbursts and policy-based autonomic scheduling for this application. PMID:20640235
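A minimal sketch of the kind of policy-driven decision described here is shown below: given a QoS deadline, a per-job time estimate from performance history, and the number of free private nodes, it returns how many public-cloud nodes to burst to. The function, its inputs and the numbers are illustrative assumptions, not part of the CometCloud API.

```python
# Hedged sketch of a policy-driven cloudburst decision: decide how many
# extra public-cloud nodes to add so that the queued registration jobs
# finish within the QoS deadline. All names and numbers are illustrative.
from dataclasses import dataclass
import math

@dataclass
class Request:
    jobs: int                 # registration tasks in the queue
    deadline_s: float         # QoS constraint in seconds

def nodes_to_burst(req: Request,
                   secs_per_job: float,     # from performance history
                   private_free: int) -> int:
    """Return the number of extra public-cloud nodes to provision."""
    needed = math.ceil(req.jobs * secs_per_job / req.deadline_s)
    return max(0, needed - private_free)

print(nodes_to_burst(Request(jobs=400, deadline_s=600.0),
                     secs_per_job=12.0, private_free=4))   # -> 4
```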
NASA Astrophysics Data System (ADS)
Christensen, Matthew W.; Neubauer, David; Poulsen, Caroline A.; Thomas, Gareth E.; McGarragh, Gregory R.; Povey, Adam C.; Proud, Simon R.; Grainger, Roy G.
2017-11-01
Increased concentrations of aerosol can enhance the albedo of warm low-level cloud. Accurately quantifying this relationship from space is challenging due in part to contamination of aerosol statistics near clouds. Aerosol retrievals near clouds can be influenced by stray cloud particles in areas assumed to be cloud-free, particle swelling by humidification, shadows and enhanced scattering into the aerosol field from (3-D radiative transfer) clouds. To screen for this contamination we have developed a new cloud-aerosol pairing algorithm (CAPA) to link cloud observations to the nearest aerosol retrieval within the satellite image. The distance between each aerosol retrieval and nearest cloud is also computed in CAPA. Results from two independent satellite imagers, the Advanced Along-Track Scanning Radiometer (AATSR) and Moderate Resolution Imaging Spectroradiometer (MODIS), show a marked reduction in the strength of the intrinsic aerosol indirect radiative forcing when selecting aerosol pairs that are located farther away from the clouds (-0.28±0.26 W m-2) compared to those including pairs that are within 15 km of the nearest cloud (-0.49±0.18 W m-2). The larger aerosol optical depths in closer proximity to cloud artificially enhance the relationship between aerosol-loading, cloud albedo, and cloud fraction. These results suggest that previous satellite-based radiative forcing estimates represented in key climate reports may be exaggerated due to the inclusion of retrieval artefacts in the aerosol located near clouds.
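The pairing step in CAPA can be illustrated with a nearest-neighbour query. The sketch below pairs each aerosol retrieval with its nearest cloudy pixel on a local kilometre grid and applies the 15 km screening distance used in the study; the synthetic coordinates and the flat-grid distance are simplifying assumptions.

```python
# Illustrative pairing of aerosol retrievals with the nearest cloudy
# pixel, in the spirit of CAPA: compute the distance to the nearest
# cloud and keep only retrievals farther than 15 km from any cloud.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
cloud_xy = rng.uniform(0, 100, size=(500, 2))     # cloudy-pixel positions (km)
aerosol_xy = rng.uniform(0, 100, size=(200, 2))   # aerosol retrieval positions

tree = cKDTree(cloud_xy)
dist_km, _ = tree.query(aerosol_xy)               # distance to nearest cloud

far_from_cloud = dist_km > 15.0                   # screening distance from text
print("retrievals kept:", int(far_from_cloud.sum()), "of", len(aerosol_xy))
```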
Automated Visibility & Cloud Cover Measurements with a Solid State Imaging System
1989-03-01
GL-TR-89-0061 / SIO Ref. 89-7 / MPL-U-26/89. Automated Visibility & Cloud Cover Measurements with a Solid-State Imaging System. Authors: Richard W. Johnson, W. S. ... The remaining fragments of the report documentation page refer to ground-based imaging systems, their control algorithms, and their initial deployment and preliminary application.
NASA Technical Reports Server (NTRS)
Wen, Guo-Yong; Marshak, Alexander; Cahalan, Robert F.
2004-01-01
Aerosol amount in clear regions of a cloudy atmosphere is a critical parameter in studying the interaction between aerosols and clouds. Since the global cloud cover is about 50%, cloudy scenes are often encountered in satellite images. Aerosols are more or less transparent, while clouds are extremely reflective in the visible spectrum of solar radiation. The radiative transfer in clear-cloudy conditions is highly three-dimensional (3D). This paper focuses on estimating the 3D effects on aerosol optical thickness retrievals using Monte Carlo simulations. An ASTER image of cumulus cloud fields in the biomass burning region of Brazil is simulated in this study. The MODIS products (i.e., cloud optical thickness, particle effective radius, cloud top pressure, surface reflectance, etc.) are used to construct the cloud property and surface reflectance fields. To estimate the cloud 3D effects, we assume a plane-parallel stratification of aerosol properties in the 60 km x 60 km ASTER image. The simulated solar radiation at the top of the atmosphere is compared with plane-parallel calculations. Furthermore, the 3D cloud radiative effects on aerosol optical thickness retrieval are estimated.
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
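A simple way to picture the density bookkeeping is to bin returns by height and report per-layer fractions and densities, as in the hedged sketch below. The height breaks, plot area and synthetic return heights are assumptions for illustration; the paper's actual layer model is more involved.

```python
# Simple sketch: bin LiDAR returns into an overstory layer and understory
# layers by height, then report the fraction of points and the point
# density of each layer. Numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
heights_m = np.concatenate([rng.normal(25, 4, 8000),    # overstory returns
                            rng.normal(10, 3, 1500),    # understory returns
                            rng.normal(3, 1, 500)])     # low understory
plot_area_m2 = 60.0

layers = {"overstory (>18 m)": heights_m > 18,
          "understory (5-18 m)": (heights_m > 5) & (heights_m <= 18),
          "low understory (<=5 m)": heights_m <= 5}

for name, mask in layers.items():
    n = int(mask.sum())
    print(f"{name}: fraction={n/len(heights_m):.2f}, "
          f"density={n/plot_area_m2:.1f} pt/m^2")
```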
Detection and Retrieval of Multi-Layered Cloud Properties Using Satellite Data
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Sun-Mack, Sunny; Chen, Yan; Yi, Helen; Huang, Jian-Ping; Nguyen, Louis; Khaiyer, Mandana M.
2005-01-01
Four techniques for detecting multilayered clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. A new technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other methods examined here use atmospheric sounding data (CO2-slicing, CO2), BTD, or microwave data. The CO2 and BTD methods are limited to optically thin cirrus over low clouds, while the MWR methods are limited to ocean areas only. This paper explores the use of the BTD and CO2 methods as applied to Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer EOS (AMSR-E) data taken from the Aqua satellite over ocean surfaces. Cloud properties derived from MODIS data for the Clouds and the Earth's Radiant Energy System (CERES) Project are used to classify cloud phase and optical properties. The preliminary results focus on a MODIS image taken off the Uruguayan coast. The combined MW visible infrared (MVI) method is assumed to be the reference for detecting multilayered ice-over-water clouds. The BTD and CO2 techniques accurately match the MVI classifications in only 51 and 41% of the cases, respectively. Much additional study is needed to determine the uncertainties in the MVI method and to analyze many more overlapped cloud scenes.
Enhancement of the MODIS Snow and Ice Product Suite Utilizing Image Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Hall, Dorothy K.; Riggs, George A.
2006-01-01
A problem has been noticed with the current MODIS Snow and Ice Product in that fringes of certain snow fields are labeled as "cloud" whereas close inspection of the data indicates that the correct label is a non-cloud category such as snow or land. This occurs because the current MODIS Snow and Ice Product generation algorithm relies solely on the MODIS Cloud Mask Product for the labeling of image pixels as cloud. It is proposed here that information obtained from image segmentation can be used to determine when it is appropriate to override the cloud indication from the cloud mask product. Initial tests show that this approach can significantly reduce the cloud "fringing" in the modified snow cover labeling. More comprehensive testing is required to determine whether or not this approach consistently improves the accuracy of the snow and ice product.
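One possible form of the proposed override is sketched below: within each image segment, if snow dominates, any "cloud" fringe pixels in that segment are relabelled as snow. The label codes, the 0.8 majority threshold and the toy segmentation are illustrative assumptions rather than the algorithm used for the MODIS product.

```python
# Hedged sketch of a segmentation-based override of the cloud mask:
# relabel cloud fringe pixels inside snow-dominated segments.
import numpy as np

SNOW, CLOUD, LAND = 1, 2, 3

def override_cloud_fringe(labels: np.ndarray,
                          segments: np.ndarray,
                          majority: float = 0.8) -> np.ndarray:
    out = labels.copy()
    for seg_id in np.unique(segments):
        in_seg = segments == seg_id
        snow_frac = np.mean(labels[in_seg] == SNOW)
        if snow_frac >= majority:                     # segment is snow-dominated
            out[in_seg & (labels == CLOUD)] = SNOW    # drop the cloud fringe
    return out

labels = np.array([[SNOW, SNOW, SNOW],
                   [SNOW, SNOW, CLOUD],
                   [LAND, LAND, LAND]])
segments = np.array([[0, 0, 0],
                     [0, 0, 0],
                     [1, 1, 1]])
print(override_cloud_fringe(labels, segments))
```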
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the northeast, from between the cloud layers and above the streaks in the lower cloud leading towards the hotspot. The upper haze layer has some features that match the lower cloud, such as the bright streak in the foreground of the frame. These are probably thick clouds that span several tens of vertical kilometers. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the northeast, from between the cloud layers and above the streaks in the lower cloud leading towards the hotspot. The hotspot is clearly visible as a deep blue feature. The cloud streaks end near the hotspot, consistent with the idea that clouds traveling along these streak lines descend and evaporate as they approach the hotspot. The upper haze layer is slightly bowed upwards above the hotspot. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method is used to automatically estimate the cloud statistic of a Formosat-2 image. Second, the cloud coverage of the image is estimated by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method therefore increases the efficiency of the second step by providing a good prediction of the cloud statistic. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistic is determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of manual examination.
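One of the pre-processing cues, Otsu's method, is easy to demonstrate. The sketch below derives a cloud statistic from a single synthetic brightness band by maximising the between-class variance; the "bright pixels are cloud" assumption and the synthetic data are for illustration only, and the K-means, Sobel and cross-band steps are omitted.

```python
# Simplified sketch of one pre-processing cue: an Otsu threshold on a
# brightness band yields an initial cloud mask and cloud percentage.
import numpy as np

def otsu_threshold(band: np.ndarray, bins: int = 256) -> float:
    hist, edges = np.histogram(band, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float)
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0
        m1 = (w[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

rng = np.random.default_rng(4)
band = np.concatenate([rng.normal(60, 10, 9000),      # land/sea pixels
                       rng.normal(200, 15, 1000)])    # bright cloud pixels

t = otsu_threshold(band)
cloud_mask = band > t
print(f"threshold={t:.1f}, cloud cover={100 * cloud_mask.mean():.1f}%")
```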
A Jovian Hotspot in True and False Colors (Time set 3)
NASA Technical Reports Server (NTRS)
1997-01-01
True and false color views of an equatorial 'hotspot' on Jupiter. These images cover an area 34,000 kilometers by 11,000 kilometers. The top mosaic combines the violet (410 nanometers or nm) and near-infrared continuum (756 nm) filter images to create an image similar to how Jupiter would appear to human eyes. Differences in coloration are due to the composition and abundances of trace chemicals in Jupiter's atmosphere. The bottom mosaic uses Galileo's three near-infrared wavelengths (756 nm, 727 nm, and 889 nm displayed in red, green, and blue) to show variations in cloud height and thickness. Bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the deep cloud with an overlying thin haze. The light blue region to the left is covered by a very high haze layer. The multicolored region to the right has overlapping cloud layers of different heights. Galileo is the first spacecraft to distinguish cloud layers on Jupiter.
North is at the top. The mosaics cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees West. The planetary limb runs along the right edge of the image. Cloud patterns appear foreshortened as they approach the limb. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers by the Solid State Imaging system aboard NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 23 April 2002) The Science: This image, centered near 49.7 N and 43.0 W (317.0 E), displays splotchy water ice clouds that obscure the surface. Most of Mars was in a relatively clear period when this image was acquired, which is why many of the other THEMIS images acquired during the same period do not have obvious signs of atmospheric dust or water ice clouds. This image is far enough north to catch the edge of the north polar hood that develops during the northern winter. This is a cap of water ice and CO2 ice clouds that forms over the Martian north pole. In addition to water ice clouds, THEMIS will be able to view a number of other interesting Martian atmospheric phenomena, including dust devils and dust storms, and to track atmospheric temperatures with the infrared camera. The Story: Anyone who's been on an airplane in a storm knows how clouds on Earth can block the view below. The thin water ice clouds on Mars might make things slightly blurry, but at least we can still see the surface. While the surface features may not be as clear in this image, it's actually kind of fascinating to see clouds at work, because we can get a sense of how the north pole on Mars influences the weather and the climate. In this image, the north pole is responsible for the presence of the clouds. Made of water ice and carbon dioxide, these clouds 'mist out' in an atmospheric 'hood' that caps the surface during the northern Martian winter, hiding it from full view of eager observers here on Earth.
NASA Technical Reports Server (NTRS)
Hasler, A. F.
1981-01-01
Observations of cloud geometry using scan-synchronized stereo geostationary satellites having images with horizontal spatial resolution of approximately 0.5 km, and temporal resolution of up to 3 min are presented. The stereo does not require a cloud with known emissivity to be in equilibrium with an atmosphere with a known vertical temperature profile. It is shown that absolute accuracies of about 0.5 km are possible. Qualitative and quantitative representations of atmospheric dynamics were shown by remapping, display, and stereo image analysis on an interactive computer/imaging system. Applications of stereo observations include: (1) cloud top height contours of severe thunderstorms and hurricanes, (2) cloud top and base height estimates for cloud-wind height assignment, (3) cloud growth measurements for severe thunderstorm over-shooting towers, (4) atmospheric temperature from stereo heights and infrared cloud top temperatures, and (5) cloud emissivity estimation. Recommendations are given for future improvements in stereo observations, including a third GOES satellite, operational scan synchronization of all GOES satellites and better resolution sensors.
3D reconstruction optimization using imagery captured by unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.
2017-05-01
Because unmanned air vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft.) to 121.92 m (400 ft.). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3-D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements, as recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz. The average measurement deviation in each coordinate direction was then calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios flown, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions from 50.8 to 76.2 cm (20-30 inches) with greater than 93% accuracy. The largest average deviation in any flight scenario from actual measurements was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precisions of these results are satisfactory in a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
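The accuracy bookkeeping described above amounts to simple per-axis statistics. The sketch below compares made-up in-silico measurements with tape measurements for one flight scenario and reports the mean absolute deviation and an approximate percent accuracy per axis; all numbers are invented for illustration.

```python
# Small sketch of the deviation statistics described above: compare
# point-cloud (in-silico) reference-object measurements with tape
# measurements for one flight scenario.
import numpy as np

# columns: x, y, z dimensions (cm); one row per reference object
tape = np.array([[76.2, 50.8, 30.0],
                 [61.0, 61.0, 45.7],
                 [50.8, 76.2, 20.3]])
in_silico = np.array([[74.9, 52.1, 28.6],
                      [62.4, 59.5, 47.0],
                      [49.6, 77.8, 21.1]])

deviation = np.abs(in_silico - tape)                     # |Δx|, |Δy|, |Δz|
mean_dev = deviation.mean(axis=0)
accuracy_pct = 100 * (1 - deviation / tape).mean(axis=0)

for axis, d, a in zip("xyz", mean_dev, accuracy_pct):
    print(f"Δ{axis}: {d:.2f} cm  (≈{a:.1f}% accuracy)")
```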
GPI Spectroscopy of the Mass, Age, and Metallicity Benchmark Brown Dwarf HD 4747 B
NASA Astrophysics Data System (ADS)
Crepp, Justin R.; Principe, David A.; Wolff, Schuyler; Giorla Godfrey, Paige A.; Rice, Emily L.; Cieza, Lucas; Pueyo, Laurent; Bechter, Eric B.; Gonzales, Erica J.
2018-02-01
The physical properties of brown dwarf companions found to orbit nearby, solar-type stars can be benchmarked against independent measures of their mass, age, chemical composition, and other parameters, offering insights into the evolution of substellar objects. The TRENDS high-contrast imaging survey has recently discovered a (mass/age/metallicity) benchmark brown dwarf orbiting the nearby (d = 18.69 ± 0.19 pc), G8V/K0V star HD 4747. We have acquired follow-up spectroscopic measurements of HD 4747 B using the Gemini Planet Imager to study its spectral type, effective temperature, surface gravity, and cloud properties. Observations obtained in the H-band and K 1-band recover the companion and reveal that it is near the L/T transition (T1 ± 2). Fitting atmospheric models to the companion spectrum, we find strong evidence for the presence of clouds. However, spectral models cannot satisfactorily fit the complete data set: while the shape of the spectrum can be well-matched in individual filters, a joint fit across the full passband results in discrepancies that are a consequence of the inherent color of the brown dwarf. We also find a 2σ tension in the companion mass, age, and surface gravity when comparing to evolutionary models. These results highlight the importance of using benchmark objects to study “secondary effects” such as metallicity, non-equilibrium chemistry, cloud parameters, electron conduction, non-adiabatic cooling, and other subtleties affecting emergent spectra. As a new L/T transition benchmark, HD 4747 B warrants further investigation into the modeling of cloud physics using higher resolution spectroscopy across a broader range of wavelengths, polarimetric observations, and continued Doppler radial velocity and astrometric monitoring.
A cloud-based multimodality case file for mobile devices.
Balkman, Jason D; Loehfelm, Thomas W
2014-01-01
Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.
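One common way to adapt high-resolution content for handheld displays, in the spirit of the techniques discussed here, is to downsample and re-encode images before delivery. The sketch below uses Pillow to do so; the file names, the 2048-pixel long-edge limit and the JPEG quality are placeholder assumptions, not details from the article.

```python
# Hedged sketch: prepare a high-resolution image for a handheld display
# by downsampling and re-encoding as a moderate-quality JPEG.
from PIL import Image

MAX_EDGE = 2048          # long-edge limit assumed suitable for tablets

def prepare_for_mobile(src_path: str, dst_path: str, quality: int = 80) -> None:
    """Assumes an 8-bit source image (e.g. a PNG export of the case)."""
    with Image.open(src_path) as im:
        if im.mode != "RGB":
            im = im.convert("RGB")           # JPEG needs 8-bit RGB
        im.thumbnail((MAX_EDGE, MAX_EDGE))   # downsample, keeping aspect ratio
        im.save(dst_path, format="JPEG", quality=quality)

# Example call with placeholder file names:
# prepare_for_mobile("mammogram_fullfield.png", "mammogram_mobile.jpg")
```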
GIFT-Cloud: A data sharing and collaboration platform for medical imaging research.
Doel, Tom; Shakir, Dzhoshkun I; Pratt, Rosalind; Aertsen, Michael; Moggridge, James; Bellon, Erwin; David, Anna L; Deprest, Jan; Vercauteren, Tom; Ourselin, Sébastien
2017-02-01
Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused towards clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Islam, Muhammad Faysal
2013-01-01
Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…
Cloud Infrastructure & Applications - CloudIA
NASA Astrophysics Data System (ADS)
Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank
The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and to make it available to small- and medium-sized enterprises, the Hochschule Furtwangen University has established a new project called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and reports our early experiences in building a private cloud using an existing infrastructure.
Cloud Streets over the Bering Sea
2017-12-08
NASA image captured January 4, 2012. Most of us prefer our winter roads free of ice, but one kind of road depends on it: a cloud street. Such streets formed over the Bering Sea in early January 2012, thanks to snow and ice blanketing the nearby land, and sea ice clinging to the shore. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite captured this natural-color image of the cloud streets on January 4, 2012. Air blowing over frigid ice and then warmer ocean water can lead to the development of parallel cylinders of spinning air. Above the upward cycle of these cylinders (rising air), small clouds form. Along the downward cycle (descending air), skies are clear. The resulting cloud formations resemble streets. This image shows that some of the cloud streets begin over the sea ice, but most of the clouds hover over the open ocean water. These streets are not perfectly straight, but curve to the east and west after passing over the sea ice. By lining up along the prevailing wind direction, the tiny clouds comprising the streets indicate the wind patterns around the time of their formation. NASA images courtesy LANCE/EOSDIS MODIS Rapid Response Team at NASA GSFC. Caption by Michon Scott. Instrument: Terra - MODIS. Credit: NASA Earth Observatory.
Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes
NASA Astrophysics Data System (ADS)
Ozcan, O.
2016-12-01
Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs bridge the gap between satellite-scale and field-scale applications and are now used in many application areas to acquire hyperspatial, high-temporal-resolution imagery far more quickly than conventional photogrammetry methods allow. UAVs have been used in various fields, such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, field monitoring, and monitoring of agricultural lands. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. This study aimed to assess the spatial accuracy of the images acquired from the integrated digital camera and the volumetric accuracy of the Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm, depending upon the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with GPS measurements showed average differences at the cm level. Larger values are found in areas where abrupt changes in the surface are present.
Jupiter's Northern Hemisphere in False Color (Time Set 3)
NASA Technical Reports Server (NTRS)
1997-01-01
Mosaic of Jupiter's northern hemisphere between 10 and 50 degrees latitude. Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the color and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including large white ovals, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two interacting vortices in the upper half of the mosaic is about 3500 kilometers.
This mosaic uses the Galileo imaging camera's three near-infrared wavelengths (756 nanometers, 727 nanometers, and 889 nanometers displayed in red, green, and blue) to show variations in cloud height and thickness. Light blue clouds are high and thin, reddish clouds are deep, and white clouds are high and thick. The clouds and haze over the ovals are high, extending into Jupiter's stratosphere. Dark purple most likely represents a high haze overlying a clear deep atmosphere. Galileo is the first spacecraft to distinguish cloud layers on Jupiter. North is at the top. The images are projected on a sphere, with features being foreshortened towards the north. The planetary limb runs along the right edge of the mosaic. Cloud patterns appear foreshortened as they approach the limb. The smallest resolved features are tens of kilometers in size. These images were taken on April 3, 1997, at a range of 1.4 million kilometers by the Solid State Imaging system (CCD) on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Astrophysics Data System (ADS)
Xu, Feng; van Harten, Gerard; Diner, David J.; Davis, Anthony B.; Seidel, Felix C.; Rheingans, Brian; Tosca, Mika; Alexandrov, Mikhail D.; Cairns, Brian; Ferrare, Richard A.; Burton, Sharon P.; Fenn, Marta A.; Hostetler, Chris A.; Wood, Robert; Redemann, Jens
2018-03-01
An optimization algorithm is developed to retrieve liquid water cloud properties including cloud optical depth (COD), droplet size distribution and cloud top height (CTH), and above-cloud aerosol properties including aerosol optical depth (AOD), single-scattering albedo, and microphysical properties from sweep-mode observations by the Jet Propulsion Laboratory's Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) instrument. The retrieval is composed of three major steps: (1) initial estimate of the mean droplet size distribution across the entire image of 80-100 km along track by 10-25 km across track from polarimetric cloudbow observations, (2) coupled retrieval of image-scale cloud and above-cloud aerosol properties by fitting the polarimetric data at all observation angles, and (3) iterative retrieval of 1-D radiative transfer-based COD and droplet size distribution at pixel scale (25 m) by establishing relationships between COD and droplet size and fitting the total radiance measurements. Our retrieval is tested using 134 AirMSPI data sets acquired during the National Aeronautics and Space Administration (NASA) field campaign ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES). The retrieved above-cloud AOD and CTH are compared to coincident HSRL-2 (High Spectral Resolution Lidar 2, NASA Langley Research Center) data, and COD and droplet size distribution parameters (effective radius reff and effective variance veff) are compared to coincident Research Scanning Polarimeter (RSP, NASA Goddard Institute for Space Studies) data. Mean absolute differences between AirMSPI and HSRL-2 retrievals of above-cloud AOD at 532 nm and CTH are 0.03 and <0.5 km, respectively. At RSP's footprint scale (~323 m), mean absolute differences between RSP and AirMSPI retrievals of COD, reff, and veff in the cloudbow area are 2.33, 0.69 μm, and 0.020, respectively. Neglect of smoke aerosols above cloud leads to an underestimate of the image-averaged COD by 15%.
Remote measurement of cloud microphysics and its influence in predicting high impact weather events
NASA Astrophysics Data System (ADS)
Bipasha, Paul S.; Jinya, John
2016-05-01
Understanding cloud microphysical processes and precisely retrieving the parameters that govern them are crucial for weather and climate prediction. Advanced remote sensing sensors and techniques offer an opportunity for monitoring micro-level developments in cloud structure. Using observations from the visible and near-infrared lidar onboard the CALIPSO satellite (part of the A-Train), the spatial variation of cloud structure has been studied over the tropical monsoon region. It is found that there is large variability in the cloud microphysical parameters, manifesting in distinct precipitation regimes. In particular, the severe storms over this region are driven by processes which range from the synoptic to the microphysical scale. Using INSAT-3D data, retrievals of cloud microphysical parameters such as effective radius (CER) and optical depth (COD) were carried out for tropical cyclone Phailin. It was observed that there is a general increase of CER in a top-down direction, characterizing the progressively increasing number and size of precipitation hydrometeors while approaching the cloud base. The distribution of CER relative to cloud top temperature for growing convective clouds has been investigated to reveal the evolution of the particles composing the clouds. It is seen that the relatively high concentration of large particles in the downdraft zone is closely related to the precipitation efficiency of the system. A similar study was also carried out using MODIS observations for cyclones over the Indian Ocean (2010-2013), in which we find that the mean effective radius is 24 microns with standard deviation 4.56, the mean optical depth is 21 with standard deviation 13.98, the mean cloud fraction is 0.92 with standard deviation 0.13, and the ice phase is dominant. Thus, remote observations of the microstructure of convective storms provide crucial information about their maintenance and the potential devastation likely to be associated with them. With synergistic observations from the A-Train, geostationary satellites, and future imaging spectroscopic sensors, a multi-dimensional, multi-scale exploration of cloud systems is anticipated, leading to accurate prediction of high-impact weather events.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view from the southwest looking northeast, from an altitude just above the high haze layer. The streaks in the lower cloud leading towards the hotspot are visible. The upper haze layer is mostly flat, with notable small peaks that can be matched with features in the lower cloud. In reality, these areas may represent a continuous vertical cloud column. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the southeast, from between the cloud layers and over the north center of the region. The tall white clouds in the lower cloud deck are probably much like large terrestrial thunderclouds. They may be regions where atmospheric water powers vertical convection over large horizontal distances. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the west, from between the cloud layers and over the patchy white clouds to the east of the hotspot. This is probably an area where moist convection is occurring over large horizontal distances, similar to the atmosphere over the equatorial ocean on Earth. The clouds are high and thick, and are observed to change rapidly over short time scales. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Measurement of optical blurring in a turbulent cloud chamber
NASA Astrophysics Data System (ADS)
Packard, Corey D.; Ciochetto, David S.; Cantrell, Will H.; Roggemann, Michael C.; Shaw, Raymond A.
2016-10-01
Earth's atmosphere can significantly impact the propagation of electromagnetic radiation, degrading the performance of imaging systems. Deleterious effects of the atmosphere include turbulence, absorption and scattering by particulates. Turbulence leads to blurring, while absorption attenuates the energy that reaches imaging sensors. The optical properties of aerosols and clouds also impact radiation propagation via scattering, resulting in decorrelation from unscattered light. Models have been proposed for calculating a point spread function (PSF) for aerosol scattering, providing a method for simulating the contrast and spatial detail expected when imaging through atmospheres with significant aerosol optical depth. However, these synthetic images and their predicating theory would benefit from comparison with measurements in a controlled environment. Recently, Michigan Technological University (MTU) has designed a novel laboratory cloud chamber. This multiphase, turbulent "Pi Chamber" is capable of pressures down to 100 hPa and temperatures from -55 to +55°C. Additionally, humidity and aerosol concentrations are controllable. These boundary conditions can be combined to form and sustain clouds in an instrumented laboratory setting for measuring the impact of clouds on radiation propagation. This paper describes an experiment to generate mixing and expansion clouds in supersaturated conditions with salt aerosols, and an example of measured imagery viewed through the generated cloud is shown. Aerosol and cloud droplet distributions measured during the experiment are used to predict scattering PSF and MTF curves, and a methodology for validating existing theory is detailed. Measured atmospheric inputs will be used to simulate aerosol-induced image degradation for comparison with measured imagery taken through actual cloud conditions. The aerosol MTF will be experimentally calculated and compared to theoretical expressions. The key result of this study is the proposal of a closure experiment for verification of theoretical aerosol effects using actual clouds in a controlled laboratory setting.
CEDIMS: cloud ethical DICOM image Mojette storage
NASA Astrophysics Data System (ADS)
Guédon, Jeanpierre; Evenou, Pierre; Tervé, Pierre; David, Sylvain; Béranger, Jérome
2012-02-01
DICOM images of patients will necessarily be stored in clouds. However, ethical constraints must apply. In this paper, a method which provides the two following conditions is presented: (1) the medical information is not readable by the cloud owner, since it is distributed across several clouds; (2) the medical information can be retrieved from any sufficient subset of clouds. In order to obtain this result with real-time processing, the Mojette transform is used. This paper reviews the interesting features of the Mojette transform in terms of information theory. Since only portions of the original DICOM files are stored in each cloud, their contents are not reachable. For instance, we use 4 different public clouds to save 4 different projections of each file, with the additional condition that any 3 of the 4 projections are enough to reconstruct the original file. Thus, even if a cloud is unavailable when the user wants to load a DICOM file, the other 3 provide enough information for real-time reconstruction. The paper presents an implementation on 3 actual clouds. For ethical reasons, we use a DICOM image spread over 3 public clouds to show the obtained confidentiality and possible real-time recovery.
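The storage property claimed above (any 3 of 4 cloud-hosted projections suffice to rebuild the file) is a redundancy scheme in the spirit of erasure coding. The sketch below is not the Mojette transform itself; it only illustrates the same 3-of-4 recovery behaviour with a simple XOR parity stripe, and all names and the sample payload are hypothetical.

```python
# Illustrative sketch of the "any 3 of 4 shares suffice" property using a simple XOR parity
# scheme. This is NOT the Mojette transform (a discrete Radon-based transform used in the
# paper); it only demonstrates the same storage/recovery behaviour across four clouds.
def split_into_shares(data: bytes):
    """Split data into 3 stripes plus 1 XOR parity stripe (4 shares total)."""
    pad = (-len(data)) % 3
    data += b"\x00" * pad
    n = len(data) // 3
    stripes = [data[i * n:(i + 1) * n] for i in range(3)]
    parity = bytes(a ^ b ^ c for a, b, c in zip(*stripes))
    return stripes + [parity], pad

def recover(shares, pad):
    """Rebuild the original data from any 3 of the 4 shares (one may be None)."""
    stripes, parity = shares[:3], shares[3]
    missing = [i for i, s in enumerate(stripes) if s is None]
    if missing:
        i = missing[0]
        others = [s for j, s in enumerate(stripes) if j != i]
        stripes[i] = bytes(p ^ a ^ b for p, a, b in zip(parity, *others))
    data = b"".join(stripes)
    return data[:len(data) - pad] if pad else data

shares, pad = split_into_shares(b"DICOM pixel data ...")
shares[1] = None                      # one cloud is unreachable
assert recover(shares, pad) == b"DICOM pixel data ..."
```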
Transitioning ISR architecture into the cloud
NASA Astrophysics Data System (ADS)
Lash, Thomas D.
2012-06-01
Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.
HUBBLE FINDS MANY BRIGHT CLOUDS ON URANUS
NASA Technical Reports Server (NTRS)
2002-01-01
A recent Hubble Space Telescope view reveals Uranus surrounded by its four major rings and by 10 of its 17 known satellites. This false-color image was generated by Erich Karkoschka using data taken on August 8, 1998, with Hubble's Near Infrared Camera and Multi-Object Spectrometer. Hubble recently found about 20 clouds - nearly as many clouds on Uranus as the previous total in the history of modern observations. The orange-colored clouds near the prominent bright band circle the planet at more than 300 mph (500 km/h), according to team member Heidi Hammel (MIT). One of the clouds on the right-hand side is brighter than any other cloud ever seen on Uranus. The colors in the image indicate altitude. Team member Mark Marley (New Mexico State University) reports that green and blue regions show where the atmosphere is clear and sunlight can penetrate deep into Uranus. In yellow and grey regions the sunlight reflects from a higher haze or cloud layer. Orange and red colors indicate very high clouds, such as cirrus clouds on Earth. The Hubble image is one of the first images revealing the precession of the brightest ring with respect to a previous image [LINK to PRC97-36a]. Precession makes the fainter part of the ring (currently on the upper right-hand side) slide around Uranus once every nine months. The fading is caused by ring particles crowding and hiding each other on one side of their eight-hour orbit around Uranus. The blue, green and red components of this false-color image correspond to exposures taken at near-infrared wavelengths of 0.9, 1.1, and 1.7 micrometers. Thus, regions on Uranus appearing blue, for example, reflect more sunlight at 0.9 micrometer than at the longer wavelengths. Apparent colors on Uranus are caused by absorption of methane gas in its atmosphere, an effect comparable to absorption in our atmosphere which can make distant clouds appear red. Credit: Erich Karkoschka (University of Arizona) and NASA
2012-06-13
ISS031-E-116058 (13 June 2012) --- Polar mesospheric clouds in the Northern Hemisphere are featured in this image photographed by an Expedition 31 crew member on the International Space Station. In both the Northern and Southern Hemisphere, during their respective late spring and early summer seasons, polar mesospheric clouds are at the peak of their visibility. Visible from the ground during twilight, from aircraft in flight, and from the International Space Station, they typically appear as delicate shining threads against the darkness of space, hence their other name of noctilucent or 'night-shining' clouds. On the same day this image was taken from the space station while it was passing over the night-darkened Tibetan Plateau, polar mesospheric clouds were also visible to aircraft flying above Canada. In addition to this still image, the space station crew took a time-lapse image sequence of polar mesospheric clouds several days earlier (June 5, 2012) while passing over western Asia; this is the first such sequence of images of the phenomena taken from orbit. Polar mesospheric clouds form between 76-85 kilometers above the Earth's surface, when there is sufficient water vapor at these high altitudes to freeze into ice crystals. The clouds are illuminated by the setting sun while the ground surface below is in darkness, lending them their night-shining properties. In addition to the illuminated tracery of polar mesospheric clouds trending across the center of the image, lower layers of the atmosphere are also illuminated; the lowest layer of the atmosphere, the stratosphere, is indicated by dim orange and red tones. While the exact cause of formation of polar mesospheric clouds is still debated (dust from meteors, global warming, and rocket exhaust have all been suggested as contributing factors), recent research suggests that changes in atmospheric gas composition or temperature have caused the clouds to become brighter over time.
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey
2014-05-01
Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation, one of the most important surface fluxes over sea and land. Massive estimates of the total cloud cover as well as the cloud amount for different layers of clouds are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. structure of cloud cover under broken-cloud conditions, parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images at high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special hardware and software extension of the package has been developed. These modules provide the explicit detection of the optimal moment for shooting. For the post-processing of sky images we developed software implementing an algorithm that filters the sunburn effect in cases of small and moderate cloud cover and broken-cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the color mixture for each point and introducing the so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection included more than 3000 images for different cloud conditions, accompanied by observations of standard parameters. The system is fully autonomous and includes a module for digital data collection on a hard disk. The system has been tested for a wide range of open-ocean cloud conditions, and we will demonstrate some pilot results of data processing and the physical interpretation of fractional cloud cover estimation.
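As a rough illustration of how an all-sky RGB image can be reduced to a cloud fraction, the sketch below uses a simple saturation-like criterion (cloudy pixels are close to gray, clear sky is strongly blue). The exact "grayness rate index" of the package is not specified in the abstract, so the index, threshold, and mask handling here are assumptions.

```python
# Conceptual sketch of a cloud-fraction estimate from an all-sky RGB image. The package's
# exact "grayness rate index" is not given in the abstract; this stand-in flags pixels with a
# small spread between the R, G and B channels (gray/white) as cloud.
import numpy as np

def cloud_fraction(rgb, grayness_threshold=0.12, sky_mask=None):
    """rgb: float array (H, W, 3) scaled to [0, 1]; sky_mask: optional bool array (H, W)."""
    spread = np.max(rgb, axis=-1) - np.min(rgb, axis=-1)          # 0 for perfectly gray pixels
    grayness = spread / np.maximum(np.max(rgb, axis=-1), 1e-6)    # saturation-like index
    cloudy = grayness < grayness_threshold                         # gray/white -> cloud
    if sky_mask is not None:
        return float(cloudy[sky_mask].mean())
    return float(cloudy.mean())

# Usage with a synthetic half-blue, half-white image
img = np.zeros((100, 100, 3)); img[:, :50] = [0.2, 0.4, 0.9]; img[:, 50:] = 0.85
print(cloud_fraction(img))   # ~0.5
```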
Teaching Cybersecurity Using the Cloud
ERIC Educational Resources Information Center
Salah, Khaled; Hammoud, Mohammad; Zeadally, Sherali
2015-01-01
Cloud computing platforms can be highly attractive to conduct course assignments and empower students with valuable and indispensable hands-on experience. In particular, the cloud can offer teaching staff and students (whether local or remote) on-demand, elastic, dedicated, isolated, (virtually) unlimited, and easily configurable virtual machines.…
Measuring cloud thermodynamic phase with shortwave infrared imaging spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David R.; McCubbin, Ian; Gao, Bo Cai
Shortwave infrared imaging spectroscopy enables accurate remote mapping of cloud thermodynamic phase at high spatial resolution. We describe a measurement strategy to exploit signatures of liquid and ice absorption in cloud top apparent reflectance spectra from 1.4 to 1.8 μm. This signal is generally insensitive to confounding factors such as solar angles, view angles, and surface albedo. We first evaluate the approach in simulation and then apply it to airborne data acquired in the CalWater-2/ACAPEX campaign of winter 2015. Here NASA's "Classic" Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) remotely observed diverse cloud formations while the U.S. Department of Energy ARM Aerial Facility G-1 aircraft measured cloud integral and microphysical properties in situ. Finally, the coincident measurements demonstrate good separation of the thermodynamic phases for relatively homogeneous clouds.
1990-02-19
Range: 60,000 miles. These images are two versions of a near-infrared map of lower-level clouds on the night side of Venus, obtained by the Near Infrared Mapping Spectrometer aboard the Galileo spacecraft. The map shows the turbulent, cloudy middle atmosphere some 30-33 miles above the surface, 6-10 miles below the visible cloud tops. The image to the left shows the radiant heat from the lower atmosphere (about 400 degrees F) shining through the sulfuric acid clouds, which appear as much as 10 times darker than the bright gaps between clouds. This cloud layer is at about 170 degrees F, at a pressure about 1/2 Earth's atmospheric pressure. About 2/3 of the dark hemisphere is visible, centered on longitude 350 West, with bright slivers of daylit high clouds visible at top and bottom left. The right image, a modified negative, represents what scientists believe would be the visual appearance of this mid-level cloud deck in daylight, with the clouds reflecting sunlight instead of blocking out infrared from the hot planet and lower atmosphere. Near the equator, the clouds appear fluffy and blocky; farther north, they are stretched out into east-west filaments by winds estimated at more than 150 mph, while the poles are capped by thick clouds at this altitude. The Near Infrared Mapping Spectrometer (NIMS) on Galileo is a combined mapping (imaging) and spectral instrument. It can sense 408 contiguous wavelengths from 0.7 microns (deep red) to 5.2 microns, and can construct a map or image by mechanical scanning. It can spectroscopically analyze atmospheres and surfaces and construct thermal and chemical maps.
Coastal Fog, South Peruvian Coast at Pisco
NASA Technical Reports Server (NTRS)
2002-01-01
Coastal fog commonly drapes the Peruvian coast. This image captures complex interactions between land, sea, and atmosphere along the southern Peruvian coast. When Shuttle astronauts took the image in February of 2002, the layers of coastal fog and stratus were being progressively scoured away by brisk south to southeast winds. Remnants of the cloud deck banked against the larger, obstructing headlands like Peninsula Paracas and Isla Sangayan, giving the prominent 'white comma' effect. Southerlies also produced ripples of internal gravity waves in the clouds offshore, where warm, dry air aloft interacts with a thinning layer of cool, moist air near the sea surface on the outer edge of the remaining cloud bank. South of Peninsula Paracas, the small headlands channeled the clouds into streaks; local horizontal vortices caused by the headlands provided enough lift to give the clouds points of origin in some bays. Besides the shelter of the peninsula, the Bahia de Pisco appears to be cloud-free due to a dry, offshore flow down the valley of the Rio Ica. The STS-109 crew took image STS109-730-80 in February 2002. The image is provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
E4 True and false color hot spot mosaic
NASA Technical Reports Server (NTRS)
1997-01-01
True and false color views of Jupiter from NASA's Galileo spacecraft show an equatorial 'hotspot' on Jupiter. These images cover an area 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles). The top mosaic combines the violet and near-infrared continuum filter images to create an image similar to how Jupiter would appear to human eyes. Differences in coloration are due to the composition and abundances of trace chemicals in Jupiter's atmosphere. The bottom mosaic uses Galileo's three near-infrared wavelengths (displayed in red, green, and blue) to show variations in cloud height and thickness. Bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the deep cloud with an overlying thin haze. The light blue region to the left is covered by a very high haze layer. The multicolored region to the right has overlapping cloud layers of different heights. Galileo is the first spacecraft to distinguish cloud layers on Jupiter.
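The false-colour scheme used in this mosaic, together with the two-layer height model described in the accompanying visualization captions above (haze height proportional to the 889 nm reflectivity, lower-cloud height proportional to the 727/756 nm ratio), can be sketched as a simple composite. The input arrays, scaling constants, and synthetic "hotspot" below are illustrative assumptions, not Galileo calibration values.

```python
# Sketch of the two-layer height model and false-colour scheme used in these visualizations.
# Input arrays are assumed to be co-registered reflectivity mosaics scaled to [0, 1]; the
# scaling constants are arbitrary, as in the visualization itself.
import numpy as np

def two_layer_render(r756, r727, r889, exaggeration=25.0, layer_gap=50.0):
    haze_height = exaggeration * r889                             # high haze layer
    cloud_height = exaggeration * r727 / np.maximum(r756, 1e-6)   # lower tropospheric cloud
    haze_height = haze_height + layer_gap                         # arbitrary layer separation
    false_color = np.dstack([r756, r727, r889])                   # R, G, B -> 756, 727, 889 nm
    return haze_height, cloud_height, np.clip(false_color, 0.0, 1.0)

# Tiny synthetic example: a "hotspot" (dark at 756/727 nm, thin haze at 889 nm) in the centre
r756 = np.full((64, 64), 0.6); r727 = np.full((64, 64), 0.5); r889 = np.full((64, 64), 0.3)
r756[28:36, 28:36] = 0.15; r727[28:36, 28:36] = 0.10
haze, cloud, rgb = two_layer_render(r756, r727, r889)
print(cloud.min(), cloud.max(), rgb.shape)
```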
North is at the top. The mosaic covers latitudes 1 to 10 degrees and is centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging camera system aboard Galileo. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at: http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at: http://www.jpl.nasa.gov/galileo/sepo.
NASA Astrophysics Data System (ADS)
de Michele, Marcello; Raucoules, Daniel; Corradini, Stefano; Merucci, Luca; spinetti, claudia
2017-04-01
Accurate and spatially-detailed knowledge of Volcanic Cloud Top Height (VCTH) and velocity is crucial in volcanology. As an example, the dispersion of ash and gas in the atmosphere, and their impact and lifetime around the globe, depend greatly on the injection altitude. The VCTH is critical for ash dispersion modelling and air traffic security. Furthermore, the volcanic plume height during explosive volcanism is the primary parameter for estimating mass eruption rate. Satellite remote sensing offers a comprehensive and safe way to estimate VCTH. Recently, it has been shown that high spatial resolution optical imagery from the Landsat-8 OLI sensor can be used to extract Volcanic Cloud Top Height with a precision of 250 meters and an accuracy of 300 m (de Michele et al., 2016). This method allows extraction of a Plume Elevation Model (PEM) by jointly measuring the parallax between two optical bands acquired with a time lag varying from 0.1 to 2.5 seconds, depending on the bands chosen and the sensors employed. The measured parallax is biased because the volcanic cloud moves between the two image acquisitions, even if the time lag is short. The precision of our measurements is enhanced by compensating for this motion: the velocity of the volcanic cloud is measured in the perpendicular-to-epipolar direction (which is height independent) and used to correct the initial parallax measurement. In this study, we push this methodology forward. We apply it to very high spatial resolution Pléiades data (1 m pixel spacing) provided by the French Space Agency (CNES). We apply the method to Mount Etna during the 05 September 2015 eruptive episode and to the Mount Ontake eruption occurring on 30 September 2014. We are able to extract VCTH as a PEM with high spatial resolution and improved precision. Since Pléiades has an improved revisit time (1 day), our method has potential for routine monitoring of volcanic plumes in clear-sky conditions and when the VCTH is higher than meteorological clouds.
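The central geometric step, converting the band-to-band parallax into a cloud top height after removing the plume-motion contribution, can be sketched as follows. The base-to-height ratio, time lag, and displacement values are placeholders rather than Pléiades or Landsat-8 OLI constants, and the along-epipolar plume velocity is assumed to have been estimated separately from the height-independent perpendicular component.

```python
# Sketch of the parallax-to-height step: the apparent shift between two bands acquired a
# fraction of a second apart contains both a stereo (height) term and a plume-motion term.
# Numerical constants and the base-to-height ratio are placeholders.
def cloud_top_height(parallax_along_m, plume_speed_along_ms, dt_s, base_to_height_ratio):
    """
    parallax_along_m     : measured shift along the epipolar direction (metres on the ground)
    plume_speed_along_ms : along-epipolar plume speed (m/s), assumed estimated from the
                           height-independent perpendicular displacement plus a direction assumption
    dt_s                 : time lag between the two band acquisitions (seconds)
    base_to_height_ratio : effective stereo base-to-height ratio of the two band viewing geometries
    """
    stereo_parallax = parallax_along_m - plume_speed_along_ms * dt_s  # remove motion contribution
    return stereo_parallax / base_to_height_ratio                     # height above reference surface

# Placeholder numbers: 9 m shift, 10 m/s plume speed, 0.5 s lag, B/H = 0.002
print(cloud_top_height(9.0, 10.0, 0.5, 0.002))   # -> 2000.0 m
```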
2009-06-03
Lots of clouds are visible in this infrared image of Saturn's moon Titan. These clouds form and move much like those on Earth, but in a much slower, more lingering fashion, new results from NASA's Cassini spacecraft show. Scientists have monitored Titan's atmosphere for three-and-a-half years, between July 2004 and December 2007, and observed more than 200 clouds. The way these clouds are distributed around Titan matches scientists' global circulation models. The only exception is timing—clouds are still noticeable in the southern hemisphere while fall is approaching. Three false-color images make up this mosaic and show the clouds at 40 to 50 degrees mid-latitude. The images were taken by Cassini's visual and infrared mapping spectrometer during a close flyby of Titan on Sept. 7, 2006, known as T17. For a similar view see PIA12005. Each image is a color composite, with red shown at the 2-micron wavelength, green at 1.6 microns, and blue at 2.8 microns. An infrared color mosaic is also used as a background (red at 5 microns, green at 2 microns and blue at 1.3 microns). The characteristic elongated mid-latitude clouds, which are easily visible in bright bluish tones are still active even late into 2006-2007. According to climate models, these clouds should have faded out since 2005. http://photojournal.jpl.nasa.gov/catalog/PIA12004
Marine Boundary Layer Cloud Properties From AMF Point Reyes Satellite Observations
NASA Technical Reports Server (NTRS)
Jensen, Michael; Vogelmann, Andrew M.; Luke, Edward; Minnis, Patrick; Miller, Mark A.; Khaiyer, Mandana; Nguyen, Louis; Palikonda, Rabindra
2007-01-01
Cloud Diameter, C_D, offers a simple measure of Marine Boundary Layer (MBL) cloud organization. The diurnal cycle of cloud-physical properties and C_D at Pt. Reyes are consistent with previous work. The time series of C_D can be used to identify distinct mesoscale organization regimes within the Pt. Reyes observation period.
Preparation of Ultracold Atom Clouds at the Shot Noise Level.
Gajdacz, M; Hilliard, A J; Kristensen, M A; Pedersen, P L; Klempt, C; Arlt, J J; Sherson, J F
2016-08-12
We prepare number-stabilized ultracold atom clouds through the real-time analysis of nondestructive images and the application of feedback. In our experiments, the atom number N∼10^{6} is determined by high precision Faraday imaging with uncertainty ΔN below the shot noise level, i.e., ΔN < √N.
2009-06-03
This infrared image of Saturn's moon Titan shows a large burst of clouds in the moon's south polar region. These clouds form and move much like those on Earth, but in a much slower, more lingering fashion, new results from NASA's Cassini Spacecraft show. This image is a color composite, with red shown at a 5-micron wavelength, green at 2.7 microns, and blue at 2 microns. An infrared color mosaic is also used as a background image (red at 5 microns, green at 2 microns, blue at 1.3 microns). The images were taken by Cassini's visual and infrared mapping spectrometer during a flyby of Titan on March 26, 2007, known as T27. For a similar view see PIA12004. Titan's southern hemisphere still shows a very active meteorology (the cloud appears in white-reddish tones) even in 2007. According to climate models, these clouds should have faded out since 2005. Scientists have monitored Titan's atmosphere for three-and-a-half years, between July 2004 and December 2007, and observed more than 200 clouds. The way these clouds are distributed around Titan matches scientists' global circulation models. The only exception is timing—clouds are still noticeable in the southern hemisphere while fall is approaching. http://photojournal.jpl.nasa.gov/catalog/PIA12005
Long-term Behaviour Of Venus Winds At Cloud Level From Virtis/vex Observations
NASA Astrophysics Data System (ADS)
Hueso, Ricardo; Peralta, J.; Sánchez-Lavega, A.; Pérez-Hoyos, S.; Piccioni, G.; Drossart, P.
2009-09-01
The Venus Express (VEX) mission has been in orbit around Venus for more than three years now. The VIRTIS instrument onboard VEX observes Venus in two channels (visible and infrared), obtaining spectra and multi-wavelength images of the planet. Images in the ultraviolet range are used to study the upper cloud at 66 km, while images in the infrared (1.74 μm) map the opacity of the lower cloud deck at 48 km. Here we present an analysis of the overall dynamics of Venus' atmosphere at both levels using observations that cover a large fraction of the VIRTIS dataset. We will present our latest results concerning the zonal winds, the overall stability of the lower cloud deck motions and the variability in the upper cloud. Meridional winds are also observed in the upper and lower cloud in the UV and IR images obtained with VIRTIS. While the upper clouds present a net meridional motion consistent with the upper branch of a Hadley cell, the lower clouds present more irregular, variable and less intense motions in the meridional direction. Acknowledgements: This work has been funded by Spanish MEC AYA2006-07735 with FEDER support and Grupos Gobierno Vasco IT-464-07. RH acknowledges a "Ramón y Cajal" contract from MEC.
Multilayered Clouds Identification and Retrieval for CERES Using MODIS
NASA Technical Reports Server (NTRS)
Sun-Mack, Sunny; Minnis, Patrick; Chen, Yan; Yi, Yuhong; Huang, Jainping; Lin, Bin; Fan, Alice; Gibson, Sharon; Chang, Fu-Lung
2006-01-01
Traditionally, analyses of satellite data have been limited to interpreting the radiances in terms of single-layer clouds. Generally, this results in significant errors in the retrieved properties for multilayered cloud systems. Two techniques for detecting overlapped clouds and retrieving the cloud properties using satellite data are explored to help address the need for better quantification of cloud vertical structure. The first technique was developed using multispectral imager data with secondary imager products (infrared brightness temperature differences, BTD). The other method uses microwave (MWR) data. The use of BTD, the 11-12 micrometer brightness temperature difference, in conjunction with tau, the retrieved visible optical depth, was suggested by Kawamoto et al. (2001) and used by Pavolonis et al. (2004) as a means to detect multilayered clouds. Combining visible (VIS; 0.65 micrometer) and infrared (IR) retrievals of cloud properties with microwave (MW) retrievals of cloud water temperature Tw and liquid water path LWP from satellite microwave imagers appears to be a fruitful approach for detecting and retrieving overlapped clouds (Lin et al., 1998; Ho et al., 2003; Huang et al., 2005). The BTD method is limited to optically thin cirrus over low clouds, while the MWR method is limited to ocean areas only. With the availability of VIS and IR data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and MW data from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), both on Aqua, it is now possible to examine both approaches simultaneously. This paper explores the use of the BTD method as applied to MODIS and AMSR-E data taken from the Aqua satellite over non-polar ocean surfaces.
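A minimal sketch of a BTD-based overlap test of the kind discussed above is given below: thin cirrus over low cloud tends to combine an elevated 11-12 micrometer brightness temperature difference with a moderate-to-large retrieved visible optical depth. The thresholds are illustrative assumptions, not the operational CERES/MODIS values.

```python
# Hedged sketch of a brightness-temperature-difference (BTD) overlap test. Thresholds are
# illustrative placeholders only.
import numpy as np

def multilayer_flag(bt11_k, bt12_k, visible_tau,
                    btd_threshold_k=2.5, tau_threshold=4.0):
    """Return a boolean mask flagging possible cirrus-over-low-cloud pixels."""
    btd = bt11_k - bt12_k
    return (btd > btd_threshold_k) & (visible_tau > tau_threshold)

bt11 = np.array([285.0, 265.0, 250.0])
bt12 = np.array([284.5, 261.5, 249.2])
tau  = np.array([1.0,   12.0,  30.0])
print(multilayer_flag(bt11, bt12, tau))   # [False  True False]
```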
Tropical Depression Debbie in the Atlantic
2006-08-22
These images show Tropical Depression Debbie in the Atlantic, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite on August 22, 2006. This AIRS image shows the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. The infrared signal does not penetrate through clouds. Where there are no clouds, the AIRS instrument reads the infrared signal from the surface of the Earth, revealing warmer temperatures (red). At the time the data from which these images were made were taken, the eye had not yet opened, but the storm is now well organized. The location of the future eye appears as a circle at 275 K brightness temperature in the microwave image, just to the SE of the Azores. http://photojournal.jpl.nasa.gov/catalog/PIA00508
Li, Weilin; Wen, Jian; Xiao, Zhongliang; Xu, Shengxia
2018-02-22
To assess the health conditions of tree trunks, it is necessary to estimate the layers and anomalies of their internal structure. The main objective of this paper is to investigate the internal part of tree trunks considering their irregular contour. In this respect, we used ground penetrating radar (GPR) for non-invasive detection of defects and deteriorations in living tree trunks. The Hilbert transform algorithm and the reflection amplitudes were used to estimate the relative dielectric constant. The point cloud data technique was applied as well to extract the irregular contours of trunks. The feasibility and accuracy of the methods were examined through numerical simulations, laboratory and field measurements. The results demonstrated that the applied methodology allowed for accurate characterizations of the internal inhomogeneity. Furthermore, the point cloud technique resolved the trunk well by providing high-precision coordinate information. This study also demonstrated that cross-section tomography provided images with high resolution and accuracy. These integrated techniques thus proved to be promising for observing tree trunks and other cylindrical objects. The applied approaches offer great promise for future 3D reconstruction of tomographic images with radar waves.
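Two of the processing elements mentioned above can be sketched generically: the Hilbert transform gives the instantaneous envelope of a GPR trace, and a standard surface-reflection relation converts the ratio of the ground-return amplitude to a metal-plate (perfect reflector) amplitude into a relative dielectric constant. Whether this matches the authors' exact formulation is not stated in the abstract, so treat the sketch as the generic textbook method with synthetic inputs.

```python
# (1) Hilbert transform envelope of a radar trace; (2) surface-reflection estimate of the
# relative dielectric constant. Generic textbook formulation, synthetic data.
import numpy as np
from scipy.signal import hilbert

def trace_envelope(trace):
    """Instantaneous amplitude of a radar trace via the analytic signal."""
    return np.abs(hilbert(trace))

def relative_permittivity(surface_amp, metal_plate_amp):
    """Surface-reflection method: R = A_surface / A_plate, eps_r = ((1 + |R|) / (1 - |R|))**2."""
    r = abs(surface_amp / metal_plate_amp)
    return ((1.0 + r) / (1.0 - r)) ** 2

t = np.linspace(0, 20e-9, 512)
trace = np.sin(2 * np.pi * 900e6 * t) * np.exp(-((t - 8e-9) / 2e-9) ** 2)  # synthetic wavelet
print(trace_envelope(trace).max(), relative_permittivity(0.35, 1.0))
```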
UAS-based automatic bird count of a common gull colony
NASA Astrophysics Data System (ADS)
Grenzdörffer, G. J.
2013-08-01
The standard procedure to count birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High-resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (±5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Based on the experience of 2011, the automatic bird count of 2012 became more efficient and accurate. For 2012, 1938 birds were counted with an accuracy of approximately ±3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud. The point cloud was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.
NASA Technical Reports Server (NTRS)
Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.
2012-01-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.
NASA Astrophysics Data System (ADS)
Smith, W. L., Jr.; Spangenberg, D.; Fleeger, C.; Sun-Mack, S.; Chen, Y.; Minnis, P.
2016-12-01
Determining accurate cloud properties horizontally and vertically over a full range of time and space scales is currently next to impossible using data from either active or passive remote sensors or from modeling systems. Passive satellite imagers provide horizontal and temporal resolution of clouds, but little direct information on vertical structure. Active sensors provide vertical resolution but limited spatial and temporal coverage. Cloud models embedded in NWP can produce realistic clouds but often not at the right time or location. Thus, empirical techniques that integrate information from multiple observing and modeling systems are needed to more accurately characterize clouds and their impacts. Such a strategy is employed here in a new cloud water content profiling technique developed for application to satellite imager cloud retrievals based on VIS, IR and NIR radiances. Parameterizations are developed to relate imager retrievals of cloud top phase, optical depth, effective radius and temperature to ice and liquid water content profiles. The vertical structure information contained in the parameterizations is characterized climatologically from cloud model analyses, aircraft observations, ground-based remote sensing data, and from CloudSat and CALIPSO. Thus, realistic cloud-type dependent vertical structure information (including guidance on cloud phase partitioning) circumvents poor assumptions regarding vertical homogeneity that plague current passive satellite retrievals. This paper addresses mixed phase cloud conditions for clouds with glaciated tops including those associated with convection and mid-latitude storm systems. Novel outcomes of our approach include (1) simultaneous retrievals of ice and liquid water content and path, which are validated with active sensor, microwave and in-situ data, and yield improved global cloud climatologies, and (2) new estimates of super-cooled LWC, which are demonstrated in aviation safety applications and validated with icing PIREPS. The initial validation is encouraging for single-layer cloud conditions. More work is needed to test and refine the method for global application in a wider range of cloud conditions. A brief overview of our current method, applications, verification, and plans for future work will be presented.
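For orientation, the simplest link between imager retrievals and column liquid water is the well-known relation LWP ≈ (2/3) ρ_w r_eff τ for a vertically homogeneous cloud; the parameterizations described above go further and distribute the water vertically using climatological structure. The linear ramp in the sketch below is purely illustrative.

```python
# Orientation sketch: LWP from optical depth and effective radius, then an illustrative
# vertical distribution. The paper's parameterizations use climatological profiles instead
# of the simple linear ramp shown here.
import numpy as np

RHO_WATER = 1000.0  # kg m-3

def liquid_water_path(tau, r_eff_m):
    """LWP in kg m-2 for a vertically homogeneous liquid cloud."""
    return (2.0 / 3.0) * RHO_WATER * r_eff_m * tau

def lwc_profile(lwp_kg_m2, cloud_base_m, cloud_top_m, n_layers=20):
    """Distribute LWP over equal-thickness layers with an illustrative linear ramp."""
    dz = (cloud_top_m - cloud_base_m) / n_layers
    z = cloud_base_m + dz * (np.arange(n_layers) + 0.5)   # layer mid-points
    weights = (z - cloud_base_m)
    weights = weights / weights.sum()
    return z, lwp_kg_m2 * weights / dz                    # LWC in kg m-3 per layer

lwp = liquid_water_path(tau=12.0, r_eff_m=10e-6)          # 0.08 kg m-2 (80 g m-2)
z, lwc = lwc_profile(lwp, 500.0, 1500.0)
print(round(lwp, 3), round(float(lwc.sum() * 50.0), 3))   # 50 m layers; integral recovers the LWP
```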
Virtual Business Operating Environment in the Cloud: Conceptual Architecture and Challenges
NASA Astrophysics Data System (ADS)
Nezhad, Hamid R. Motahari; Stephenson, Bryan; Singhal, Sharad; Castellanos, Malu
Advances in service oriented architecture (SOA) have brought us close to the once imaginary vision of establishing and running a virtual business, a business in which most or all of its business functions are outsourced to online services. Cloud computing offers a realization of SOA in which IT resources are offered as services that are more affordable, flexible and attractive to businesses. In this paper, we briefly study advances in cloud computing, and discuss the benefits of using cloud services for businesses and trade-offs that they have to consider. We then present 1) a layered architecture for the virtual business, and 2) a conceptual architecture for a virtual business operating environment. We discuss the opportunities and research challenges that are ahead of us in realizing the technical components of this conceptual architecture. We conclude by giving the outlook and impact of cloud services on both large and small businesses.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
The EOS CERES Global Cloud Mask
NASA Technical Reports Server (NTRS)
Berendes, T. A.; Welch, R. M.; Trepte, Q.; Schaaf, C.; Baum, B. A.
1996-01-01
To detect long-term climate trends, it is essential to produce long-term and consistent data sets from a variety of different satellite platforms. With current global cloud climatology data sets, such as the International Satellite Cloud Climatology Project (ISCCP) or CLAVR (Clouds from Advanced Very High Resolution Radiometer), one of the first processing steps is to determine whether an imager pixel is obstructed between the satellite and the surface, i.e., determine a cloud 'mask.' A cloud mask is essential to studies monitoring changes over ocean, land, or snow-covered surfaces. As part of the Earth Observing System (EOS) program, a series of platforms will be flown beginning in 1997 with the Tropical Rainfall Measuring Mission (TRMM) and subsequently the EOS-AM and EOS-PM platforms in following years. The cloud imager on TRMM is the Visible/Infrared Sensor (VIRS), while the Moderate Resolution Imaging Spectroradiometer (MODIS) is the imager on the EOS platforms. To be useful for long-term studies, a cloud masking algorithm should produce consistent results between existing (AVHRR) data and future VIRS and MODIS data. The present work outlines both existing and proposed approaches to detecting cloud using multispectral narrowband radiance data. Clouds generally are characterized by higher albedos and lower temperatures than the underlying surface. However, there are numerous conditions when this characterization is inappropriate, most notably over snow and ice. Of the cloud types, cirrus, stratocumulus and cumulus are the most difficult to detect. Other problems arise when analyzing data from sun-glint areas over oceans or lakes, over deserts, or over regions containing numerous fires and smoke. The cloud mask effort builds upon the operational experience of several groups, which will now be discussed.
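The basic test described above (a pixel is provisionally cloudy when it is both brighter and colder than the expected clear-sky background) can be sketched as follows. The thresholds and clear-sky values are placeholders; operational masks add many surface- and scene-dependent tests for snow, sun glint, night, and other difficult cases.

```python
# Minimal sketch of a bright-and-cold cloud test. Thresholds are placeholders only.
import numpy as np

def simple_cloud_mask(vis_reflectance, bt11_k, clear_reflectance, clear_bt11_k,
                      refl_offset=0.08, bt_offset_k=4.0):
    brighter = vis_reflectance > clear_reflectance + refl_offset
    colder = bt11_k < clear_bt11_k - bt_offset_k
    return brighter & colder

refl = np.array([0.10, 0.45, 0.30])
bt11 = np.array([288.0, 255.0, 287.0])
print(simple_cloud_mask(refl, bt11, clear_reflectance=0.12, clear_bt11_k=290.0))
# [False  True False] -- the bright, cold pixel is flagged; the bright-but-warm pixel is left clear
```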
2008-05-27
Bright puffs and ribbons of cloud drift lazily through Saturn's murky skies. In contrast to the bold red, orange and white clouds of Jupiter, Saturn's clouds are overlain by a thick layer of haze. The visible cloud tops on Saturn are deeper in its atmosphere due to the planet's cooler temperatures. This view looks toward the unilluminated side of the rings from about 18 degrees above the ringplane. Images taken using red, green and blue spectral filters were combined to create this natural color view. The images were acquired with the Cassini spacecraft wide-angle camera on April 15, 2008 at a distance of approximately 1.5 million kilometers (906,000 miles) from Saturn. Image scale is 84 kilometers (52 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA09910
Remote Sensing of Water Vapor and Thin Cirrus Clouds using MODIS Near-IR Channels
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Kaufman, Yoram J.
2001-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS), a major facility instrument on board the Terra spacecraft, was successfully launched into space in December of 1999. MODIS has several near-IR channels within and around the 0.94 micrometer water vapor bands for remote sensing of integrated atmospheric water vapor over land and above clouds. MODIS also has a special near-IR channel centered at 1.375 micrometers with a width of 30 nm for remote sensing of cirrus clouds. In this paper, we briefly describe the physical principles behind remote sensing of water vapor and cirrus clouds using these channels. We also present sample water vapor images and cirrus cloud images obtained from MODIS data.
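As a hedged illustration of the two techniques, the sketch below derives a water vapour amount from the ratio of the 0.94 micrometer absorbing channel to a nearby window channel using a transmittance model of the form T = exp(-(a + b√W)), and flags cirrus when the 1.375 micrometer reflectance exceeds a small threshold. The coefficients, channel pairing, and threshold are assumptions, not the operational MODIS values.

```python
# Two-channel ratio water vapour estimate and a 1.375 um cirrus flag. Coefficients and the
# threshold are placeholders, not operational MODIS values.
import numpy as np

def precipitable_water_cm(rho_094, rho_window, a=0.02, b=0.65):
    t = np.clip(rho_094 / rho_window, 1e-6, 1.0)      # two-way transmittance estimate
    return ((-np.log(t) - a) / b) ** 2                 # invert T = exp(-(a + b*sqrt(W)))

def cirrus_flag(rho_1375, threshold=0.01):
    """Low-level water vapour absorbs surface reflection at 1.375 um; high values imply cirrus."""
    return rho_1375 > threshold

print(precipitable_water_cm(rho_094=0.18, rho_window=0.30))   # precipitable water in cm (placeholder coefficients)
print(cirrus_flag(np.array([0.002, 0.08])))                   # [False  True]
```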
NASA Astrophysics Data System (ADS)
Kim, Hye-Won; Yeom, Jong-Min; Shin, Daegeun; Choi, Sungwon; Han, Kyung-Soo; Roujean, Jean-Louis
2017-08-01
In this study, a new assessment of thin cloud detection using bidirectional reflectance distribution function (BRDF) model-based background surface reflectance was undertaken by interpreting surface spectra characterized using the Geostationary Ocean Color Imager (GOCI) over a land surface area. Unlike cloud detection over the ocean, the detection of cloud over land surfaces is difficult due to the complicated surface scattering characteristics, which vary among land surface types. Furthermore, in the case of thin clouds, in which the surface and cloud radiation are mixed, it is difficult to detect the clouds in both the land and atmospheric fields. Therefore, to interpret the background surface reflectance, especially underneath cloud, the semiempirical BRDF model was used to simulate surface reflectance, accounting for the solar angle-dependent geostationary sensor geometry. For quantitative validation, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data were used for comparison with the proposed cloud masking result. As a result, the new cloud masking scheme achieved a high probability of detection (POD = 0.82) compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) (POD = 0.808) for all cloud cases. In particular, the agreement between the CALIPSO cloud product and the new GOCI cloud mask was over 94% when detecting thin cloud (e.g., altostratus and cirrus) from January 2014 to June 2015. This result is relatively high in comparison with the result from the MODIS Collection 6 cloud mask product (MYD35).
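The headline validation metric quoted above, the probability of detection, comes from a cloud/no-cloud contingency table against the CALIPSO reference; a minimal sketch with synthetic counts is shown below.

```python
# Probability of detection (POD) from a cloud/no-cloud contingency table. Counts are synthetic,
# not the study's numbers.
import numpy as np

def probability_of_detection(mask, reference):
    """POD = hits / (hits + misses), with the reference (e.g., CALIPSO) taken as truth."""
    mask, reference = np.asarray(mask, bool), np.asarray(reference, bool)
    hits = np.sum(mask & reference)
    misses = np.sum(~mask & reference)
    return hits / (hits + misses)

reference = np.array([1, 1, 1, 1, 0, 0, 1, 0], dtype=bool)   # cloudy according to CALIPSO
goci_mask = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)   # cloudy according to the new mask
print(round(float(probability_of_detection(goci_mask, reference)), 2))   # 0.8
```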
NASA Technical Reports Server (NTRS)
Trepte, Q. Z.; Minnis, P.; Heck, R. W.; Palikonda, R.
2005-01-01
Cloud detection using satellite measurements presents a big challenge near the terminator, where the visible (VIS; 0.65 μm) channel becomes less reliable and the reflected solar component of the solar infrared 3.9-μm channel reaches very low signal-to-noise ratio levels. As a result, clouds are underestimated near the terminator and at night over land and ocean in previous Atmospheric Radiation Measurement (ARM) Program cloud retrievals using Geostationary Operational Environmental Satellite (GOES) imager data. Cloud detection near the terminator has always been a challenge. For example, comparisons between the CLAVR-x (Clouds from Advanced Very High Resolution Radiometer (AVHRR)) cloud coverage and Geoscience Laser Altimeter System (GLAS) measurements north of 60 degrees N indicate significant amounts of missing clouds from AVHRR because this part of the world was near the day/night terminator viewed by AVHRR. Comparisons between MODIS cloud products and GLAS in the same regions also show the same difficulty in the MODIS cloud retrieval (Pavolonis and Heidinger 2005). Consistent detection of clouds at all times of day is needed to provide reliable cloud and radiation products for ARM and other research efforts involving the modeling of clouds and their interaction with the radiation budget. To minimize inconsistencies between daytime and nighttime retrievals, this paper develops an improved twilight and nighttime cloud mask using GOES-9, 10, and 12 imager data over the ARM sites and the continental United States (CONUS).
Arctic Clouds Infrared Imaging Field Campaign Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, J. A.
2016-03-01
The Infrared Cloud Imager (ICI), a passive thermal imaging system, was deployed at the North Slope of Alaska site in Barrow, Alaska, from July 2012 to July 2014 for measuring spatial-temporal cloud statistics. Thermal imaging of the sky from the ground provides high radiometric contrast during night and polar winter when visible sensors and downward-viewing thermal sensors experience low contrast. In addition to demonstrating successful operation in the Arctic for an extended period and providing data for Arctic cloud studies, a primary objective of this deployment was to validate novel instrument calibration algorithms that will allow more compact ICI instruments to be deployed without the added expense, weight, size, and operational difficulty of a large-aperture onboard blackbody calibration source. This objective was successfully completed with a comparison of the two-year data set calibrated with and without the onboard blackbody. The two different calibration methods produced daily-average cloud amount data sets with correlation coefficient = 0.99, mean difference = 0.0029 (i.e., 0.29% cloudiness), and a difference standard deviation = 0.054. Finally, the ICI instrument generally detected more thin clouds than reported by other ARM cloud products available as of late 2015.
Ground-based cloud classification by learning stable local binary patterns
NASA Astrophysics Data System (ADS)
Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua
2018-07-01
Feature selection and extraction is the first step in implementing pattern classification. The same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method by learning stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable to that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
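To make the LBP machinery concrete, here is a small Python sketch that computes 8-neighbour rotation-invariant LBP codes and their occurrence-frequency histogram for a grayscale cloud image; the paper's selection of "stable" patterns by averaged frequency ranks over a training set is only hinted at in the comments, and all function names are illustrative.

```python
import numpy as np

def lbp8_codes(img):
    """8-neighbour LBP code for each interior pixel of a grayscale image."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def rotation_invariant_map():
    """Map each 8-bit code to the minimum value over its bit rotations."""
    def min_rot(v):
        return min(((v >> r) | (v << (8 - r))) & 0xFF for r in range(8))
    return np.array([min_rot(v) for v in range(256)], dtype=np.uint8)

def lbp_histogram(img):
    """Occurrence-frequency histogram of rotation-invariant LBP patterns.
    Ranking these frequencies over a set of training images is what would
    identify the 'stable' patterns used as the final feature set."""
    ri = rotation_invariant_map()[lbp8_codes(img)]
    hist = np.bincount(ri.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```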
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2017-12-01
Cloud detection nowadays is primarily achieved with various sensors aboard satellites, including MODIS Aqua, MODIS Terra, and AIRS, whose products include nighttime cloud fraction. Ground-based instruments are, however, only secondary to these satellites when it comes to cloud detection. Nonetheless, ground-based instruments (e.g., lidars, ceilometers, and sky cameras) offer significant datasets on a particular region's cloud cover. For nighttime cloud detection, satellite-based instruments are more reliably and prominently used than ground-based ones; therefore, if a ground-based instrument is operated at night, it ought to produce reliable scientific datasets. The objective of this study is to compare the results of a nighttime ground-based instrument (sky camera) with those of MODIS Aqua and MODIS Terra. A Canon PowerShot A2300 is placed on top of Manila Observatory (14.64N, 121.07E) and is configured to take images of the night sky at 5-min intervals. To detect pixels with clouds, the pictures are converted to grayscale format. A thresholding technique is used to separate pixels with cloud from pixels without: if the pixel value is greater than 17, it is considered cloud; otherwise, non-cloud (Gacal et al., 2016). This algorithm is applied to the data gathered from Oct 2015 to Oct 2016. A scatter plot between the satellite cloud fraction over the area bounded by 14.2877N, 120.9869E and 14.7711N, 121.4539E and the ground-measured cloud cover is graphed to find the monthly correlation. During the wet season (June-November), the satellite nighttime cloud fraction versus ground-measured cloud cover produces acceptable R2 values (Aqua = 0.74, Terra = 0.71, AIRS = 0.76). However, during the dry season, poor R2 values are obtained (AIRS = 0.39, Aqua and Terra = 0.01). The high correlation during the wet season can be attributed to a high probability that the camera and satellite see the same clouds. During the dry season, however, the satellite sees high-altitude clouds that the camera cannot detect from the ground, since it relies on city lights reflected from low-level clouds. With this acknowledged disparity, the ground-based camera has the advantage of detecting haze and thin clouds near the ground that are hardly or not detected by the satellites.
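A minimal Python sketch of the thresholding step described above is given below; reading the image with Pillow and the function name are assumptions, but the grayscale conversion and the value-17 threshold follow the abstract.

```python
import numpy as np
from PIL import Image

CLOUD_THRESHOLD = 17  # grayscale threshold reported in the abstract

def nighttime_cloud_cover(image_path):
    """Fraction of pixels classified as cloud in a night-sky image.

    The colour image is converted to grayscale and each pixel brighter
    than the threshold is counted as cloud (city light reflected from
    low clouds), following the thresholding scheme described above.
    """
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.uint8)
    cloud_mask = gray > CLOUD_THRESHOLD
    return cloud_mask.mean()

# Averaging the 5-minute frames over one night would give a nightly
# cloud-cover value comparable with the satellite cloud fraction.
```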
Morning Clouds Atop Martian Mountain
2015-06-19
Seen shortly after local Martian sunrise, clouds gather in the summit pit, or caldera, of Pavonis Mons, a giant volcano on Mars, in this image from the Thermal Emission Imaging System (THEMIS) on NASA's Mars Odyssey orbiter. The clouds are mostly made of ice crystals. They appear blue in the image because the cloud particles scatter blue light more strongly than other colors. Pavonis Mons stands about nine miles (14 kilometers) high, and the caldera spans about 29 miles (47 kilometers) wide. This image was made by THEMIS through three of its visual-light filters plus a near-infrared filter, and it is approximately true in color. THEMIS and other instruments on Mars Odyssey have been studying Mars from orbit since 2001. http://photojournal.jpl.nasa.gov/catalog/PIA19675
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed of depth images and the quality of the resulting point cloud.
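For reference, the baseline bilateral filter that the proposed LBF improves upon can be sketched in a few lines of Python; the window radius and the spatial/range sigmas below are illustrative, and the faster local variant of the paper is not reproduced.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Baseline bilateral filter for a depth image (float array, e.g. mm).

    Each output pixel is a weighted mean of its neighbours, with weights
    that fall off with spatial distance (sigma_s) and with depth
    difference (sigma_r), so edges between surfaces are preserved.
    """
    h, w = depth.shape
    pad = np.pad(depth, radius, mode="edge")
    out = np.zeros_like(depth)
    # Precompute the spatial Gaussian weights of the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - depth[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```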
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means for typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes such as inversion and deformation during cloud motion are largely not considered. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, we regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and the output are spatiotemporal sequences. Compared with the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data and it performs well.
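A convolutional GRU cell of the kind referred to above can be sketched in PyTorch as follows; the channel counts and kernel size are illustrative, and this is a generic ConvGRU rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """GRU cell whose input-to-state and state-to-state transforms are
    convolutions, so hidden states keep their spatial layout (H x W)."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        # Update and reset gates computed together from [input, hidden].
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel, padding=pad)
        # Candidate hidden state.
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = torch.zeros(x.size(0), self.hid_ch, x.size(2), x.size(3),
                            device=x.device)
        z, r = torch.chunk(torch.sigmoid(self.gates(torch.cat([x, h], 1))), 2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

# Unrolling the cell over a sequence of cloud images, and feeding its
# outputs to a decoder, gives an end-to-end spatiotemporal forecaster.
```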
The benefit of limb cloud imaging for tropospheric infrared limb sounding
NASA Astrophysics Data System (ADS)
Adams, S.; Spang, R.; Preusse, P.; Heinemann, G.
2009-03-01
Advances in detector technology enable a new generation of infrared limb sounders to measure 2-D images of the atmosphere. A proposed limb cloud imager (LCI) mode will measure clouds with very high spatial resolution. For the inference of temperature and trace gas distributions, detector pixels of the LCI have to be combined into super-pixels which provide the required signal-to-noise ratio and information content for the retrievals. This study examines the extent to which tropospheric coverage can be improved in comparison to limb sounding using a fixed field of view with the size of the super-pixels, as in conventional limb sounders. The study is based on cloud topographies derived from (a) IR brightness temperatures (BT) of geostationary weather satellites in conjunction with ECMWF temperature profiles and (b) ice and liquid water content data of the Consortium for Small-scale Modeling-Europe (COSMO-EU) of the German Weather Service. Limb cloud images are simulated by matching the cloud topography with the limb sounding line of sight (LOS). The analysis of the BT data shows that the reduction of the spatial sampling along the track has hardly any effect on the gain in information. The comparison between BT and COSMO-EU data identifies the strength of both data sets, which are the representation of the horizontal cloud extent for the BT data and the reproduction of the cloud amount for the COSMO-EU data. The results of the analysis of both data sets show the great advantage of the cloud imager. However, because both cloud data sets do not present the complete fine structure of the real cloud fields in the atmosphere it is assumed that the results tend to underestimate the increase in information. In conclusion, real measurements by such an instrument may result in an even higher benefit for tropospheric limb retrievals.
The benefit of limb cloud imaging for infrared limb sounding of tropospheric trace gases
NASA Astrophysics Data System (ADS)
Adams, S.; Spang, R.; Preusse, P.; Heinemann, G.
2009-06-01
Advances in detector technology enable a new generation of infrared limb sounders to measure 2-D images of the atmosphere. A proposed limb cloud imager (LCI) mode will detect clouds with a spatial resolution unprecedented for limb sounding. For the inference of temperature and trace gas distributions, detector pixels of the LCI have to be combined into super-pixels which provide the required signal-to-noise and information content for the retrievals. This study examines the extent to which tropospheric coverage can be improved in comparison to limb sounding using a fixed field of view with the size of the super-pixels, as in conventional limb sounders. The study is based on cloud topographies derived from (a) IR brightness temperatures (BT) of geostationary weather satellites in conjunction with ECMWF temperature profiles and (b) ice and liquid water content data of the Consortium for Small-scale Modeling-Europe (COSMO-EU) of the German Weather Service. Limb cloud images are simulated by matching the cloud topography with the limb sounding line of sight (LOS). The analysis of the BT data shows that the reduction of the spatial sampling along the track has hardly any effect on the gain in information. The comparison between BT and COSMO-EU data identifies the strength of both data sets, which are the representation of the horizontal cloud extent for the BT data and the reproduction of the cloud amount for the COSMO-EU data. The results of the analysis of both data sets show the great advantage of the cloud imager. However, because both cloud data sets do not present the complete fine structure of the real cloud fields in the atmosphere it is assumed that the results tend to underestimate the increase in information. In conclusion, real measurements by such an instrument may result in an even higher benefit for tropospheric limb retrievals.
Clouds Sailing Overhead on Mars, Unenhanced
2017-08-09
Wispy clouds float across the Martian sky in this accelerated sequence of images from NASA's Curiosity Mars rover. The rover's Navigation Camera (Navcam) took these eight images over a span of four minutes early in the morning of the mission's 1,758th Martian day, or sol (July 17, 2017), aiming nearly straight overhead. This sequence uses raw images, which include a bright ring around the center of the frame that is an artifact of sunlight striking the camera lens even though the Sun is not in the shot. A processed version removing that artifact and emphasizing changes between images is also available. The clouds resemble Earth's cirrus clouds, which are ice crystals at high altitudes. These Martian clouds are likely composed of crystals of water ice that condense onto dust grains in the cold Martian atmosphere. Cirrus wisps appear as ice crystals fall and evaporate in patterns known as "fall streaks" or "mare's tails." Such patterns have been seen before at high latitudes on Mars, for instance by the Phoenix Mars Lander in 2008, and seasonally nearer the equator, for instance by the Opportunity rover. However, Curiosity has not previously observed such clouds so clearly visible from the rover's study area about five degrees south of the equator. The Hubble Space Telescope and spacecraft orbiting Mars have observed a band of clouds to appear near the Martian equator around the time of the Martian year when the planet is farthest from the Sun. With a more elliptical orbit than Earth's, Mars experiences more annual variation than Earth in its distance from the Sun. The most distant point in an orbit around the Sun is called the aphelion. The near-equatorial Martian cloud pattern observed at that time of year is called the "aphelion cloud belt." These new images from Curiosity were taken about two months before aphelion, but the morning clouds observed may be an early stage of the aphelion cloud belt. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21842
Cloud masking and removal in remote sensing image time series
NASA Astrophysics Data System (ADS)
Gómez-Chova, Luis; Amorós-López, Julia; Mateo-García, Gonzalo; Muñoz-Marí, Jordi; Camps-Valls, Gustau
2017-01-01
Automatic cloud masking of Earth observation images is one of the first required steps in optical remote sensing data processing since the operational use and product generation from satellite image time series might be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions and the scarcity of labeled data force us to cast cloud screening as an unsupervised change detection problem in the temporal domain. We introduce a cloud screening method based on detecting abrupt changes along the time dimension. The main assumption is that image time series follow smooth variations over land (background) and abrupt changes will be mainly due to the presence of clouds. The method estimates the background surface changes using the information in the time series. In particular, we propose linear and nonlinear least squares regression algorithms that minimize both the prediction and the estimation error simultaneously. Then, significant differences in the image of interest with respect to the estimated background are identified as clouds. The use of kernel methods allows the generalization of the algorithm to account for higher-order (nonlinear) feature relations. After the proposed cloud masking and cloud removal, cloud-free time series at high spatial resolution can be used to obtain a better monitoring of land cover dynamics and to generate more elaborated products. The method is tested in a dataset with 5-day revisit time series from SPOT-4 at high resolution and with Landsat-8 time series. Experimental results show that the proposed method yields more accurate cloud masks when confronted with state-of-the-art approaches typically used in operational settings. In addition, the algorithm has been implemented in the Google Earth Engine platform, which allows us to access the full Landsat-8 catalog and work in a parallel distributed platform to extend its applicability to a global planetary scale.
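The background-prediction-plus-residual idea can be illustrated with a simple per-pixel linear least-squares sketch in Python; the robust threshold and the purely linear, global fit are simplifications of the paper's joint linear/kernel formulation.

```python
import numpy as np

def cloud_mask_from_series(series, target, thresh=3.0):
    """Flag clouds in `target` (H x W) as abrupt departures from a
    background predicted linearly from a stack of earlier images.

    series : array (T, H, W) of previous, mostly cloud-free acquisitions
    target : array (H, W), the image of interest
    thresh : number of robust standard deviations defining "abrupt"
    """
    T, H, W = series.shape
    X = series.reshape(T, -1).T                    # one row per pixel, T predictors
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # intercept term
    y = target.ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear background model
    residual = y - X @ coef
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    # Clouds brighten the scene, so only large positive residuals are flagged.
    return (residual > thresh * sigma).reshape(H, W)
```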
Progress towards NASA MODIS and Suomi NPP Cloud Property Data Record Continuity
NASA Astrophysics Data System (ADS)
Platnick, S.; Meyer, K.; Holz, R.; Ackerman, S. A.; Heidinger, A.; Wind, G.; Platnick, S. E.; Wang, C.; Marchant, B.; Frey, R.
2017-12-01
The Suomi NPP VIIRS imager provides an opportunity to extend the 17+ year EOS MODIS climate data record into the next generation operational era. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used for high cloud detection and cloud-top property retrievals. In addition, there is a significant mismatch in the spectral location of the 2.2 μm shortwave-infrared channels used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, a merged MODIS-VIIRS cloud record to serve the science community in the coming decades requires different algorithm approaches than those used for MODIS alone. This new approach includes two parallel efforts: (1) Imager-only algorithms with only spectral channels common to VIIRS and MODIS (i.e., eliminate use of MODIS CO2 and NIR/IR water vapor channels). Since the algorithms are run with similar spectral observations, they provide a basis for establishing a continuous cloud data record across the two imagers. (2) Merged imager and sounder measurements (i.e., MODIS-AIRS, VIIRS-CrIS) in lieu of higher-spatial resolution MODIS absorption channels absent on VIIRS. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud mask (MOD35), optical and microphysical properties product (MOD06), and the NOAA AWG Cloud Height Algorithm (ACHA). We discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems. In addition, we summarize efforts to reconcile apparent radiometric biases between analogous imager channels, a critical consideration for obtaining inter-sensor climate data record continuity.
NASA Technical Reports Server (NTRS)
2007-01-01
Ammonia Ice Clouds on Jupiter: this movie, put together from false-color images taken by the New Horizons Ralph instrument as the spacecraft flew past Jupiter in early 2007, shows ammonia clouds (appearing as bright blue areas) as they form and disperse over five successive Jupiter 'days.' Scientists noted how the larger cloud travels along with a small, local deep hole.
Benefits of cloud computing for PACS and archiving.
Koch, Patrick
2012-01-01
The goal of cloud-based services is to provide easy, scalable access to computing resources and IT services. The healthcare industry requires a private cloud that adheres to government mandates designed to ensure privacy and security of patient data while enabling access by authorized users. Cloud-based computing in the imaging market has evolved from a service that provided cost effective disaster recovery for archived data to fully featured PACS and vendor neutral archiving services that can address the needs of healthcare providers of all sizes. Healthcare providers worldwide are now using the cloud to distribute images to remote radiologists while supporting advanced reading tools, deliver radiology reports and imaging studies to referring physicians, and provide redundant data storage. Vendor managed cloud services eliminate large capital investments in equipment and maintenance, as well as staffing for the data center--creating a reduction in total cost of ownership for the healthcare provider.
Photographer : JPL Range : 3.4 million km This pair of images shows two of the long-lived white oval
NASA Technical Reports Server (NTRS)
1979-01-01
Photographer : JPL Range : 3.4 million km This pair of images shows two of the long-lived white oval clouds which have resided in the Jovian southern hemisphere for nearly 40 years. The upper picture shows the cloud that is at a longitude west of the Great Red Spot, and the lower frame, the cloud at a longitude east of this feature. The third oval is currently just south of the Great Red Spot. The clouds show very similar internal structures. To the east of each of them, recirculation currents are clearly seen. In the lower frame, a similar structure is seen to the west of the cloud. Although a recirculation current is associated with the upper western region of the cloud, it is further away from this feature and not seen in the image. This photo was taken by Voyager 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudkevich, Aleksandr; Goldis, Evgeniy
This research, conducted by the Newton Energy Group, LLC (NEG), is dedicated to the development of pCloud: a Cloud-based Power Market Simulation Environment. pCloud offers power industry stakeholders the capability to model electricity markets and is organized around the Software as a Service (SaaS) concept, a software application delivery model in which software is centrally hosted and provided to many users via the internet. During Phase I of this project, NEG developed a prototype design for pCloud as a SaaS-based commercial service offering and a system architecture supporting that design, ensured the feasibility of key architectural elements, formed technological partnerships and negotiated commercial agreements with partners, conducted market research and other related activities, and secured funding for continued development of pCloud between the end of Phase I and the beginning of Phase II, if awarded. Based on the results of Phase I activities, NEG has established that the development of a cloud-based power market simulation environment on the Windows Azure platform is technologically feasible and can be accomplished within the budget and timeframe available through the Phase II SBIR award with additional external funding. NEG believes that pCloud has the potential to become a game-changing technology for the modeling and analysis of electricity markets. This potential is due to the following critical advantages of pCloud over its competition: standardized access to advanced and proven power market simulators offered by third parties; automated parallelization of simulations and dynamic provisioning of computing resources on the cloud, a combination of automation and scalability that dramatically reduces turn-around time while offering the capability to increase the number of analyzed scenarios by a factor of 10, 100 or even 1000; access to ready-to-use data and to cloud-based resources, leading to a reduction in software, hardware, and IT costs; a competitive pricing structure, which will make high-volume usage of simulation services affordable; availability and affordability of high-quality power simulators, which presently only large corporate clients can afford, leveling the playing field in developing regional energy policies, determining prudent cost recovery mechanisms and assuring just and reasonable rates to consumers; and the ability for users that presently do not have the resources to internally maintain modeling capabilities to run simulations, which will invite more players into the industry, ultimately leading to more transparent and liquid power markets.
The application of time series models to cloud field morphology analysis
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering, and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
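A toy Python sketch of autoregressive texture synthesis, loosely in the spirit of the ARMA modeling described above, is shown below; it uses a separable first-order AR process rather than the seasonal ARMA formulation of the paper, and all coefficients are illustrative.

```python
import numpy as np

def synthesize_ar_texture(h, w, phi_row=0.9, phi_col=0.9, sigma=1.0, seed=0):
    """Generate a 2-D texture surrogate from a separable AR(1) process.

    Each pixel depends on its left and upper neighbours through phi_col and
    phi_row plus white noise; larger coefficients give larger, smoother
    "cloud" structures, so a few parameters summarize the field.
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(h, w))
    img = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            left = img[i, j - 1] if j > 0 else 0.0
            up = img[i - 1, j] if i > 0 else 0.0
            diag = img[i - 1, j - 1] if (i > 0 and j > 0) else 0.0
            img[i, j] = phi_col * left + phi_row * up - phi_row * phi_col * diag + eps[i, j]
    return img
```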
Clouds off the Aleutian Islands
2017-12-08
March 23, 2010 - Clouds off the Aleutian Islands Interesting cloud patterns were visible over the Aleutian Islands in this image, captured by the MODIS on the Aqua satellite on March 14, 2010. Turbulence, caused by the wind passing over the highest points of the islands, is producing the pronounced eddies that swirl the clouds into a pattern called a vortex "street". In this image, the clouds have also aligned in parallel rows or streets. Cloud streets form when low-level winds move between and over obstacles causing the clouds to line up into rows (much like streets) that match the direction of the winds. At the point where the clouds first form streets, they're very narrow and well-defined. But as they age, they lose their definition, and begin to spread out and rejoin each other into a larger cloud mass. The Aleutians are a chain of islands that extend from Alaska toward the Kamchatka Peninsula in Russia. For more information related to this image go to: modis.gsfc.nasa.gov/gallery/individual.php?db_date=2010-0... For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html
ERIC Educational Resources Information Center
Ramaswami, Rama; Raths, David; Schaffhauser, Dian; Skelly, Jennifer
2011-01-01
For many IT shops, the cloud offers an opportunity not only to improve operations but also to align themselves more closely with their schools' strategic goals. The cloud is not a plug-and-play proposition, however--it is a complex, evolving landscape that demands one's full attention. Security, privacy, contracts, and contingency planning are all…
A computer vision approach for solar radiation nowcasting using MSG images
NASA Astrophysics Data System (ADS)
Álvarez, L.; Castaño Moraga, C. A.; Martín, J.
2010-09-01
Cloud structures and haze are the two main atmospheric phenomena that reduce the performance of solar power plants, since they absorb solar energy reaching the terrestrial surface. Thus, accurate forecasting of solar radiation is a challenging research area that involves both precise localization of cloud structures and haze and estimation of the attenuation they introduce. Our work presents a novel approach for nowcasting services based on image processing techniques applied to MSG satellite images provided by the EUMETSAT Rapid Scan Service (RSS). These data are an interesting source of information for our purposes, since every 5 minutes we obtain up-to-date information on the atmospheric state in near real time. However, a further step is needed in order to forecast solar radiation. To that end, we synthesize forecast MSG images from past images, applying computer vision techniques adapted to fluid flows in order to evolve the atmospheric state. First, we classify cloud structures on two different layers, corresponding to top and bottom clouds, which include haze. This two-level classification responds to the dominant climate conditions found in our region of interest, the Canary Islands archipelago, regulated by the Gulf Stream and Trade Winds. The vertical structure of the Trade Winds consists of two layers: the bottom one, which is fresh and humid, and the top one, which is warm and dry. Between these two layers a thermal inversion appears that does not allow bottom clouds to rise and naturally divides clouds into these two layers. Top clouds can be directly obtained from satellite images by means of a segmentation algorithm on histogram heights. However, bottom clouds are usually overlapped by the former, so an inpainting algorithm is used to recover overlapped areas of bottom clouds. For each layer, cloud motion is estimated through a correlation-based optic flow algorithm that provides a vector field describing the displacement in each layer between two consecutive images in a sequence. Since the RSS from EUMETSAT provides images every 5 minutes (Δt), the cloud motion vector field between images at time t0 and (t0 - Δt) is quite similar to that between (t0 - Δt) and (t0 - 2Δt). Under this assumption, we infer the motion vector field for the next image in order to build a synthetic version of the image at time (t0 + Δt). The computation of this future motion vector field takes into account terrain orography in order to produce more realistic forecasts. In this sense, we are currently working on the integration of information from NWP outputs in order to introduce other atmospheric phenomena. Applying this algorithm several times, we are able to produce short-term forecasts up to 6 hours ahead with encouraging performance. To validate our results, we use both comparison of synthetically generated images with the corresponding observed images at a given time and direct solar radiation measurements from the set of meteorological stations located at several points of the Canarian archipelago.
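The correlation-based motion estimation and extrapolation steps can be sketched in Python as simple block matching followed by persistence-of-motion advection; layer separation, inpainting, and orography, which the described system includes, are omitted here, and block and search sizes are illustrative.

```python
import numpy as np

def block_motion(prev, curr, block=16, search=8):
    """Displacement (dy, dx) per block that best maps `prev` onto `curr`,
    found by minimizing the sum of squared differences in a search window.
    Inputs are expected as float arrays of identical shape."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for bi in range(h // block):
        for bj in range(w // block):
            y, x = bi * block, bj * block
            ref = prev[y:y + block, x:x + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        err = np.sum((curr[yy:yy + block, xx:xx + block] - ref) ** 2)
                        if err < best:
                            best, best_d = err, (dy, dx)
            flow[bi, bj] = best_d
    return flow

def extrapolate(curr, flow, block=16):
    """Advect each block of `curr` by its estimated displacement to build a
    synthetic frame at t0 + dt (persistence of motion)."""
    forecast = curr.copy()
    h, w = curr.shape
    for bi in range(flow.shape[0]):
        for bj in range(flow.shape[1]):
            dy, dx = flow[bi, bj]
            y, x = bi * block, bj * block
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                forecast[yy:yy + block, xx:xx + block] = curr[y:y + block, x:x + block]
    return forecast
```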
A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.
2012-04-01
Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We substituted remote access to high-performance grid-computing facilities for the demanding user-side hardware and software requirements. As a result, data processing can be done quasi in real time, ubiquitously controlled via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking can benefit largely from the hardware parallelism provided by the cloud deployment. The resulting post-stack section, coherence, and NMO-velocity panels are used to generate a smooth migration-velocity model. Residual static corrections are calculated as a by-product of the stack and can be applied iteratively. As a final step, a time-migrated subsurface image is obtained by a parallelized Kirchhoff time migration scheme. Processing can be done step-by-step or using a graphical workflow editor that can launch a series of pipelined tasks. The status of the submitted jobs is monitored by a dedicated service. All results are stored in project directories, where they can be downloaded or viewed directly in the browser. Currently, the portal has access to three research clusters with a total of 70 nodes with 4 cores each. They are shared with four other cloud-computing applications bundled within the GRIDA3 project. To demonstrate the functionality of our "seismic cloud lab", we will present results obtained for three different types of data, all taken from hydrogeophysical studies: (1) a seismic reflection data set, made of compressional waves from explosive sources, recorded in Muravera, Sardinia; (2) a shear-wave data set, also from Sardinia; (3) a multi-offset Ground-Penetrating-Radar data set from Larreule, France. The presented work was funded by the government of the Autonomous Region of Sardinia and by the Italian Ministry of Research and Education.
Venus in Violet and Near Infrared Light
1996-02-01
These images of the Venus clouds were taken by NASA's Galileo Solid State Imaging System on February 13, 1990, at a range of about 1 million miles. The smallest detail visible is about 20 miles. They show the state of the clouds near the top of the Venus cloud deck. http://photojournal.jpl.nasa.gov/catalog/PIA00071
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venner, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; O'Brien, Raymond
2015-01-01
Cloud computing capabilities have rapidly expanded within the private sector, offering new opportunities for meteorological applications. Collaborations between NASA Marshall, NASA Ames, and contractor partners led to evaluations of private (NASA) and public (Amazon) resources for executing short-term NWP systems. Activities helped the Marshall team further understand cloud capabilities, and benchmark use of cloud resources for NWP and other applications
Nowcasting Cloud Fields for U.S. Air Force Special Operations
2017-03-01
The application of Bayes' Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods for relating cloud reflectance to the probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past cases. Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning.
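The Bayes'-Rule step referred to in this report can be sketched as follows; the likelihood functions and the numbers in the example are placeholders standing in for statistics learned from past satellite scenes.

```python
def cloud_posterior(reflectance, prior_cloud,
                    likelihood_cloud, likelihood_clear):
    """Posterior probability of cloud given an observed reflectance value.

    likelihood_cloud / likelihood_clear are callables returning
    P(reflectance | cloud) and P(reflectance | clear), e.g. histograms
    estimated from a training set of past scenes.
    """
    p_r_cloud = likelihood_cloud(reflectance)
    p_r_clear = likelihood_clear(reflectance)
    evidence = p_r_cloud * prior_cloud + p_r_clear * (1.0 - prior_cloud)
    return p_r_cloud * prior_cloud / evidence

# Example with made-up numbers: a bright pixel that is 4x more likely under
# the cloud class than the clear class, with a 30% climatological prior.
p = cloud_posterior(0.6, 0.3,
                    likelihood_cloud=lambda r: 0.8,
                    likelihood_clear=lambda r: 0.2)   # -> about 0.63
```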
SPARCCS - Smartphone-Assisted Readiness, Command and Control System
2012-06-01
We use smartphones in conjunction with cloud computing to extend the benefits of collaborative tools and to address storage and database needs. By doing this, SPARCCS (Smartphone-Assisted Readiness, Command, and Control System) takes advantage of the capabilities cloud computing has to offer, especially dispersed data storage.
Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo
2014-01-01
Neuroscience is an increasingly multidisciplinary and highly cooperative field where neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing number of computing resources and skills that need to be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even if a strong collaboration among several groups is going on. In this paper we present a new cloud service model, Brain Imaging Application as a Service (BiAaaS), and one of its implementations, the Advanced Brain Imaging Lab (ABrIL), in the form of a ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive, neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as a contribution to a high-impact paper published in the January issue of the Neuroimage journal. The ABrIL system has shown its applicability in several neuroscience projects at a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.
Mars topographic clouds: MAVEN/IUVS observations and LMD MGCM predictions
NASA Astrophysics Data System (ADS)
Schneider, Nicholas M.; Connour, Kyle; Forget, Francois; Deighan, Justin; Jain, Sonal; Vals, Margaux; Wolff, Michael J.; Chaffin, Michael S.; Crismani, Matteo; Stewart, A. Ian F.; McClintock, William E.; Holsclaw, Greg; Lefevre, Franck; Montmessin, Franck; Stiepen, Arnaud; Stevens, Michael H.; Evans, J. Scott; Yelle, Roger; Lo, Daniel; Clarke, John T.; Jakosky, Bruce
2017-10-01
The Imaging Ultraviolet Spectrograph (IUVS) instrument on the Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft takes mid-UV spectral images of the Martian atmosphere. From these apoapse disk images, information about clouds and aerosols can be retrieved; these images comprise the only MAVEN observations of topographic clouds and cloud morphologies. Measuring the local-time variability of large-scale recurring cloud features is made possible by MAVEN's ~4.5-hour elliptical orbit, something not possible with sun-synchronous orbits. We have run the LMD MGCM (Mars global circulation model) at 1° x 1° resolution to simulate water ice cloud formation with inputs consistent with the observing parameters and Mars seasons. Topographic clouds are observed to form daily during the late mornings of northern hemisphere spring, and this phenomenon recurs until late summer (Ls = 160°), after which topographic clouds wane in thickness. By northern fall, most topographic clouds cease to form except over Arsia Mons and Pavonis Mons, where clouds can still be observed. Our data show moderate cloud formation over these regions as late as Ls = 220°, something difficult for the model to replicate. Previous studies have shown that models have trouble simulating equatorial cloud thickness in combination with a realistic amount of water vapor and not-too-thick polar water ice clouds, implying that aspects of the water cycle are not fully understood. We present data/model comparisons as well as further refinements of parameter inputs based on IUVS observations.
Detection of long duration cloud contamination in hyper-temporal NDVI imagery
NASA Astrophysics Data System (ADS)
Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.
2012-04-01
NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud cover can significantly reduce its precision in areas where persistent cloud prevails. Therefore, quantifying errors related to cloud contamination is essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb '03-Dec '09) was used after the necessary pre-processing using quality flags and an upper envelope filter (ASAVOGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. After classification, the 97-cluster image was selected as the best classification with the help of divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration clouds affect the normal temporal progression of NDVI and cause anomalies. Out of the 97 clusters, 32 were found to be cloud contaminated. Cloud contamination was more prominent in areas with high rainfall. This study can help stop the propagation of errors caused by long-duration cloud contamination in regional land cover mapping and monitoring.
NASA Technical Reports Server (NTRS)
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and for inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to select an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterizations to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
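The radiance-to-flux inversion via an ADM can be summarized by the relation F = πL/R, where R is the anisotropic factor for the identified scene and sun/view geometry; the short sketch below illustrates this step with made-up numbers and is not the operational CERES code.

```python
import math

def radiance_to_toa_flux(radiance, anisotropic_factor):
    """Convert a scanner radiance L (W m-2 sr-1) to a TOA flux estimate using
    the angular distribution model's anisotropic factor R for the identified
    scene type and geometry:  F = pi * L / R."""
    return math.pi * radiance / anisotropic_factor

# Example: a 75 W m-2 sr-1 radiance over a scene whose ADM gives R = 1.05
flux = radiance_to_toa_flux(75.0, 1.05)   # ~224 W m-2
```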
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as high efficiency video coding.
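As a simplified illustration of the geometric compensation step, the following Python/OpenCV sketch aligns a reference image to the current image with a single global homography estimated from ORB feature matches; the paper instead uses multiple planar regions and also compensates photometric differences.

```python
import cv2
import numpy as np

def compensate_reference(reference, current):
    """Warp `reference` towards `current` with a homography estimated from
    matched ORB features, producing a geometrically aligned prediction that
    a video codec could then use for differential coding."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(current, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = current.shape[:2]
    return cv2.warpPerspective(reference, H, (w, h))
```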
Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter
NASA Technical Reports Server (NTRS)
Dyudine, U. A.; Ingersoll, Andrew P.
2002-01-01
We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and are able to resolve the shapes of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our 3-D Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a 3-D opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns are produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40-km horizontal flashes which are seen on Earth.
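The Henyey-Greenstein scattering step of such a Monte Carlo model can be sketched as follows in Python; absorption is folded into a photon weight as described above, while the full 3-D geometry and emission-angle binning are omitted, and the parameter values are illustrative.

```python
import numpy as np

def sample_hg_cos_theta(g, rng):
    """Draw the cosine of the scattering angle from the Henyey-Greenstein
    phase function with asymmetry parameter g (g = 0 is isotropic)."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def scatter_photon(weight, g, single_scatter_albedo, rng):
    """One scattering event: partial absorption followed by a new direction
    (only the polar angle is shown; azimuth would be uniform in [0, 2*pi))."""
    weight *= single_scatter_albedo          # light partially absorbed
    cos_theta = sample_hg_cos_theta(g, rng)
    return weight, cos_theta

rng = np.random.default_rng(42)
w, mu = scatter_photon(1.0, 0.85, 0.99, rng)
```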
Jovian Lightning and Moonlit Clouds
NASA Technical Reports Server (NTRS)
1997-01-01
Jovian lightning and moonlit clouds. These two images, taken 75 minutes apart, show lightning storms on the night side of Jupiter along with clouds dimly lit by moonlight from Io, Jupiter's closest moon. The images were taken in visible light and are displayed in shades of red. The images used an exposure time of about one minute, and were taken when the spacecraft was on the opposite side of Jupiter from the Earth and Sun. Bright storms are present at two latitudes in the left image, and at three latitudes in the right image. Each storm was made visible by multiple lightning strikes during the exposure. Other Galileo images were deliberately scanned from east to west in order to separate individual flashes. The images show that Jovian and terrestrial lightning storms have similar flash rates, but that Jovian lightning strikes are a few orders of magnitude brighter in visible light.
The moonlight from Io allows the lightning storms to be correlated with visible cloud features. The latitude bands where the storms are seen seem to coincide with the 'disturbed regions' in daylight images, where short-lived chaotic motions push clouds to high altitudes, much like thunderstorms on Earth. The storms in these images are roughly one to two thousand kilometers across, while individual flashes appear hundreds of kilometers across. The lightning probably originates from the deep water cloud layer and illuminates a large region of the visible ammonia cloud layer from 100 kilometers below it. There are several small light and dark patches that are artifacts of data compression. North is at the top of the picture. The images span approximately 50 degrees in latitude and longitude. The lower edges of the images are aligned with the equator. The images were taken on October 5th and 6th, 1997 at a range of 6.6 million kilometers by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
The identification of cloud types in LANDSAT MSS images. [Great Britain
NASA Technical Reports Server (NTRS)
Barrett, E. C. (Principal Investigator); Grant, C. K.
1976-01-01
The author has identified the following significant results. Five general families of clouds were identified: cumulonimbiform, cumuliform, stratiform, stratocumuliform, and cirriform. Four members of this five-fold primary division of clouds were further divided into a number of subgroups. The MSS observed and recorded earth radiation in four different wavebands. Two of these bands (4 and 5) image in the visible portion of the electromagnetic spectrum, while the others (6 and 7) image the short wave portion, or just into the infrared. The main differences between the appearances of clouds in the four wavebands are related to the background brightness of land and sea surfaces.
Cuenca-Alba, Jesús; Del Cano, Laura; Gómez Blanco, Josué; de la Rosa Trevín, José Miguel; Conesa Mingo, Pablo; Marabini, Roberto; S Sorzano, Carlos Oscar; Carazo, Jose María
2017-10-01
New instrumentation for cryo-electron microscopy (cryoEM) has significantly increased the data collection rate as well as data quality, creating bottlenecks at the image processing level. The current image processing model of moving the acquired images from the data source (electron microscope) to desktops or local clusters for processing is encountering many practical limitations. However, computing may also take place in distributed and decentralized environments. In this way, the cloud is a new form of accessing computing and storage resources on demand. Here, we evaluate how this new computational paradigm can be effectively used by extending our current integrative framework for image processing, creating ScipionCloud. This new development has resulted in a full installation of Scipion in both public and private clouds, accessible as public "images" with all the required cryoEM software preinstalled, requiring just a Web browser to access all graphical user interfaces. We have profiled the performance of different configurations on Amazon Web Services and the European Federated Cloud, always on architectures incorporating GPUs, and compared them with a local facility. We have also analyzed the economic convenience of different scenarios, so cryoEM scientists have a clearer picture of the setup that is best suited for their needs and budgets. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
1989-08-21
This picture of Neptune was produced from images taken through the ultraviolet, violet and green filters of the Voyager 2 wide-angle camera. This 'false' color image has been made to show clearly details of the cloud structure and to paint clouds located at different altitudes with different colors. Dark, deep-lying clouds tend to be masked at ultraviolet wavelengths, since overlying air molecules are particularly effective at scattering sunlight there, which brightens the sky above them. Such areas appear dark blue in this photo. The Great Dark Spot (GDS) and the high southern latitudes have a deep bluish cast in this image, indicating they are regions where visible light (but not ultraviolet light) may penetrate to a deeper layer of dark cloud or haze in Neptune's atmosphere. Conversely, the pinkish clouds may be positioned at high altitudes.
2010-09-14
Clouds are common near the north polar caps throughout the spring and summer. The clouds typically cause a haze over the extensive dune fields. This image from NASA Mars Odyssey shows the edge of the cloud front.
NASA Astrophysics Data System (ADS)
Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2017-10-01
Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely a transformation of the real frame into the frame that would be obtained if all the axes were perfectly aligned. For such adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the star images in the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideal setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
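For the idealized case of perfectly aligned, zenith-pointing cameras, the height and speed estimates reduce to simple proportionalities, sketched below in Python; the distortion models and morphological matching discussed above are not included, and the numbers in the example are illustrative.

```python
def cloud_height_from_shift(shift_px, baseline_m, focal_px):
    """Cloud height for aligned, zenith-pointing cameras.

    shift_px   : horizontal shift of the cloud fragment between frames (pixels)
    baseline_m : distance between the two cameras (metres)
    focal_px   : focal length expressed in pixels

    With parallel optical axes the shift d satisfies d = f * B / H,
    so H = f * B / d.
    """
    return focal_px * baseline_m / shift_px

def cloud_speed(shift_frame_px, height_m, focal_px, dt_s):
    """Horizontal cloud speed from the frame-to-frame displacement of the
    same fragment in one camera, scaled from pixels to metres at height H."""
    metres_per_px = height_m / focal_px
    return shift_frame_px * metres_per_px / dt_s

# Example (illustrative numbers): a 60-pixel shift, 300 m baseline, and
# 1500-pixel focal length give a cloud height of 7.5 km.
h = cloud_height_from_shift(60, 300.0, 1500.0)
```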
Jupiter's Northern Hemisphere in False Color (Time Set 2)
NASA Technical Reports Server (NTRS)
1997-01-01
Mosaic of Jupiter's northern hemisphere between 10 and 50 degrees latitude. Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the color and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including large white ovals, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two interacting vortices in the upper half of the mosaic is about 3500 kilometers.
This mosaic uses the Galileo imaging camera's three near-infrared wavelengths (756 nanometers, 727 nanometers, and 889 nanometers displayed in red, green, and blue) to show variations in cloud height and thickness. Light blue clouds are high and thin, reddish clouds are deep, and white clouds are high and thick. The clouds and haze over the ovals are high, extending into Jupiter's stratosphere. Dark purple most likely represents a high haze overlying a clear deep atmosphere. Galileo is the first spacecraft to distinguish cloud layers on Jupiter. North is at the top. The images are projected on a sphere, with features being foreshortened towards the north. The smallest resolved features are tens of kilometers in size. These images were taken on April 3, 1997, at a range of 1.4 million kilometers by the Solid State Imaging system (CCD) on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious accuracy advantages over the traditional multi-feature combined cloud detection method, especially in snow-covered regions and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.
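The abstract does not spell out the MSCN architecture, so the sketch below is only a generic multi-scale feature-fusion segmentation block in PyTorch, intended to illustrate the idea of fusing convolutional features computed at several receptive-field scales; the layer sizes, dilation rates, and the toy network around the block are all assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Toy multi-scale block: parallel 3x3 convolutions with different
    dilation rates are concatenated and fused with a 1x1 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(b(x)) for b in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

class ToyCloudNet(nn.Module):
    """Minimal per-pixel cloud/clear classifier for 4-band (VIS + NIR) input."""
    def __init__(self, bands=4):
        super().__init__()
        self.block1 = MultiScaleFusionBlock(bands, 32)
        self.block2 = MultiScaleFusionBlock(32, 32)
        self.head = nn.Conv2d(32, 1, 1)   # cloud-probability logit per pixel

    def forward(self, x):
        return self.head(self.block2(self.block1(x)))

# e.g. mask = torch.sigmoid(ToyCloudNet()(torch.rand(1, 4, 256, 256))) > 0.5
```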
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of a directly geo-referenced, image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
Winter sky brightness and cloud cover at Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Moore, Anna M.; Yang, Yi; Fu, Jianning; Ashley, Michael C. B.; Cui, Xiangqun; Feng, Long Long; Gong, Xuefei; Hu, Zhongwen; Lawrence, Jon S.; Luong-Van, Daniel M.; Riddle, Reed; Shang, Zhaohui; Sims, Geoff; Storey, John W. V.; Tothill, Nicholas F. H.; Travouillon, Tony; Wang, Lifan; Yang, Huigen; Yang, Ji; Zhou, Xu; Zhu, Zhenxi
2013-01-01
At the summit of the Antarctic plateau, Dome A offers an intriguing location for future large-scale optical astronomical observatories. The Gattini Dome A project was created to measure the optical sky brightness and large-area cloud cover of the winter-time sky above this high-altitude Antarctic site. The wide-field camera and multi-filter system was installed on the PLATO instrument module as part of the Chinese-led traverse to Dome A in January 2008. This automated wide-field camera consists of an Apogee U4000 interline CCD coupled to a Nikon fisheye lens enclosed in a heated container with a glass window. The system contains a filter mechanism providing a suite of standard astronomical photometric filters (Bessell B, V, R) and a long-pass red filter for the detection and monitoring of airglow emission. The system operated continuously throughout the 2009 and 2011 winter seasons and part-way through the 2010 season, recording long-exposure images sequentially for each filter. We have in hand one complete winter-time dataset (2009) returned via a manned traverse. We present here the first measurements of sky brightness in the photometric V band, the cloud cover statistics measured so far, and an estimate of the extinction.
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.
Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija
2017-12-02
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
Cloud Size Distributions from Multi-sensor Observations of Shallow Cumulus Clouds
NASA Astrophysics Data System (ADS)
Kleiss, J.; Riley, E.; Kassianov, E.; Long, C. N.; Riihimaki, L.; Berg, L. K.
2017-12-01
Combined radar-lidar observations have been used for almost two decades to document temporal changes of shallow cumulus clouds at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Facility's Southern Great Plains (SGP) site in Oklahoma, USA. Since the ARM zenith-pointed radars and lidars have a narrow field-of-view (FOV), the documented cloud statistics, such as distributions of cloud chord length (or horizontal length scale), represent only a slice along the wind direction of a region surrounding the SGP site, and thus may not be representative of this region. To investigate this impact, we compare cloud statistics obtained from wide-FOV sky images collected by ground-based observations at the SGP site to those from the narrow-FOV active sensors. The main wide-FOV cloud statistics considered are cloud area distributions of shallow cumulus clouds, which are frequently required to evaluate model performance, such as the routine large eddy simulations (LES) currently being conducted by the ARM LASSO (LES ARM Symbiotic Simulation and Observation) project. We obtain complementary macrophysical properties of shallow cumulus clouds, such as cloud chord length, base height and thickness, from the combined radar-lidar observations. To better understand the broader observational context in which these narrow-FOV cloud statistics occur, we compare them to collocated and coincident cloud area distributions from wide-FOV sky images and high-resolution satellite images. We discuss the comparison results and illustrate the possibility of generating a long-term climatology of cloud size distributions from multi-sensor observations at the SGP site.
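For the narrow-FOV side of such a comparison, cloud chord lengths are commonly derived by converting the duration of cloudy intervals in a zenith-pointing time series to a length using the cloud-level wind speed. A minimal sketch of that conversion, assuming a boolean cloud flag sampled at a fixed interval and a constant advection speed (both hypothetical inputs):

```python
import numpy as np

def cloud_chord_lengths(cloud_flag, wind_speed, dt):
    """Chord lengths (m) from a zenith-pointing cloud mask time series.
    cloud_flag: boolean array (True = cloud overhead), sampled every dt seconds;
    wind_speed: advection speed (m/s) at cloud level, assumed constant here."""
    flag = np.asarray(cloud_flag, dtype=int)
    edges = np.diff(np.concatenate(([0], flag, [0])))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    durations = (ends - starts) * dt          # seconds of continuous cloud
    return durations * wind_speed             # chord length in metres

# Example: 1-s sampling, 8 m/s wind, random synthetic mask
lengths = cloud_chord_lengths(np.random.rand(3600) > 0.7, wind_speed=8.0, dt=1.0)
hist, bin_edges = np.histogram(lengths, bins=np.logspace(1, 4, 20))
```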
Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny
2017-01-01
The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. A high standard of hardware and software is demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm2 and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
All-sky photogrammetry techniques to georeference a cloud field
NASA Astrophysics Data System (ADS)
Crispel, Pierre; Roberts, Gregory
2018-01-01
In this study, we present a novel method of identifying and geolocalizing cloud field elements from a portable all-sky camera stereo network based on the ground and oriented towards zenith. The methodology is mainly based on stereophotogrammetry, a 3-D reconstruction technique based on triangulation from corresponding stereo pixels in rectified images. In cases where clouds are horizontally separated, individual positions are identified with segmentation techniques based on hue filtering and contour detection algorithms. Macroscopic cloud field characteristics such as cloud layer base heights and velocity fields are also deduced. In addition, the methodology is tailored to the context of measurement campaigns, which imposes simplicity of implementation, auto-calibration, and portability. Camera internal geometry models are established a priori in the laboratory and validated to ensure a certain accuracy in the peripheral parts of the all-sky image. Stereophotogrammetry with dense 3-D reconstruction is then applied with cameras spaced 150 m apart for two validation cases. The first validation case is carried out with cumulus clouds having a cloud base height at 1500 m a.g.l. The second validation case is carried out with two cloud layers: a cumulus fractus layer with a base height at 1000 m a.g.l. and an altocumulus stratiformis layer with a base height of 2300 m a.g.l. Velocity fields at cloud base are computed by tracking rectangular image patterns through successive shots. The height uncertainty is estimated by comparison with a Vaisala CL31 ceilometer located on the site. The uncertainties in the horizontal coordinates and in the velocity field are theoretically quantified using the experimental uncertainties of the cloud base height and camera orientation. In the first cumulus case, segmentation of the image is performed to identify individual clouds in the cloud field and determine the horizontal positions of the cloud centers.
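As a much-simplified illustration of the triangulation step, the sketch below uses a pinhole, parallel-axis approximation to convert a measured pixel disparity into a cloud base height; real all-sky (fisheye) geometry requires the calibrated camera model described in the paper, and the focal length and disparity values here are hypothetical.

```python
import numpy as np

def cloud_base_height(disparity_px, baseline_m, focal_px):
    """Simplified pinhole stereo: height above the camera plane for
    zenith-viewing, parallel-axis cameras.
    disparity_px: pixel shift of the same cloud element between the rectified
    left and right frames; focal_px: focal length expressed in pixels."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Cameras 150 m apart, assumed focal length 1200 px, measured disparity 120 px:
print(cloud_base_height(120.0, baseline_m=150.0, focal_px=1200.0))  # -> 1500 m
```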
NASA Astrophysics Data System (ADS)
Davis, A. B.; Bal, G.; Chen, J.
2015-12-01
Operational remote sensing of microphysical and optical cloud properties is invariably predicated on the assumption of plane-parallel slab geometry for the targeted cloud. The sole benefit of this often-questionable assumption about the cloud is that it leads to one-dimensional (1D) radiative transfer (RT)---a textbook, computationally tractable model. We present new results as evidence that, thanks to converging advances in 3D RT, inverse problem theory, algorithm implementation, and computer hardware, we are at the dawn of a new era in cloud remote sensing where we can finally go beyond the plane-parallel paradigm. Granted, the plane-parallel/1D RT assumption is reasonable for spatially extended stratiform cloud layers, as well as for smoothly distributed background aerosol layers. However, these 1D RT-friendly scenarios exclude cases that are critically important for climate physics. 1D RT---whence operational cloud remote sensing---fails catastrophically for cumuliform clouds that have fully 3D outer shapes and internal structures driven by shallow or deep convection. For these situations, the first order of business in a robust characterization by remote sensing is to abandon the slab geometry framework and determine the 3D geometry of the cloud, as a first step toward bona fide 3D cloud tomography. With this specific goal in mind, we deliver a proof-of-concept for an entirely new kind of remote sensing applicable to 3D clouds. It is based on highly simplified 3D RT and exploits multi-angular suites of cloud images at high spatial resolution. Airborne sensors like AirMSPI readily acquire such data. The key element of the reconstruction algorithm is a sophisticated solution of the nonlinear inverse problem via linearization of the forward model and an iteration scheme supported, where necessary, by adaptive regularization. Currently, the demo uses a 2D setting to show how either vertical profiles or horizontal slices of the cloud can be accurately reconstructed. Extension to 3D volumes is straightforward, but the next challenge is to accommodate images at lower spatial resolution, e.g., from MISR/Terra. G. Bal, J. Chen, and A.B. Davis (2015). Reconstruction of cloud geometry from multi-angle images, Inverse Problems in Imaging (submitted).
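The abstract only outlines the inversion strategy, so the sketch below shows the generic linearize-and-iterate scheme it alludes to: a Gauss-Newton update with Tikhonov regularization and a finite-difference Jacobian around an arbitrary forward model. It is not the authors' algorithm; the regularization weight, step size and iteration count are placeholders.

```python
import numpy as np

def gauss_newton_tikhonov(forward, x0, y_obs, lam=1e-2, n_iter=20, eps=1e-6):
    """Generic linearize-and-iterate inversion:
    minimize ||y_obs - forward(x)||^2 + lam * ||x||^2.
    The Jacobian of `forward` is approximated by central finite differences."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        r = y_obs - forward(x)                      # current residual
        J = np.empty((r.size, x.size))
        for j in range(x.size):                     # finite-difference Jacobian
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
        A = J.T @ J + lam * np.eye(x.size)          # regularized normal equations
        x += np.linalg.solve(A, J.T @ r - lam * x)
        # lam could be adapted per iteration (e.g. reduced as the fit improves)
    return x
```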
Neural network cloud top pressure and height for MODIS
NASA Astrophysics Data System (ADS)
Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara
2018-06-01
Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS; therefore, several different neural networks are investigated to test how infrared channel selection influences retrieval performance. A network with only the channels available on the AVHRR1 instrument is also trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistical measures are presented, and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions, and the IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE, IQR, RMSE (root mean square error), SD, mode, median, bias, and the percentage of absolute errors above 0.25, 0.5, 1 and 2 km) the neural networks perform better than the reference algorithms, validated with both CALIOP and CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32 % (or 623 m) lower MAE compared to the two operational reference algorithms when validating with CALIOP height. Validation with CPR (CloudSat) height gives at least a 25 % (or 430 m) reduction of MAE.
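A short sketch of the descriptive statistics discussed above, computed for a retrieved-minus-reference height error sample; the histogram-based mode estimate and the error thresholds mirror the quantities named in the abstract, while the bin count is an arbitrary choice.

```python
import numpy as np

def error_stats(retrieved, reference, thresholds=(0.25, 0.5, 1.0, 2.0)):
    """Descriptive statistics for cloud-top-height errors (km), including the
    robust measures (median, mode, MAE, IQR) recommended for non-Gaussian
    error distributions."""
    err = np.asarray(retrieved) - np.asarray(reference)
    q25, q75 = np.percentile(err, [25, 75])
    counts, edges = np.histogram(err, bins=100)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    stats = {
        "bias": err.mean(), "SD": err.std(), "RMSE": np.sqrt((err ** 2).mean()),
        "MAE": np.abs(err).mean(), "median": np.median(err),
        "mode": mode, "IQR": q75 - q25,
    }
    for t in thresholds:
        stats[f"frac|err|>{t}km"] = np.mean(np.abs(err) > t)
    return stats
```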
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial use such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying a 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
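For the registration step, the translation, rotation, and scale between two point clouds with known keypoint correspondences can be estimated in closed form. The sketch below uses Umeyama's least-squares similarity transform as a stand-in for the thesis's initial-alignment stage (the actual feature matching and refinement steps are not reproduced here).

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t mapping src -> dst
    (Umeyama's method) for two sets of corresponding 3D keypoints.
    Returns the 4x4 homogeneous transform."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))   # cross-covariance SVD
    E = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                    # avoid reflections
        E[2, 2] = -1
    R = U @ E @ Vt
    s = np.trace(np.diag(sig) @ E) / (S ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = s * R, t
    return T
```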
HUBBLE SPOTS NORTHERN HEMISPHERIC CLOUDS ON URANUS
NASA Technical Reports Server (NTRS)
2002-01-01
Using visible light, astronomers for the first time this century have detected clouds in the northern hemisphere of Uranus. The newest images, taken July 31 and Aug. 1, 1997 with NASA Hubble Space Telescope's Wide Field and Planetary Camera 2, show banded structure and multiple clouds. Using these images, Dr. Heidi Hammel (Massachusetts Institute of Technology) and colleagues Wes Lockwood (Lowell Observatory) and Kathy Rages (NASA Ames Research Center) plan to measure the wind speeds in the northern hemisphere for the first time. Uranus is sometimes called the 'sideways' planet, because its rotation axis is tipped more than 90 degrees from the planet's orbit around the Sun. The 'year' on Uranus lasts 84 Earth years, which creates extremely long seasons - winter in the northern hemisphere has lasted for nearly 20 years. Uranus has also been called bland and boring, because no clouds have been detectable in ground-based images of the planet. Even to the cameras of the Voyager spacecraft in 1986, Uranus presented a nearly uniform blank disk, and discrete clouds were detectable only in the southern hemisphere. Voyager flew over the planet's cloud tops near the dead of northern winter (when the northern hemisphere was completely shrouded in darkness). Spring has finally come to the northern hemisphere of Uranus. The newest images, both the visible-wavelength ones described here and those taken a few days earlier with the Near Infrared and Multi-Object Spectrometer (NICMOS) by Erich Karkoschka (University of Arizona), show a planet with banded structure and detectable clouds. Two images are shown here. The 'aqua' image (on the left) is taken at 5,470 Angstroms, which is near the human eye's peak response to wavelength. Color has been added to the image to show what a person on a spacecraft near Uranus might see. Little structure is evident at this wavelength, though with image-processing techniques, a small cloud can be seen near the planet's northern limb (rightmost edge). The 'red' image (on the right) is taken at 6,190 Angstroms, and is sensitive to absorption by methane molecules in the planet's atmosphere. The banded structure of Uranus is evident, and the small cloud near the northern limb is now visible. Scientists are expecting that the discrete clouds and banded structure may become even more pronounced as Uranus continues in its slow pace around the Sun. 'Some parts of Uranus haven't seen the Sun in decades,' says Dr. Hammel, 'and historical records suggest that we may see the development of more banded structure and patchy clouds as the planet's year progresses.' Some scientists have speculated that the winds of Uranus are not symmetric around the planet's equator, but no clouds were visible to test those theories. The new data will provide the opportunity to measure the northern winds. Hammel and colleagues expect to have results soon. Credits: Heidi Hammel (Massachusetts Institute of Technology), and NASA.
2015-04-22
This simulated image shows how a cloud of glitter in geostationary orbit would be illuminated and controlled by two laser beams. As the cloud orbits Earth, grains scatter the sun's light at different angles like many tiny prisms, similar to how rainbows are produced from light being dispersed by water droplets. That is why the project concept is called "Orbiting Rainbows." The cloud functions like a reflective surface, allowing the exoplanet (displayed in the bottom right) to be imaged. The orbit path is shown in the top right. On the bottom left, Earth's image is seen behind the cloud. To image an exoplanet, the cloud would need to have a diameter of nearly 98 feet (30 meters). This simulation confines the cloud to a 3.3 x 3.3 x 3.3 foot volume (1 x 1 x 1 meter volume) to simplify the computations. The elements of the orbiting telescope are not to scale. Orbiting Rainbows is currently in Phase II development through the NASA Innovative Advanced Concepts (NIAC) Program. It was one of five technology proposals chosen for continued study in 2014. In the current phase, Orbiting Rainbows researchers are conducting small-scale ground experiments to demonstrate how granular materials can be manipulated using lasers and simulations of how the imaging system would behave in orbit. http://photojournal.jpl.nasa.gov/catalog/PIA19318
Foolad, Negar; Ornelas, Jennifer N; Clark, Ashley K; Ali, Ifrah; Sharon, Victoria R; Al Mubarak, Luluah; Lopez, Andrés; Alikhan, Ali; Al Dabagh, Bishr; Firooz, Alireza; Awasthi, Smita; Liu, Yu; Li, Chin-Shang; Sivamani, Raja K
2017-09-01
Cloud-based image sharing technology allows images to be shared easily. It has not been well studied for acne assessment or treatment preferences among international evaluators. We evaluated the inter-rater variability of acne grading and treatment recommendations among an international group of dermatologists who assessed photographs. This is a prospective, single-visit photographic study to assess the inter-rater agreement of acne photographs shared through an integrated mobile-device, cloud-based, and HIPAA-compliant platform. Inter-rater agreement for global acne assessment and acne lesion counts was evaluated by Kendall's coefficient of concordance, while correlations between treatment recommendations and acne severity were calculated by Spearman's rank correlation coefficient. There was good agreement for the evaluation of inflammatory lesions (KCC = 0.62, P < 0.0001), noninflammatory lesions (KCC = 0.62, P < 0.0001), and the global acne grading system score (KCC = 0.69, P < 0.0001). Topical retinoid, oral antibiotic, and isotretinoin treatment preferences correlated with photograph-based acne severity. Our study supports the use of mobile-phone-based photography and cloud-based image sharing for acne assessment. Cloud-based sharing may facilitate acne care and research among international collaborators. © 2017 The International Society of Dermatology.
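A minimal sketch of the two agreement measures named above, Kendall's coefficient of concordance (W) and Spearman's rank correlation, using SciPy; the tie correction for W is omitted, and the rater scores and treatment labels are synthetic.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def kendalls_w(scores):
    """Kendall's coefficient of concordance for an (m raters x n subjects)
    matrix of scores (e.g. lesion counts per photograph); no tie correction."""
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    ranks = np.vstack([rankdata(row) for row in scores])   # rank within each rater
    R = ranks.sum(axis=0)                                  # rank sums per subject
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (m ** 2 * (n ** 3 - n))

# Agreement of 5 synthetic raters on 12 photographs, plus the correlation of
# mean severity with a synthetic 0/1 isotretinoin recommendation:
rng = np.random.default_rng(0)
severity = rng.integers(0, 5, size=(5, 12))
print(kendalls_w(severity))
rho, p = spearmanr(severity.mean(axis=0), (severity.mean(axis=0) > 2).astype(int))
```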
Lightweight Electronic Camera for Research on Clouds
NASA Technical Reports Server (NTRS)
Lawson, Paul
2006-01-01
"Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3- m-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.
Tropical Storm Ernesto over Cuba
2006-08-28
This infrared image shows Tropical Storm Ernesto over Cuba, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite in August 2006. Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument receives the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). http://photojournal.jpl.nasa.gov/catalog/PIA00510
Typhoon Ioke in the Western Pacific
2006-08-29
This infrared image shows Typhoon Ioke in the Western Pacific, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite in August 2006. Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument receives the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). http://photojournal.jpl.nasa.gov/catalog/PIA00511
Hurricane Ileana in the Eastern Pacific
2006-08-22
This is an infrared image of Hurricane Ileana in the Eastern Pacific, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite on August 22, 2006. This AIRS image shows the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. The infrared signal does not penetrate through clouds. Where there are no clouds the AIRS instrument reads the infrared signal from the surface of the Earth, revealing warmer temperatures (red). http://photojournal.jpl.nasa.gov/catalog/PIA00509
Hole punch clouds over the Bahamas
2017-12-08
In elementary school, students learn that water freezes at 0 degrees Celsius (32 degrees Fahrenheit). That is true most of the time, but there are exceptions to the rule. For instance, water with very few impurities (such as dust or pollution particles, fungal spores, bacteria) can be chilled to much cooler temperatures and still remain liquid—a process known as supercooling. Supercooling may sound exotic, but it occurs pretty routinely in Earth's atmosphere. Altocumulus clouds, a common type of mid-altitude cloud, are mostly composed of water droplets supercooled to a temperature of about -15 degrees C. Altocumulus clouds with supercooled tops cover about 8 percent of Earth's surface at any given time. Supercooled water droplets play a key role in the formation of hole-punch and canal clouds, the distinctive clouds shown in these satellite images. Hole-punch clouds usually appear as circular gaps in decks of altocumulus clouds; canal clouds look similar but the gaps are longer and thinner. This true-color image shows hole-punch and canal clouds off the coast of Florida, as observed on December 12, 2014, by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite. Both types of cloud form when aircraft fly through cloud decks rich with supercooled water droplets and produce aerodynamic contrails. Air expands and cools as it moves around the wings and past the propeller, a process known as adiabatic cooling. Air temperatures over jet wings often cool by as much as 20 degrees Celsius, pushing supercooled water droplets to the point of freezing. As ice crystals form, they absorb nearby water droplets. Since ice crystals are relatively heavy, they tend to sink. This triggers tiny bursts of snow or rain that leave gaps in the cloud cover. Whether a cloud formation becomes a hole-punch or canal depends on the thickness of the cloud layer, the air temperature, and the degree of horizontal wind shear. Both descending and ascending aircraft—including jets and propeller planes—can trigger hole-punch and canal clouds. The nearest major airports in the images above include Miami International, Fort Lauderdale International, Grand Bahama International, and Palm Beach International. Credit: NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team
Measurement and reconstruction of the leaflet geometry for a pericardial artificial heart valve.
Jiang, Hongjun; Campbell, Gord; Xi, Fengfeng
2005-03-01
This paper describes the measurement and reconstruction of the leaflet geometry for a pericardial heart valve. The tasks involved include mapping the leaflet geometries by laser digitizing and reconstructing the 3D freeform leaflet surface based on a laser-scanned profile. The challenge is to design a prosthetic valve that maximizes the benefits offered to the recipient as compared to the normally operating, naturally occurring valve. This research was prompted by the fact that artificial heart valve bioprostheses do not provide long-life durability comparable to the natural heart valve, together with the anticipated benefits associated with defining the valve geometries, especially the leaflet geometries for the bioprosthetic and human valves, in order to create a replicate valve fabricated from synthetic materials. Our method applies the concept of reverse engineering in order to reconstruct the freeform surface geometry. A Brown & Sharpe coordinate measuring machine (CMM) equipped with a HyMARC laser-digitizing system was used to measure the leaflet profiles of a Baxter Carpentier-Edwards pericardial heart valve. The computer software PolyWorks was used to pre-process the raw data obtained from the scanning, which included merging images, eliminating duplicate points, and adding interpolated points. Three methods are presented in this paper to reconstruct the freeform leaflet surface: creating a mesh model from cloud points, creating a freeform surface from cloud points, and generating a freeform surface with B-splines. The mesh model created using PolyWorks can be used for rapid prototyping and visualization. Fitting a freeform surface to cloud points is straightforward, but the rendering of a smooth surface is usually unpredictable. A surface fitted to the cloud points by a group of B-splines was found to be much smoother. This method offers the possibility of manually adjusting the surface curvature locally. However, the process is complex and requires additional manipulation. Finally, this paper presents a reverse-engineered design for the pericardial heart valve which contains three identical leaflets with reconstructed geometry.
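As an illustration of the B-spline surface option, the sketch below fits a smoothing bicubic B-spline surface to scattered 3D points with SciPy; the dome-shaped synthetic "leaflet" data and the smoothing factor are placeholders, not the paper's measured cloud points.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Synthetic scattered surface points (x, y, z): shallow dome plus noise
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
z = 0.3 * (x ** 2 + y ** 2) + rng.normal(0, 0.005, 500)

# Bicubic smoothing B-spline; the smoothing factor s trades fidelity
# against smoothness (the "local curvature adjustment" of the paper's method)
surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=0.05)
zi = surf(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))  # evaluate on a grid
```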
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
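A toy sketch of the region-of-interest idea described above: the tissue is separated from the bright background with a simple intensity threshold, the tissue part is written losslessly and the near-empty background lossily. The threshold value, file formats, and JPEG quality are illustrative choices only, not the paper's method.

```python
import numpy as np
from PIL import Image

def roi_compress(slide_rgb, out_prefix, tissue_thresh=220):
    """Split a pathology tile into tissue and background; store the tissue
    region losslessly (PNG) and the nearly-empty background lossily (JPEG)."""
    arr = np.asarray(slide_rgb)
    gray = arr.mean(axis=2)
    tissue_mask = gray < tissue_thresh            # bright pixels ~ empty glass
    tissue = arr.copy()
    tissue[~tissue_mask] = 255                    # blank out background
    background = arr.copy()
    background[tissue_mask] = 255                 # blank out tissue
    Image.fromarray(tissue).save(f"{out_prefix}_tissue.png")              # lossless
    Image.fromarray(background).save(f"{out_prefix}_bg.jpg", quality=20)  # lossy
    # the mask is also stored so the two parts can be recombined later
    Image.fromarray(tissue_mask.astype(np.uint8) * 255).save(f"{out_prefix}_mask.png")

# roi_compress(Image.open("tile.png").convert("RGB"), "tile")
```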
2018-04-06
See intricate cloud patterns in the northern hemisphere of Jupiter in this new view taken by NASA's Juno spacecraft. The color-enhanced image was taken on April 1, 2018 at 2:32 a.m. PST (5:32 a.m. EST), as Juno performed its twelfth close flyby of Jupiter. At the time the image was taken, the spacecraft was about 7,659 miles (12,326 kilometers) from the tops of the clouds of the planet at a northern latitude of 50.2 degrees. Citizen scientist Kevin M. Gill processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21984
Jupiter's Swirling Cloud Formations
2018-02-15
See swirling cloud formations in the northern area of Jupiter's north temperate belt in this new view taken by NASA's Juno spacecraft. The color-enhanced image was taken on Feb. 7 at 5:42 a.m. PST (8:42 a.m. EST), as Juno performed its eleventh close flyby of Jupiter. At the time the image was taken, the spacecraft was about 5,086 miles (8,186 kilometers) from the tops of the clouds of the planet at a latitude of 39.9 degrees. Citizen scientist Kevin M. Gill processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21978
Core Facility of the Juelich Observatory for Cloud Evolution (JOYCE - CF)
NASA Astrophysics Data System (ADS)
Beer, J.; Troemel, S.
2017-12-01
A holistic multi-sensor monitoring of clouds and precipitation processes is a challenging but promising task for the meteorological community. Instrument synergies offer detailed views into the microphysical and dynamical development of clouds. Since 2017 the Juelich Observatory for Cloud Evolution (JOYCE) has been transformed into a Core Facility (JOYCE - CF). JOYCE - CF offers multiple long-term remote sensing observations of the atmosphere, develops easy access to all observations, and invites scientists worldwide to exploit the existing database for their research and also to complement JOYCE - CF with additional long-term or campaign instrumentation. The major instrumentation comprises a twin set of polarimetric X-band radars, a microwave profiler, two cloud radars, an infrared spectrometer, a Doppler lidar and two ceilometers. JOYCE - CF offers easy and open access to the database and to high-quality calibrated observations from all instruments. For example, the two polarimetric X-band radars, which are located 50 km apart, are calibrated using the self-consistency method, frequently repeated vertically pointing measurements, and instrument synergy with co-located micro rain radar and disdrometer measurements. The presentation gives insights into the calibration procedures, the standardized operation procedures, and recent synergistic research exploiting our radars operating at three different frequencies.
Vorstenbosch, Joshua; Islur, Avi
2017-06-01
Breast augmentation is among the most frequently performed cosmetic plastic surgeries. Providing patients with "realistic" 3D simulations of breast augmentation outcomes is becoming increasingly common. Until recently, such programs were costly and required significant equipment, training, and office space. New, simple, user-friendly cloud-based programs have been developed, but to date there remains a paucity of objective evidence comparing these 3D simulations with post-operative outcomes. The objective was to determine the aesthetic similarity between pre-operative 3D simulations generated by Crisalix and real post-operative outcomes. A retrospective review of 20 patients receiving bilateral breast augmentation was conducted, comparing 6-month post-operative outcomes with 3D simulations using Crisalix software. Similarities between post-operative and simulated images were rated by three attending plastic surgeons and ten plastic surgery residents using a series of parameters. The assessment reveals similarity between the 3D simulation and 6-month post-operative images for overall appearance, breast height, breast width, breast volume, breast projection, and nipple correction. Crisalix software generated more representative simulations for symmetric breasts than for tuberous or ptotic breasts. Comparison of the overall aesthetic outcome to the simulation showed that the post-operative outcome was more appealing for the symmetric and tuberous breasts and less appealing for the ptotic breasts. Our data suggest that Crisalix offers a good overall 3D simulated image of post-operative breast augmentation outcomes. Improvements to the simulation of the post-operative outcomes for ptotic and tuberous breasts would result in greater predictive capabilities of Crisalix. Collectively, Crisalix offers good predictive simulations for symmetric breasts.
NASA Astrophysics Data System (ADS)
Liu, Y.; McDonough MacKenzie, C.; Primack, R.; Zhang, X.; Schaaf, C.; Sun, Q.; Wang, Z.
2015-12-01
Monitoring phenology with remotely sensed data has become standard practice in large-plot agriculture but remains an area of research in complex terrain. Landsat data (30 m) provide a more appropriate spatial resolution to describe such regions but may only capture a few cloud-free images over a growing period. Daily data from the MODerate resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) offer better temporal acquisitions but at coarse spatial resolutions of 250 m to 1 km. Thus fused data sets are being employed to provide the temporal and spatial resolutions necessary to accurately monitor vegetation phenology. This study, focused on Acadia National Park, Maine, attempts to compare green-up from remote sensing and ground observations over varying topography. Three north-south field transects were established in 2013 on parallel mountains. Along these transects, researchers record the leaf-out and flowering phenology for thirty plant species biweekly. These in situ spring phenological observations are compared with the dates detected by Landsat 7, Landsat 8, MODIS, and VIIRS observations, both separately and as fused data, to explore the ability of remotely sensed data to capture the subtle variations due to elevation. Daily Nadir BRDF-Adjusted Reflectances (NBAR) from MODIS and VIIRS are fused with Landsat imagery to simulate 30 m daily data via the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) algorithm. Piecewise logistic functions are fit to the time series to establish spring leaf-out dates. Acadia National Park, a region frequently affected by coastal clouds, is a particularly useful study area as it falls in a Landsat overlap region and thus offers the possibility of acquiring as many as 4 Landsat observations in a 16-day period. With the recent launch of Sentinel-2A, the community will have routine access to such high spatial and temporal resolution data for phenological monitoring.
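The sketch below shows the kind of logistic fit mentioned above applied to a single pixel's vegetation-index time series, with the green-up date taken as the fitted inflection point; the NDVI series here is synthetic, and the paper's piecewise formulation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, vmin, vmax, k, t0):
    """Spring green-up curve: vegetation index rising from vmin to vmax,
    with the inflection (50% green-up) at day-of-year t0."""
    return vmin + (vmax - vmin) / (1.0 + np.exp(-k * (t - t0)))

# Synthetic fused 30 m daily NDVI time series for one pixel
doy = np.arange(60, 200)
ndvi = logistic(doy, 0.2, 0.8, 0.15, 135) + np.random.normal(0, 0.02, doy.size)

popt, _ = curve_fit(logistic, doy, ndvi, p0=(0.2, 0.8, 0.1, 130))
greenup_doy = popt[3]       # estimated spring leaf-out date (inflection point)
```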
E4 True and False Color Hot Spot Mosaic
1998-03-06
True and false color views of Jupiter from NASA's Galileo spacecraft show an equatorial "hotspot" on Jupiter. These images cover an area 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles). The top mosaic combines the violet and near infrared continuum filter images to create an image similar to how Jupiter would appear to human eyes. Differences in coloration are due to the composition and abundances of trace chemicals in Jupiter's atmosphere. The bottom mosaic uses Galileo's three near-infrared wavelengths displayed in red, green, and blue) to show variations in cloud height and thickness. Bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the deep cloud with an overlying thin haze. The light blue region to the left is covered by a very high haze layer. The multicolored region to the right has overlapping cloud layers of different heights. Galileo is the first spacecraft to distinguish cloud layers on Jupiter. North is at the top. The mosaic covers latitudes 1 to 10 degrees and is centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging camera system aboard Galileo. http://photojournal.jpl.nasa.gov/catalog/PIA00602
NASA Astrophysics Data System (ADS)
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Convective-type clouds in particular lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if the latter is used as representative of the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear-sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
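Forecast skill relative to persistence, as used above, is typically expressed through the RMSE ratio. A minimal sketch with synthetic GHI series (units and noise levels are placeholders):

```python
import numpy as np

def forecast_skill(ghi_obs, ghi_forecast, ghi_persistence):
    """Skill score of a sky-imager GHI forecast relative to persistence:
    skill = 1 - RMSE_forecast / RMSE_persistence (positive = better)."""
    rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return 1.0 - rmse(ghi_obs, ghi_forecast) / rmse(ghi_obs, ghi_persistence)

# Example with synthetic 15 s series (W/m^2)
obs = 600 + 150 * np.random.randn(400)
print(forecast_skill(obs, obs + 60 * np.random.randn(400),
                     obs + 90 * np.random.randn(400)))
```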
Hurricane Hector in the Eastern Pacific
2006-08-17
Infrared, microwave, and visible/near-infrared images of Hurricane Hector in the eastern Pacific were created with data from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite on August 17, 2006. The infrared AIRS image shows the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the hurricane. The infrared signal does not penetrate through clouds. Where there are no clouds the AIRS instrument reads the infrared signal from the surface of the Earth, revealing warmer temperatures (red). At the time the data were taken from which these images were made, Hector was a well-organized storm, with the strongest convection in the SE quadrant. The increasing vertical wind shear in the NW quadrant appeared to be having an effect. Maximum sustained winds were 85 kt, with gusts to 105 kt. The estimated minimum central pressure was 975 mbar. The microwave image is created from microwave radiation emitted by Earth's atmosphere and received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through. The "visible" image is created from data acquired by the visible light/near-infrared sensor on the AIRS instrument. http://photojournal.jpl.nasa.gov/catalog/PIA00507
Featured Image: A Molecular Cloud Outside Our Galaxy
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2018-06-01
What do molecular clouds look like outside of our own galaxy? See for yourself in the images above and below of N55, a molecular cloud located in the Large Magellanic Cloud (LMC). In a recent study led by Naslim Neelamkodan (Academia Sinica Institute of Astronomy and Astrophysics, Taiwan), a team of scientists explore N55 to determine how its cloud properties differ from clouds within the Milky Way. The image above reveals the distribution of infrared-emitting gas and dust observed in three bands by the Spitzer Space Telescope. Overplotted in cyan are observations from the Atacama Submillimeter Telescope Experiment tracing the clumpy, warm molecular gas. Below, new observations from the Atacama Large Millimeter/submillimeter Array (ALMA) reveal the sub-parsec-scale molecular clumps in greater detail, showing the correlation of massive clumps with Spitzer-identified young stellar objects (crosses). The study presented here indicates that this cloud in the LMC is the site of massive star formation, with properties similar to equivalent clouds in the Milky Way. To learn more about the authors' findings, check out the article linked below. Citation: Naslim N. et al 2018 ApJ 853 175. doi:10.3847/1538-4357/aaa5b0
Verifying Air Force Weather Passive Satellite Derived Cloud Analysis Products
NASA Astrophysics Data System (ADS)
Nobis, T. E.
2017-12-01
Air Force Weather (AFW) has developed an hourly World-Wide Merged Cloud Analysis (WWMCA) using imager data from 16 geostationary and polar-orbiting satellites. The analysis product contains information on cloud fraction, height, type and various optical properties including optical depth and integrated water path. All of these products are derived using a suite of algorithms which rely exclusively on passively sensed data from short-, mid- and long-wave imager channels. The system integrates satellites with a wide range of capabilities, from the relatively simple two-channel OLS imager to the 16-channel ABI/AHI, to create a seamless global analysis in real time. Over the last couple of years, AFW has started utilizing independent verification data from actively sensed cloud measurements to better understand the performance limitations of the WWMCA. Sources utilized include space-based lidars (CALIPSO, CATS) and radar (CloudSat) as well as ground-based lidars from the Department of Energy ARM sites and several European cloud radars. This work will present findings from our efforts to compare actively and passively sensed cloud information, including comparison techniques and limitations, as well as the performance of the passively derived cloud information against the active data.
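Passive-versus-active cloud mask comparisons of this kind usually reduce to contingency-table scores. A small sketch computing probability of detection, false alarm ratio, and hit rate from matched boolean masks (the score definitions are standard; the matching of WWMCA pixels to lidar/radar profiles is not shown):

```python
import numpy as np

def cloud_mask_scores(passive_cloud, active_cloud):
    """Contingency-table verification of a passive (WWMCA-like) cloud mask
    against an active (lidar/radar) reference; both are boolean arrays."""
    p, a = np.asarray(passive_cloud, bool), np.asarray(active_cloud, bool)
    hits = np.sum(p & a)
    misses = np.sum(~p & a)
    false_alarms = np.sum(p & ~a)
    correct_neg = np.sum(~p & ~a)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    hit_rate = (hits + correct_neg) / p.size        # overall accuracy
    return pod, far, hit_rate
```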
A Jovian Hotspot in True and False Colors (Time set 1)
NASA Technical Reports Server (NTRS)
1997-01-01
True and false color views of an equatorial 'hotspot' on Jupiter. These images cover an area 34,000 kilometers by 11,000 kilometers. The top mosaic combines the violet (410 nanometers or nm) and near-infrared continuum (756 nm) filter images to create an image similar to how Jupiter would appear to human eyes. Differences in coloration are due to the composition and abundances of trace chemicals in Jupiter's atmosphere. The bottom mosaic uses Galileo's three near-infrared wavelengths (756 nm, 727 nm, and 889 nm displayed in red, green, and blue) to show variations in cloud height and thickness. Bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the deep cloud with an overlying thin haze. The light blue region to the left is covered by a very high haze layer. The multicolored region to the right has overlapping cloud layers of different heights. Galileo is the first spacecraft to distinguish cloud layers on Jupiter.
North is at the top. The mosaics cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees West. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers by the Solid State Imaging system aboard NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
An efficient cloud detection method for high resolution remote sensing panchromatic imagery
NASA Astrophysics Data System (ADS)
Li, Chaowei; Lin, Zaiping; Deng, Xinpu
2018-04-01
In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract the coarse cloud regions. Second, a guided filtering process is conducted to strengthen the differences in textural features, and texture is then detected via a gray-level co-occurrence matrix computed on the acquired texture-detail image. Finally, the candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and we further adopt an adaptive morphological dilation to refine them, recovering thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
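A simplified sketch of the three-step pipeline follows; note that a local standard deviation is used here as a stand-in for the paper's guided-filter plus GLCM texture step, and the threshold and window sizes are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_clouds(pan, sigma_k=1.0, dilate_iter=2):
    """Simplified three-step scheme: (1) coarse mask from an adaptive intensity
    threshold after median filtering, (2) a texture mask (local standard
    deviation used in place of the paper's guided-filter + GLCM step),
    (3) intersection refined by morphological dilation."""
    pan = np.asarray(pan, dtype=float)
    smooth = ndimage.median_filter(pan, size=5)
    thr = smooth.mean() + sigma_k * smooth.std()        # adaptive threshold
    coarse_intensity = smooth > thr
    local_mean = ndimage.uniform_filter(pan, size=9)
    local_std = np.sqrt(np.maximum(
        ndimage.uniform_filter(pan ** 2, size=9) - local_mean ** 2, 0))
    coarse_texture = local_std < local_std.mean()       # clouds: bright, smooth
    candidates = coarse_intensity & coarse_texture
    return ndimage.binary_dilation(candidates, iterations=dilate_iter)
```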
Atmospheric Science Data Center
2013-04-22
article title: MISR Mystery Image Quiz #21. This mystery concerns a particular type of cloud, one example of which was imaged by the Multi-angle Imaging SpectroRadiometer (MISR).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badosa, Jordi; Calbo, J.; McKenzie, R. L.
2014-07-01
In the present study, we assess the cloud effects on the UV Index (UVI) and total solar radiation (TR) as a function of cloud cover estimates and sunny conditions (from sky imaging products) as well as of solar zenith angle (SZA). These analyses are undertaken for a southern-hemisphere mid-latitude site where a 10-year dataset is available. It is confirmed that clouds reduce TR more than UV, in particular for obscured-Sun conditions, low cloud fraction (< 60%) and large SZA (> 60°). Similarly, clouds enhance TR more than UV, mainly for visible-Sun conditions, large cloud fraction and large SZA. Two methods to estimate UVI are developed: 1) from sky-imaging cloud cover and sunny conditions, and 2) from TR measurements. Both methods may be used in practical operational applications, although Method 2 shows the best overall performance, since TR allows accounting for cloud optical properties. The mean absolute differences of Method 2 estimations with respect to measured values are 0.17 UVI units (for 1-minute data) and 0.79 Standard Erythemal Dose (SED) units (for daily integrations). Method 1 shows less accurate results but is still suitable for estimating UVI: the mean absolute differences are 0.37 UVI units and 1.6 SED.
NASA Technical Reports Server (NTRS)
2002-01-01
Because clouds represent an area of great uncertainty in studies of global climate, scientists are interested in better understanding the processes by which clouds form and change over time. In recent years, scientists have turned their attention to the ways in which human-produced aerosol pollution modifies clouds. One area that has drawn scientists' attention is 'ship tracks,' or clouds that form from the sulfate aerosols released by large ships. Although ships are not significant sources of pollution themselves, they do release enough sulfur dioxide in the exhaust from their smokestacks to modify overlying clouds. Specifically, the aerosol particles formed by the ship exhaust in the atmosphere cause the clouds to be more reflective, carry more water, and possibly inhibit them from precipitating. This is one example of how humans have been creating and modifying clouds for generations through the burning of fossil fuels. This image was acquired over the northern Pacific Ocean by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite, on April 29, 2002. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view from above and to the south of the visualized area, showing the entire model. The entire region is overlain by a thin, transparent haze. In places the haze is high and thick, especially to the east (to the right) of the hotspot.
Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations.
The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25.
The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze.
The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft.
The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of the California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Research on cloud background infrared radiation simulation based on fractal and statistical data
NASA Astrophysics Data System (ADS)
Liu, Xingrun; Xu, Qingshan; Li, Xia; Wu, Kaifeng; Dong, Yanbing
2018-02-01
Cloud is an important natural phenomenon, and its radiation causes serious interference to infrared detectors. Based on fractal geometry and statistical data, a method is proposed to simulate cloud backgrounds, with the cloud infrared radiation field assigned from satellite radiance data of clouds. A cloud infrared radiation simulation model is implemented in MATLAB; it can generate cloud background infrared images for different cloud types (low, middle, and high cloud) in different months, spectral bands and sensor zenith angles.
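A common way to obtain fractal cloud structure, and a plausible building block for a simulator like the one described (shown here only as a hedged sketch in Python rather than MATLAB), is Fourier-domain spectral synthesis with a power-law spectrum; the spectral slope, cover fraction and radiance mapping below are illustrative assumptions.

```python
import numpy as np

def fractal_cloud_field(n=256, beta=3.0, seed=0):
    """Generate an n x n fractal cloud pattern by power-law spectral synthesis.

    beta : spectral slope; values near 3 give cloud-like spatial correlation (illustrative).
    """
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                                   # avoid division by zero at the DC term
    amplitude = k ** (-beta / 2.0)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amplitude * phase).real
    field -= field.min()
    return field / field.max()                      # normalized 0..1 cloud "texture"

def assign_radiance(field, radiance_min, radiance_max, cover=0.6):
    """Map the fractal texture to an infrared radiance image with a given cloud cover."""
    threshold = np.quantile(field, 1.0 - cover)     # pixels above threshold are cloudy
    return np.where(field > threshold,
                    radiance_min + (radiance_max - radiance_min) * field,
                    np.nan)                         # NaN marks clear sky (filled elsewhere)
```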
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Michael J.; Hayes, Daniel J
2014-01-01
Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.
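A per-pixel classifier of the kind SPARCS uses can be sketched with scikit-learn's MLPClassifier; this is only an illustrative stand-in (the band set, network size and the absence of spatial post-processing are assumptions, not the published configuration).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Classes assumed by this sketch: 0 clear, 1 cloud, 2 cloud shadow, 3 water, 4 snow/ice.
def train_pixel_classifier(bands_train, labels_train):
    """bands_train: (n_pixels, n_bands) reflectances; labels_train: (n_pixels,) class ids."""
    clf = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=500, random_state=0)
    clf.fit(bands_train, labels_train)
    return clf

def classify_scene(clf, scene):
    """scene: (rows, cols, n_bands) array; returns a class map and a per-pixel confidence."""
    rows, cols, n_bands = scene.shape
    flat = scene.reshape(-1, n_bands)
    proba = clf.predict_proba(flat)                      # class membership probabilities
    class_map = proba.argmax(axis=1).reshape(rows, cols)
    confidence = proba.max(axis=1).reshape(rows, cols)   # usable as an uncertainty measure
    return class_map, confidence
```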
2006-04-13
Bright, high altitude clouds, like those imaged here, often appear more filamentary or streak-like than clouds imaged at slightly deeper levels in Saturn's atmosphere. This view also shows one of the many cat's-eye vortices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Stephen E.; Huang, Dong; Vladutescu, Daniela Viviana
2017-03-08
This article describes the approach and presents initial results, for a period of several minutes in north central Oklahoma, of an examination of clouds by high resolution digital photography from the surface looking vertically upward. A commercially available camera having 35-mm equivalent focal length up to 1200 mm (nominal resolution as fine as 6 µrad, which corresponds to 9 mm for cloud height 1.5 km) is used to obtain a measure of zenith radiance of a 30 m × 30 m domain as a two-dimensional image consisting of 3456 × 3456 pixels (12 million pixels). Downwelling zenith radiance varies substantially within single images and between successive images obtained at 4-s intervals. Variation in zenith radiance found on scales down to about 10 cm is attributed to variation in cloud optical depth (COD). Attention here is directed primarily to optically thin clouds, COD less than about 2. A radiation transfer model, used to relate downwelling zenith radiance to COD and to relate the counts in the camera image to zenith radiance, permits determination of COD on a pixel-by-pixel basis. COD for thin clouds determined in this way exhibits considerable variation, for example, an order of magnitude within 15 m, a factor of 2 within 4 m, and 25% (0.12 to 0.15) over 14 cm. In conclusion, this approach, which examines cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opens new avenues for examination of cloud structure and evolution.
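The pixel-by-pixel retrieval described above amounts to two mappings: camera counts to zenith radiance, then radiance to COD via a radiative-transfer lookup. The sketch below uses a placeholder linear count-to-radiance calibration and a monotonic radiance-COD table interpolated with NumPy; all numbers are illustrative, not the study's calibration.

```python
import numpy as np

# Assumed calibration and lookup table (placeholders; a real retrieval would use a
# radiative transfer model run for the actual solar zenith angle and surface albedo).
GAIN, OFFSET = 0.002, 0.05                    # counts -> zenith radiance (arbitrary units)
COD_GRID = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
RADIANCE_GRID = np.array([0.06, 0.12, 0.18, 0.28, 0.40])   # modeled zenith radiance per COD

def counts_to_cod(counts):
    """Map raw camera counts to cloud optical depth, pixel by pixel."""
    radiance = GAIN * np.asarray(counts, float) + OFFSET
    # np.interp needs increasing x; valid only while radiance grows with COD (thin clouds).
    return np.interp(radiance, RADIANCE_GRID, COD_GRID)

image_counts = np.random.default_rng(1).integers(40, 160, size=(3456, 3456))
cod_map = counts_to_cod(image_counts)
print(cod_map.mean(), cod_map.std())
```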
1990-02-10
Range : 60,000 miles This image is a false-color version of a near-infrared map of lower-level clouds on the night side of Venus, obtained by the Near Infrared Mapping Spectrometer aboard Galileo. Taken at an infrared wavelength of 2.3 microns (about three times the longest wavelength visible to the human eye), the map shows the turbulent, cloudy middle atmosphere some 30-33 miles above the surface, 6-10 miles below the visible cloudtops. The image shows the radiant heat from the lower atmosphere (about 400 degrees F) shining through the sulfuric acid clouds, which appear as much as 10 times darker than the bright gaps between clouds. The colors indicate relative cloud transparency; white and red show thin cloud regions, while black and blue represent relatively thick clouds. This cloud layer is at about 170 degrees F, at a pressure of about half of Earth's atmospheric pressure. About 2/3 of the dark hemisphere is visible, centered on longitude 350 West, with bright slivers of daylit high clouds visible at top and bottom left. Near the equator, the clouds appear fluffy and blocky; farther north, they are stretched out into East-West filaments by winds estimated at more than 150 mph, while the poles are capped by thick clouds at this altitude. The Near Infrared Mapping Spectrometer (NIMS) on Galileo is a combined mapping (imaging) and spectral instrument. It can sense 408 contiguous wavelengths from 0.7 microns (deep red) to 5.2 microns, and can construct a map or image by mechanical scanning. It can spectroscopically analyze atmospheres and surfaces and construct thermal and chemical maps. Designed and operated by scientists and engineers at JPL, NIMS involves 15 scientists in the US, England and France.
76 FR 17158 - Assumption Buster Workshop: Distributed Data Schemes Provide Security
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
... Schemes Provide Security''. Distributed data architectures, such as cloud computing, offer very attractive... locating your data in the cloud, and by breaking it up and replicating different segments throughout the...
Hubble Spots Northern Hemispheric Clouds on Uranus
NASA Technical Reports Server (NTRS)
1997-01-01
Using visible light, astronomers for the first time this century have detected clouds in the northern hemisphere of Uranus. The newest images, taken July 31 and Aug. 1, 1997 with NASA Hubble Space Telescope's Wide Field and Planetary Camera 2, show banded structure and multiple clouds. Using these images, Dr. Heidi Hammel (Massachusetts Institute of Technology) and colleagues Wes Lockwood (Lowell Observatory) and Kathy Rages (NASA Ames Research Center) plan to measure the wind speeds in the northern hemisphere for the first time.
Uranus is sometimes called the 'sideways' planet, because its rotation axis is tipped more than 90 degrees from the planet's orbit around the Sun. The 'year' on Uranus lasts 84 Earth years, which creates extremely long seasons - winter in the northern hemisphere has lasted for nearly 20 years. Uranus has also been called bland and boring, because no clouds have been detectable in ground-based images of the planet. Even to the cameras of the Voyager spacecraft in 1986, Uranus presented a nearly uniform blank disk, and discrete clouds were detectable only in the southern hemisphere. Voyager flew over the planet's cloud tops near the dead of northern winter (when the northern hemisphere was completely shrouded in darkness).
Spring has finally come to the northern hemisphere of Uranus. The newest images, both the visible-wavelength ones described here and those taken a few days earlier with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) by Erich Karkoschka (University of Arizona), show a planet with banded structure and detectable clouds.
Two images are shown here. The 'aqua' image (on the left) is taken at 5,470 Angstroms, which is near the human eye's peak response to wavelength. Color has been added to the image to show what a person on a spacecraft near Uranus might see. Little structure is evident at this wavelength, though with image-processing techniques, a small cloud can be seen near the planet's northern limb (rightmost edge). The 'red' image (on the right) is taken at 6,190 Angstroms, and is sensitive to absorption by methane molecules in the planet's atmosphere. The banded structure of Uranus is evident, and the small cloud near the northern limb is now visible.
Scientists are expecting that the discrete clouds and banded structure may become even more pronounced as Uranus continues in its slow pace around the Sun. 'Some parts of Uranus haven't seen the Sun in decades,' says Dr. Hammel, 'and historical records suggest that we may see the development of more banded structure and patchy clouds as the planet's year progresses.'
Some scientists have speculated that the winds of Uranus are not symmetric around the planet's equator, but no clouds were visible to test those theories. The new data will provide the opportunity to measure the northern winds. Hammel and colleagues expect to have results soon.
The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
The AIST Managed Cloud Environment
NASA Astrophysics Data System (ADS)
Cook, S.
2016-12-01
ESTO is currently in the process of developing and implementing the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to ESTO-funded PIs to conduct their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA do not have the awareness of requirements or the resources to implement NASA standards into their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs will allow them to conduct their project research using the many benefits of cloud computing. In addition to the well-known cost and time savings, cloud computing also provides scalability and flexibility. The AMCE will facilitate infusion and end user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.
The AMCE (AIST Managed Cloud Environment)
NASA Astrophysics Data System (ADS)
Cook, S.
2017-12-01
ESTO has developed and implemented the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to SMD-funded PIs to conduct their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA do not have the awareness of requirements or the resources to implement NASA standards into their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs allows them to conduct their project research using the many benefits of cloud computing. In addition to the well-known cost and time savings, cloud computing also provides scalability and flexibility. The AMCE facilitates infusion and end user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.
Cirrus cloud retrieval from MSG/SEVIRI during day and night using artificial neural networks
NASA Astrophysics Data System (ADS)
Strandgren, Johan; Bugliaro, Luca
2017-04-01
By covering a large part of the Earth, cirrus clouds play an important role in climate as they reflect incoming solar radiation and absorb outgoing thermal radiation. Nevertheless, cirrus clouds remain one of the largest uncertainties in atmospheric research: the physical processes that govern their life cycle are still poorly understood, as is their representation in climate models. To monitor and better understand the properties and physical processes of cirrus clouds, it is essential that these tenuous clouds can be observed from geostationary spaceborne imagers like SEVIRI (Spinning Enhanced Visible and InfraRed Imager), which offer a high temporal resolution together with a large field of view and, alongside in-situ observations, play an important role in the investigation of cirrus cloud processes. CiPS (Cirrus Properties from SEVIRI) is a new algorithm targeting thin cirrus clouds. CiPS is an artificial neural network trained with coincident SEVIRI and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in order to retrieve a cirrus cloud mask along with the cloud top height (CTH), ice optical thickness (IOT) and ice water path (IWP) from SEVIRI. By utilizing only the thermal/IR channels of SEVIRI, CiPS can be used during day and night, making it a powerful tool for cirrus life cycle analysis. Despite the great challenge of detecting thin cirrus clouds and retrieving their properties from a geostationary imager using only the thermal/IR wavelengths, CiPS performs well. Among the cirrus clouds detected by CALIOP, CiPS detects 70 and 95 % of the clouds with an optical thickness of 0.1 and 1.0, respectively. Among the cirrus-free pixels, CiPS classifies 96 % correctly. For the CTH retrieval, CiPS has a mean absolute percentage error of 10 % or less with respect to CALIOP for cirrus clouds with a CTH greater than 8 km. For the IOT retrieval, CiPS has a mean absolute percentage error of 100 % or less with respect to CALIOP for cirrus clouds with an optical thickness down to 0.07. For such thin cirrus clouds an error of 100 % should be regarded as low for a geostationary imager like SEVIRI. The IWP retrieved by CiPS shows a similar performance, but has larger deviations for the thinner cirrus clouds.
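A retrieval of this type can be approximated with a small regression network mapping SEVIRI thermal brightness temperatures to CALIOP-derived targets; the sketch below uses scikit-learn with assumed array shapes and network sizes, so it only illustrates the training setup, not the CiPS architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

# Assumed inputs: matched samples of SEVIRI thermal brightness temperatures (K) and CALIOP
# reference values. Shapes: bt (n_samples, n_channels), cirrus_flag (n_samples,),
# targets (n_samples, 3) holding CTH (km), log10(IOT), log10(IWP).
def train_cips_like(bt, cirrus_flag, targets):
    mask_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    mask_net.fit(bt, cirrus_flag)                                   # cirrus / cirrus-free mask

    reg_net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    reg_net.fit(bt[cirrus_flag == 1], targets[cirrus_flag == 1])    # properties for cirrus only
    return mask_net, reg_net

def mean_absolute_percentage_error(pred, truth):
    """Metric quoted in the abstract (MAPE, in percent)."""
    return 100.0 * np.mean(np.abs(pred - truth) / np.abs(truth))
```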
Glory, Vortex Street off Baja California
NASA Technical Reports Server (NTRS)
2007-01-01
On June 19, 2007, the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite captured both a vortex street and a glory visible amid the lattice of clouds over the Pacific Ocean off Baja California. In this image, the swirling clouds known as vortex streets appear along the left edge of the image, stretching southward from Isla Guadalupe. Another NASA satellite captured an earlier example of vortex streets in June 2000. These atmospheric vortices, known as von Karman vortex streets, often occur in the wake of an obstacle to air flow, such as an island. Stratocumulus clouds--low-lying sheets of puffy clouds--over the ocean show the impact of the island on air flow, visible through their alternating pattern of clockwise and counter-clockwise swirls. Southeast of the vortex street, a glory, which resembles a rainbow, hovers above the cloud cover. The glory is faint but large, 200 to 300 kilometers long, along a north-south orientation. This phenomenon can occur when the satellite passes directly between the Sun and a bank of clouds below. (People also observe them while looking down on clouds from airplanes.) Not just any kind of cloud can produce a glory; only clouds composed entirely of water droplets (as opposed to ice crystals) can make them. The droplets that form glories generally have diameters of less than 50 micrometers (a micrometer is a millionth of a meter). The water droplets bend the light, revealing its different wavelengths, or colors. In this glory, reds and oranges are most visible. NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center.
ERIC Educational Resources Information Center
Wilson, Michael Jason
2009-01-01
This dissertation studies clouds over the polar regions using the Multi-angle Imaging SpectroRadiometer (MISR) on-board EOS-Terra. Historically, low thin clouds have been problematic for satellite detection, because these clouds have similar brightness and temperature properties to the surface they overlay. However, the oblique angles of MISR…
Supporting reputation based trust management enhancing security layer for cloud service models
NASA Astrophysics Data System (ADS)
Karthiga, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
In the existing system, trust between cloud providers and consumers is inadequate to establish the service level agreement, even though consumer feedback is a good basis for assessing the overall reliability of cloud services. Investigators have recognized that trust can be managed and security can be provided based on feedback collected from participants. In this work, a face recognition system helps to identify the user effectively: an image comparison algorithm compares the user's face, captured at registration time and stored in the database, with the sample image already stored in the database. If the two images match, the user is identified. When confidential data are outsourced to the cloud, data holders become worried about the confidentiality of their data in the cloud. Encrypting the data before outsourcing has been regarded as an important means of preserving user data privacy against the cloud server, so we use the AES algorithm to keep the data secure. Symmetric-key algorithms use a shared key, so keeping data secret requires keeping this key secret; only a user holding the secret key can decrypt the data.
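As a hedged illustration of the encrypt-before-outsourcing step (not the paper's implementation), the sketch below uses AES-GCM from the Python cryptography package; key management and the face-matching step are out of scope here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, associated_data: bytes = b"record") -> bytes:
    """Encrypt data before uploading it to the cloud; returns nonce || ciphertext."""
    nonce = os.urandom(12)                            # 96-bit nonce, unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext

def decrypt_record(key: bytes, blob: bytes, associated_data: bytes = b"record") -> bytes:
    """Only a holder of the shared secret key can recover the plaintext."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

key = AESGCM.generate_key(bit_length=256)             # shared symmetric key, kept off the cloud
blob = encrypt_record(key, b"confidential user data")
assert decrypt_record(key, blob) == b"confidential user data"
```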
Improved Modeling Tools Development for High Penetration Solar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washom, Byron; Meagher, Kevin
2014-12-11
One of the significant objectives of the High Penetration Solar research is to help the DOE understand, anticipate, and minimize grid operation impacts as more solar resources are added to the electric power system. For Task 2.2, an effective, reliable approach to predicting solar energy availability for energy generation forecasts using the University of California, San Diego (UCSD) Sky Imager technology has been demonstrated. Granular cloud and ramp forecasts for the next 5 to 20 minutes over an area of 10 square miles were developed. Sky images taken every 30 seconds are processed to determine cloud locations and cloud motion vectors, yielding future cloud shadow locations relative to distributed generation or utility solar power plants in the area. The performance of the method depends on cloud characteristics. On days with more advective cloud conditions, the developed method outperforms persistence forecasts by up to 30% (based on mean absolute error). On days with dynamic conditions, the method performs worse than persistence. Sky Imagers hold promise for ramp forecasting and ramp mitigation in conjunction with inverter controls and energy storage. The pre-commercial Sky Imager solar forecasting algorithm was documented with licensing information and was a SunShot website highlight.
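The core of such a forecast is estimating a cloud motion vector between successive sky images and advecting the current cloud map forward; the sketch below uses OpenCV phase correlation for the bulk displacement and NumPy to shift a cloud mask (the single-vector assumption and the time step are simplifications of the UCSD approach).

```python
import cv2
import numpy as np

def cloud_motion_vector(img_prev, img_next):
    """Estimate a single bulk motion vector (dx, dy in pixels/frame) between two sky images."""
    shift, _ = cv2.phaseCorrelate(np.float32(img_prev), np.float32(img_next))
    return np.array(shift)

def advect_cloud_mask(cloud_mask, motion, steps):
    """Shift the binary cloud mask by motion * steps to forecast future cloud shading."""
    dx, dy = np.round(motion * steps).astype(int)
    return np.roll(np.roll(cloud_mask, dy, axis=0), dx, axis=1)

# Forecast 10 frames ahead (e.g., 5 minutes for 30-second imagery).
# img_prev, img_next: grayscale sky images; cloud_mask: boolean cloud map from img_next.
# motion = cloud_motion_vector(img_prev, img_next)
# forecast_mask = advect_cloud_mask(cloud_mask, motion, steps=10)
```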
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to illumination changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. It can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
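The projection step (point cloud to a view-dependent depth image) can be sketched with a pinhole camera model and a simple z-buffer; the intrinsics, point format and near-plane cutoff below are assumptions for illustration, not the paper's weighted-window implementation.

```python
import numpy as np

def project_to_depth_image(points_cam, fx, fy, cx, cy, width, height):
    """Render a z-buffered depth image from 3D points already in the camera frame.

    points_cam : (N, 3) array of points (x right, y down, z forward), in metres.
    Returns a (height, width) depth image with np.inf where no point projects.
    """
    depth = np.full((height, width), np.inf)
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    valid = z > 0.1                                       # keep points in front of the camera
    u = np.round(fx * x[valid] / z[valid] + cx).astype(int)
    v = np.round(fy * y[valid] / z[valid] + cy).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
        if zi < depth[vi, ui]:                            # z-buffer: keep the nearest point
            depth[vi, ui] = zi
    return depth
```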
Titan Moving Mid-Latitude Clouds
2011-03-17
This image shows clouds in the mid-southern latitudes of Saturn's largest moon, Titan, one of a series of images captured by NASA's Cassini spacecraft a few months after fall began in the southern hemisphere.
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
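To show the idea of controlled lossy compression with a user-specified maximum error (the trade-off LERC is built around, sketched here generically rather than as the LERC format itself), the snippet below quantizes an imagery or elevation block so that every value can be reconstructed to within max_error.

```python
import numpy as np

def encode_block(values, max_error):
    """Quantize a block so reconstruction error never exceeds max_error.

    Storing small integer indices (far fewer bits than the original floats) is what
    buys the compression; a real codec would also bit-pack and entropy-code them.
    """
    step = 2.0 * max_error                       # quantization step guaranteeing the bound
    offset = values.min()
    indices = np.round((values - offset) / step).astype(np.uint32)
    return offset, step, indices

def decode_block(offset, step, indices):
    return offset + indices.astype(np.float64) * step

block = np.random.default_rng(0).uniform(100.0, 150.0, size=(256, 256))
offset, step, idx = encode_block(block, max_error=0.01)
assert np.max(np.abs(decode_block(offset, step, idx) - block)) <= 0.01
print("distinct levels:", idx.max() + 1)         # fewer levels -> better compressibility
```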
Accuracy assessment of building point clouds automatically generated from iphone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
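The reported point-to-point comparison can be reproduced in outline with a k-d tree nearest-neighbour query; the outlier threshold below is an assumption for illustration, not the paper's criterion.

```python
import numpy as np
from scipy.spatial import cKDTree

def compare_point_clouds(cloud_test, cloud_reference, outlier_threshold=0.5):
    """Distance from each test point to its nearest reference (e.g., TLS) point.

    cloud_test, cloud_reference : (N, 3) and (M, 3) arrays in metres.
    Returns the mean distance of inliers and the fraction of points flagged as outliers.
    """
    tree = cKDTree(cloud_reference)
    distances, _ = tree.query(cloud_test, k=1)
    outliers = distances > outlier_threshold        # assumed cutoff for gross mismatches
    mean_distance = distances[~outliers].mean()
    return mean_distance, outliers.mean()

# Example with synthetic data:
rng = np.random.default_rng(0)
ref = rng.uniform(0, 10, size=(5000, 3))
test = ref[:2000] + rng.normal(scale=0.05, size=(2000, 3))
print(compare_point_clouds(test, ref))
```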
Overview of CERES Cloud Properties Derived From VIRS AND MODIS DATA
NASA Technical Reports Server (NTRS)
Minis, Patrick; Geier, Erika; Wielicki, Bruce A.; Sun-Mack, Sunny; Chen, Yan; Trepte, Qing Z.; Dong, Xiquan; Doelling, David R.; Ayers, J. Kirk; Khaiyer, Mandana M.
2006-01-01
Simultaneous measurement of radiation and cloud fields on a global basis is recognized as a key component in understanding and modeling the interaction between clouds and radiation at the top of the atmosphere, at the surface, and within the atmosphere. The NASA Clouds and Earth's Radiant Energy System (CERES) Project (Wielicki et al., 1998) began addressing this issue in 1998 with its first broadband shortwave and longwave scanner on the Tropical Rainfall Measuring Mission (TRMM). This was followed by the launch of two CERES scanners each on Terra and Aqua during late 1999 and early 2002, respectively. When combined, these satellites should provide the most comprehensive global characterization of clouds and radiation to date. Unfortunately, the TRMM scanner failed during late 1998. The Terra and Aqua scanners continue to operate, however, providing measurements at a minimum of 4 local times each day. CERES was designed to scan in tandem with high-resolution imagers so that the cloud conditions could be evaluated for every CERES measurement. The cloud properties are essential for converting CERES radiances to the shortwave albedo and longwave fluxes needed to define the Earth radiation budget (ERB). They are also needed to unravel the impact of clouds on the ERB. The 5-channel, 2-km Visible Infrared Scanner (VIRS) on TRMM and the 36-channel, 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua are analyzed to define the cloud properties for each CERES footprint. To minimize inter-satellite differences and aid the development of useful climate-scale measurements, it was necessary to ensure that each satellite imager is calibrated in a fashion consistent with its counterpart on the other CERES satellites (Minnis et al., 2006) and that the algorithms are as similar as possible for all of the imagers. Thus, a set of cloud detection and retrieval algorithms was developed that could be applied to all three imagers, utilizing as few channels as possible while producing stable and accurate cloud properties. This paper discusses the algorithms and the results of applying those techniques to more than 5 years of Terra MODIS, 3 years of Aqua MODIS, and 4 years of TRMM VIRS data.
NASA Astrophysics Data System (ADS)
Tan, Xianyu; Showman, Adam
2016-10-01
Observational evidence has suggested active meteorology in the atmospheres of brown dwarfs (BDs) and directly imaged extrasolar giant planets (EGPs). In particular, a number of surveys for brown dwarfs have shown that near-IR brightness variability is common for L and T dwarfs. Directly imaged EGPs share similar observations, and can be viewed as low-gravity versions of BDs. Clouds are believed to play the major role in shaping the thermal structure, dynamics and near-IR flux of these atmospheres. So far, only a few studies have been devoted to atmospheric circulation and the implications for observations of BDs and directly imaged EGPs, and no global model yet includes self-consistent, active cloud formation. Here we present preliminary results from the first global circulation model applied to BDs and directly imaged EGPs that can properly treat absorption and scattering of radiation by cloud particles. Our results suggest that horizontal temperature differences on isobars can reach up to a few hundred Kelvin, with the typical horizontal length scale of the temperature and cloud patterns much smaller than the radius of the object. The combination of temperature anomalies and cloud patterns can result in moderate disk-integrated near-IR flux variability. Wind speeds can reach several hundred meters per second in cloud-forming layers. Unlike Jupiter and Saturn, we do not observe stable zonal jets or banded patterns in our simulations. Instead, our simulated atmospheres are typically turbulent and dominated by transient vortices. The circulation is sensitive to the parameterized cloud microphysics. Under some parameter combinations, global-scale atmospheric waves can be triggered and maintained. These waves induce global-scale temperature anomalies and cloud patterns, causing large (up to several percent) disk-integrated near-IR flux variability. Our results demonstrate that the commonly observed near-IR brightness variability of BDs and directly imaged EGPs can be explained by the typical cloud-induced turbulent circulation, and in particular, that the large flux variability of some objects can be attributed to the global-scale patterns of temperature anomaly and cloud formation caused by atmospheric waves.
Jupiter's Northern Hemisphere in Violet Light (Time Set 3)
NASA Technical Reports Server (NTRS)
1997-01-01
Mosaic of Jupiter's northern hemisphere between 10 and 50 degrees latitude. Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the color and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including large white ovals, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two interacting vortices in the upper half of the mosaic is about 3500 kilometers. Light at 410 nanometers is affected by the sizes and compositions of cloud particles, as well as the trace chemicals that give Jupiter's clouds their colors. This mosaic shows the features of Jupiter's main visible cloud deck and the hazy cloud layer above it.
North is at the top. The images are projected on a sphere, with features being foreshortened towards the north. The planetary limb runs along the right edge of the mosaic. Cloud patterns appear foreshortened as they approach the limb. The smallest resolved features are tens of kilometers in size. These images were taken on April 3, 1997, at a range of 1.4 million kilometers by the Solid State Imaging system (CCD) on NASA's Galileo spacecraft.
The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
Astronomy In The Cloud: Using Mapreduce For Image Coaddition
NASA Astrophysics Data System (ADS)
Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-01-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these datastreams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
Astronomy in the Cloud: Using MapReduce for Image Co-Addition
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-03-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
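The map/reduce decomposition of co-addition described above can be sketched in plain Python: the mapper emits (sky-tile, pixel-patch) pairs for each registered exposure and the reducer performs the per-pixel summation; the tiling scheme and data structures are illustrative stand-ins for the Hadoop pipeline, not the authors' code.

```python
from collections import defaultdict
import numpy as np

TILE = 64  # the sky grid is divided into TILE x TILE pixel tiles keyed by (tile_row, tile_col)

def map_image(image, origin):
    """Mapper: cut one registered exposure into sky tiles.

    Assumes the image has already been resampled onto the common sky grid and that both
    the origin and the image dimensions are multiples of TILE (a simplification).
    Yields ((tile_row, tile_col), tile_array) pairs.
    """
    r0, c0 = origin
    for r in range(0, image.shape[0], TILE):
        for c in range(0, image.shape[1], TILE):
            key = ((r0 + r) // TILE, (c0 + c) // TILE)
            yield key, image[r:r + TILE, c:c + TILE]

def reduce_tiles(pairs):
    """Reducer: per-pixel sum and exposure count per sky tile, i.e., the co-added stack."""
    sums = defaultdict(lambda: np.zeros((TILE, TILE)))
    counts = defaultdict(lambda: np.zeros((TILE, TILE)))
    for key, patch in pairs:
        sums[key] += patch
        counts[key] += 1
    return {key: sums[key] / np.maximum(counts[key], 1) for key in sums}

# Co-add two overlapping synthetic exposures:
rng = np.random.default_rng(0)
img_a, img_b = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
coadd = reduce_tiles(list(map_image(img_a, (0, 0))) + list(map_image(img_b, (0, 64))))
```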
Galileo multispectral imaging of Earth.
Geissler, P; Thompson, W R; Greenberg, R; Moersch, J; McEwen, A; Sagan, C
1995-08-25
Nearly 6000 multispectral images of Earth were acquired by the Galileo spacecraft during its two flybys. The Galileo images offer a unique perspective on our home planet through the spectral capability made possible by four narrowband near-infrared filters, intended for observations of methane in Jupiter's atmosphere, which are not incorporated in any of the currently operating Earth orbital remote sensing systems. Spectral variations due to mineralogy, vegetative cover, and condensed water are effectively mapped by the visible and near-infrared multispectral imagery, showing a wide variety of biological, meteorological, and geological phenomena. Global tectonic and volcanic processes are clearly illustrated by these images, providing a useful basis for comparative planetary geology. Differences between plant species are detected through the narrowband IR filters on Galileo, allowing regional measurements of variation in the "red edge" of chlorophyll and the depth of the 1-micrometer water band, which is diagnostic of leaf moisture content. Although evidence of life is widespread in the Galileo data set, only a single image (at approximately 2 km/pixel) shows geometrization plausibly attributable to our technical civilization. Water vapor can be uniquely imaged in the Galileo 0.73-micrometer band, permitting spectral discrimination of moist and dry clouds with otherwise similar albedo. Surface snow and ice can be readily distinguished from cloud cover by narrowband imaging within the sensitivity range of Galileo's silicon CCD camera. Ice grain size variations can be mapped using the weak H2O absorption at 1 micrometer, a technique which may find important applications in the exploration of the moons of Jupiter. The Galileo images have the potential to make unique contributions to Earth science in the areas of geological, meteorological and biological remote sensing, due to the inclusion of previously untried narrowband IR filters. The vast scale and near global coverage of the Galileo data set complements the higher-resolution data from Earth orbiting systems and may provide a valuable reference point for future studies of global change.
2004-03-05
NASA's Cassini narrow angle camera took this image of Saturn on Feb. 16, 2004, from a distance of 66.1 million kilometers (41.1 million miles) in a special filter which reveals clouds and haze high in the atmosphere. The image scale is 397 kilometers (247 miles) per pixel. The MT2 spectral filter samples a near-infrared region of the electromagnetic spectrum where methane gas absorbs light at a wavelength of 727 nanometers. In the image, methane gas is uniformly mixed with hydrogen, the main gas in Saturn's atmosphere. Dark locales are places of strong methane absorption, relatively free of high clouds; the bright areas are places with high, thick clouds which shield the methane below. Image details reveal a high, thick equatorial cloud and a relatively deep or thin haze encircling the pole, as well as several distinct latitude bands with different cloud height attributes. It also shows a high atmospheric disturbance, just south of the equator, which has persisted throughout the 1990s in images returned by NASA's Hubble Space Telescope. Four of Saturn's moons are visible (clockwise from above right): Enceladus (499 kilometers, or 310 miles across); Mimas (396 kilometers, or 245 miles across); Tethys (1,060 kilometers, or 659 miles across); and Rhea (1,528 kilometers, or 949 miles across). The imaging team enhanced the brightness of Mimas and Enceladus by a factor of three. http://photojournal.jpl.nasa.gov/catalog/PIA05381
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken by the application user in front of the urban structure of interest, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image; the registration is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
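The image-to-range-image registration step can be illustrated with standard local features; the sketch below uses ORB keypoints and brute-force matching from OpenCV to estimate a homography between the photograph and the rendered range image (graph matching, as used in the paper, is replaced here by a simple ratio test).

```python
import cv2
import numpy as np

def register_photo_to_range_image(photo_gray, range_gray):
    """Estimate a homography mapping an 8-bit grayscale photo onto the rendered range image."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(photo_gray, None)
    kp2, des2 = orb.detectAndCompute(range_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:  # Lowe ratio test
            good.append(pair[0])

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    homography, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography, int(inliers.sum())
```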
Infrared cloud imaging in support of Earth-space optical communication.
Nugent, Paul W; Shaw, Joseph A; Piazzolla, Sabino
2009-05-11
The increasing need for high data return from near-Earth and deep-space missions is driving a demand for the establishment of Earth-space optical communication links. These links will require a nearly obstruction-free path to the communication platform, so there is a need to measure spatial and temporal statistics of clouds at potential ground-station sites. A technique is described that uses a ground-based thermal infrared imager to provide continuous day-night cloud detection and classification according to the cloud optical depth and potential communication channel attenuation. The benefit of retrieving cloud optical depth and corresponding attenuation is illustrated through measurements that identify cloudy times when optical communication may still be possible through thin clouds.
Space Shuttle Video Images: An Example of Warm Cloud Lightning
NASA Technical Reports Server (NTRS)
Vaughan, Otha H., Jr.; Boeck, William L.
1998-01-01
Warm cloud lightning has been reported in several tropical locations. We have been using the intensified monochrome TV cameras at night during a number of shuttle flights to observe large active thunderstorms and their associated lightning. During a nighttime orbital pass of the STS-70 mission on 17 July 1995 at 07:57:42 GMT, the controllers obtained video imagery of a small cloud that was producing lightning. Data from a GOES infrared image establishes that the cloud top had a temperature of about 271 degrees Kelvin ( -2 degrees Celsius). Since this cloud was electrified to the extent that a lightning discharge did occur, it may be another case of lightning in a cloud that presents little if any evidence of frozen or melting precipitation.
NASA Astrophysics Data System (ADS)
Khlopenkov, K. V.; Duda, D. P.; Thieman, M. M.; Sun-Mack, S.; Su, W.; Minnis, P.; Bedka, K. M.
2017-12-01
The Deep Space Climate Observatory (DSCOVR) is designed to study the daytime Earth radiation budget by means of the onboard Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). The EPIC imager observes in several shortwave bands (317-780 nm), while NISTAR measures the top-of-atmosphere (TOA) whole-disk radiance in shortwave and total broadband windows. Calculation of albedo and outgoing longwave flux requires high-resolution scene identification, such as the radiance observations and cloud property retrievals from low Earth orbit and geostationary satellite imagers. These properties have to be co-located with EPIC imager pixels to provide scene identification and to select anisotropic directional models, which are then used to adjust the NISTAR-measured radiance and subsequently obtain the global daytime shortwave and longwave fluxes. This work presents an algorithm for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. The highest quality observation is selected by means of an aggregated rating which incorporates several factors such as the nearest time relative to the EPIC observation, the lowest viewing zenith angle, and others. This process provides a smoother transition and avoids abrupt changes in the merged composite data. Higher spatial accuracy in the composite product is achieved by using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. Within every EPIC footprint, the PSF-weighted average radiances and cloud properties are computed for each cloud phase and then stored within five data subsets (clear-sky, water cloud, ice cloud, total cloud, and no retrieval). Overall, the composite product has been generated for every EPIC observation from June 2015 to December 2016, typically 300-500 composites per month, which makes it useful for many climate applications.
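The PSF-weighted averaging inside each EPIC footprint can be sketched as follows; the Gaussian PSF shape and the footprint radius are placeholders for the actual EPIC point spread function, and the inputs are assumed to be arrays of already co-located composite pixels.

```python
import numpy as np

def psf_weighted_average(values, dx_km, dy_km, fwhm_km=10.0, max_radius_km=15.0):
    """Average composite-pixel values within one EPIC footprint, weighted by an assumed PSF.

    values        : 1-D array of a cloud property (or radiance) for nearby composite pixels.
    dx_km, dy_km  : arrays of offsets of those pixels from the EPIC footprint centre.
    Returns the PSF-weighted mean, ignoring NaN (no-retrieval) pixels.
    """
    values = np.asarray(values, dtype=float)
    sigma = fwhm_km / 2.355                               # convert FWHM to Gaussian sigma
    r2 = np.asarray(dx_km, float) ** 2 + np.asarray(dy_km, float) ** 2
    weights = np.exp(-0.5 * r2 / sigma**2)
    weights[r2 > max_radius_km**2] = 0.0                  # truncate the assumed PSF
    valid = ~np.isnan(values) & (weights > 0)
    if not valid.any():
        return np.nan
    return np.sum(weights[valid] * values[valid]) / np.sum(weights[valid])
```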
Tropical Depression 6 Florence in the Atlantic
2006-09-03
This infrared image shows Tropical Depression 6 (Florence) in the Atlantic, from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite in September 2006. Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument receives the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). http://photojournal.jpl.nasa.gov/catalog/PIA00512
1979-07-06
Range : 3.2 million km This image returned by Voyager 2 shows one of the long dark clouds observed in the North Equatorial Belt of Jupiter. A high, white cloud is seen moving over the darker cloud, providing an indication of the structure of the cloud layers. Thin white clouds are also seen within the dark cloud. At right, blue areas, free of high clouds, are seen.
Improved sliced velocity map imaging apparatus optimized for H photofragments.
Ryazanov, Mikhail; Reisler, Hanna
2013-04-14
Time-sliced velocity map imaging (SVMI), a high-resolution method for measuring kinetic energy distributions of products in scattering and photodissociation reactions, is challenging to implement for atomic hydrogen products. We describe an ion optics design aimed at achieving SVMI of H fragments in a broad range of kinetic energies (KE), from a fraction of an electronvolt to a few electronvolts. In order to enable consistently thin slicing for any imaged KE range, an additional electrostatic lens is introduced in the drift region for radial magnification control without affecting temporal stretching of the ion cloud. Time slices of ∼5 ns out of a cloud stretched to ⩾50 ns are used. An accelerator region with variable dimensions (using multiple electrodes) is employed for better optimization of radial and temporal space focusing characteristics at each magnification level. The implemented system was successfully tested by recording images of H fragments from the photodissociation of HBr, H2S, and the CH2OH radical, with kinetic energies ranging from <0.4 eV to >3 eV. It demonstrated KE resolution ≲1%-2%, similar to that obtained in traditional velocity map imaging followed by reconstruction, and to KE resolution achieved previously in SVMI of heavier products. We expect it to perform just as well up to at least 6 eV of kinetic energy. The tests showed that numerical simulations of the electric fields and ion trajectories in the system, used for optimization of the design and operating parameters, provide an accurate and reliable description of all aspects of system performance. This offers the advantage of selecting the best operating conditions in each measurement without the need for additional calibration experiments.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... related to the development and application of cloud computing for people with disabilities. Cloud computing offers the potential to provide accommodations that enable people with disabilities to access...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
... activities related to the development and application of cloud computing for people with disabilities. Cloud computing offers the potential to provide accommodations that enable people with disabilities to access...
Space Radar Image of Maui, Hawaii
1999-04-15
This spaceborne radar image shows the Valley Island of Maui, Hawaii. The cloud-penetrating capabilities of radar provide a rare view of many parts of the island, since the higher elevations are frequently shrouded in clouds.
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander; Lau, William K. M. (Technical Monitor)
2001-01-01
This paper examines three-dimensional (3D) radiative effects, which arise from horizontal radiative interactions between areas that have different cloud properties. Earlier studies have argued that these effects can cause significant uncertainties in current satellite retrievals of cloud properties, because the retrievals rely on one-dimensional (1D) theory and do not consider the effects of horizontal changes in cloud properties. This study addresses two questions: which retrieved cloud properties are influenced by 3D radiative effects, and where do 3D effects tend to occur? The influence of 3D effects is detected from the way side illumination and shadowing make clouds appear asymmetric: areas appear brighter if the cloud top surface is tilted toward, rather than away from, the Sun. The analysis of 30 images by the Moderate Resolution Imaging Spectroradiometer (MODIS) reveals that retrievals of cloud optical thickness and cloud water content are most influenced by 3D effects, whereas retrievals of cloud particle size are much less affected. The results also indicate that while 3D effects are strongest at cloud edges, cloud top variability in cloud interiors, even in overcast regions, also produces considerable 3D effects. Finally, significant 3D effects are found in a wide variety of situations, ranging from thin clouds to thick ones and from low clouds to high ones.
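The asymmetry signal described above can be quantified from a retrieved cloud-top height field and the co-registered reflectance: compare pixels whose cloud-top slope faces the Sun with those facing away. The sketch below is only a schematic of that comparison; the slope calculation and the solar-azimuth convention are assumptions, not the paper's method.

```python
import numpy as np

def illumination_asymmetry(reflectance, cloud_top_height, pixel_size_km, solar_azimuth_deg):
    """Mean reflectance of sun-facing minus sun-averted cloud-top slopes.

    A positive value indicates the brightening/shadowing asymmetry used to flag 3D effects.
    """
    # Cloud-top slope components (height change per km along rows and columns).
    dz_dy, dz_dx = np.gradient(cloud_top_height, pixel_size_km)
    # Unit vector pointing toward the Sun in the horizontal plane (azimuth from north).
    az = np.deg2rad(solar_azimuth_deg)
    sun_x, sun_y = np.sin(az), np.cos(az)
    # Slope component along the solar direction: positive means tilted toward the Sun.
    toward_sun = dz_dx * sun_x + dz_dy * sun_y
    facing = reflectance[toward_sun > 0]
    averted = reflectance[toward_sun < 0]
    return facing.mean() - averted.mean()
```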
Identification Code of Interstellar Cloud within IRAF
NASA Astrophysics Data System (ADS)
Lee, Youngung; Jung, Jae Hoon; Kim, Hyun-Goo
1997-12-01
We present a code which identifies individual clouds in crowded regions using the IMFORT interface within the Image Reduction and Analysis Facility (IRAF). We define a cloud as an object composed of all pixels in longitude, latitude, and velocity that are simply connected and that lie above some threshold temperature. The code searches all pixels of the data cube in an efficient way to isolate individual clouds. Along with the identification of clouds, it is designed to estimate their mean longitudes, latitudes, and velocities. In addition, a function for generating individual images (or cube data) of the identified clouds is included. We also present individual clouds identified using a 12CO survey data cube of the Galactic Anticenter Region (Lee et al. 1997) as a test example. We used a threshold temperature of 5 times the rms noise level of the data. With a higher threshold temperature, we isolated subclouds of a huge cloud identified originally. As the most important parameter for identifying clouds is the threshold value, its effect on the cloud size and velocity dispersion is discussed rigorously.
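The cloud definition above (simply connected pixels above a threshold in the longitude-latitude-velocity cube) maps directly onto 3-D connected-component labeling; the sketch below uses SciPy instead of IMFORT/IRAF, so it illustrates the idea rather than reproducing the published code.

```python
import numpy as np
from scipy import ndimage

def identify_clouds(cube, rms_noise, nsigma=5.0):
    """Label connected regions of a (velocity, latitude, longitude) brightness cube.

    Returns the label cube, the number of clouds, and each cloud's intensity-weighted
    mean position along the three axes.
    """
    mask = cube > nsigma * rms_noise                 # threshold at n sigma above the noise
    labels, n_clouds = ndimage.label(mask)           # 3-D simply connected components
    centroids = ndimage.center_of_mass(cube, labels, index=range(1, n_clouds + 1))
    return labels, n_clouds, centroids

def cloud_cutout(cube, labels, cloud_id):
    """Return a sub-cube containing only the selected cloud (other voxels set to zero)."""
    cutout = np.where(labels == cloud_id, cube, 0.0)
    slices = ndimage.find_objects(labels == cloud_id)[0]
    return cutout[slices]
```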
The registration of non-cooperative moving targets laser point cloud in different view point
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
Multi-view point cloud registration of non-cooperative moving targets is a key technology for 3D reconstruction in laser three-dimensional imaging. The main difficulty is that point cloud density varies greatly and noise levels differ under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud. A region-segmentation-based registration algorithm then extracts the geometric structure from point-to-point geometric similarity: the point cloud is divided into regions by spectral clustering, feature descriptors are created for each region, the most similar regions are searched for in the most similar viewpoint cloud, and the pair of point clouds is aligned by aligning the minimum bounding boxes of those regions. These steps are repeated until all point clouds are registered. Experiments show that the method is insensitive to point cloud density and performs well under the noise typical of laser three-dimensional imaging.
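As a rough illustration of the final alignment step, the sketch below approximates minimum-bounding-box alignment by aligning centroids and principal axes (a PCA frame). This is a stand-in for the paper's method, not its implementation, and it assumes the clouds are asymmetric enough for the axis signs to be resolved.

```python
import numpy as np

def principal_frame(points):
    """Centroid and principal axes (columns) of an N x 3 point set."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T                                   # columns are principal directions
    # Resolve the per-axis sign ambiguity using the skewness of the projected
    # coordinates (works only if the cloud is not perfectly symmetric).
    signs = np.sign(np.sum((centered @ axes) ** 3, axis=0))
    signs[signs == 0] = 1.0
    axes *= signs
    if np.linalg.det(axes) < 0:                   # keep a right-handed frame
        axes[:, -1] *= -1
    return centroid, axes

def align_by_frames(source, target):
    """Rigidly map `source` onto `target` by aligning their principal frames."""
    c_s, a_s = principal_frame(source)
    c_t, a_t = principal_frame(target)
    rotation = a_t @ a_s.T
    translation = c_t - rotation @ c_s
    return source @ rotation.T + translation

# Example: recover a known rigid transform of a skewed synthetic cloud.
rng = np.random.default_rng(1)
target = rng.exponential(1.0, size=(1000, 3)) * np.array([5.0, 2.0, 1.0])
angle = np.deg2rad(30.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
source = target @ rot.T + np.array([1.0, -2.0, 0.5])
print(np.abs(align_by_frames(source, target) - target).max())   # should be tiny
```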
Cloud-based image sharing network for collaborative imaging diagnosis and consultation
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo
2018-03-01
In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, enabling radiologists, specialists and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices, together with secured messaging and data communication. The network has three kinds of components: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. The network has been deployed on Alibaba's public cloud platform since March 2017 and is used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Suomi satellite brings to light a unique frontier of nighttime environmental sensing capabilities
Miller, Steven D.; Mills, Stephen P.; Elvidge, Christopher D.; Lindsey, Daniel T.; Lee, Thomas F.; Hawkins, Jeffrey D.
2012-01-01
Most environmental satellite radiometers use solar reflectance information when it is available during the day but must resort at night to emission signals from infrared bands, which offer poor sensitivity to low-level clouds and surface features. A few sensors can take advantage of moonlight, but the inconsistent availability of the lunar source limits measurement utility. Here we show that the Day/Night Band (DNB) low-light visible sensor on the recently launched Suomi National Polar-orbiting Partnership (NPP) satellite has the unique ability to image cloud and surface features by way of reflected airglow, starlight, and zodiacal light illumination. Examples collected during new moon reveal not only meteorological and surface features, but also the direct emission of airglow structures in the mesosphere, including expansive regions of diffuse glow and wave patterns forced by tropospheric convection. The ability to leverage diffuse illumination sources for nocturnal environmental sensing applications extends the advantages of visible-light information to moonless nights. PMID:22984179
2017-12-08
The late winter sun shone brightly on a stunning scene of clouds and ice in the Davis Strait in late February, 2013. The Moderate Resolution Imaging Spectroradiometer aboard NASA’s Aqua satellite captured this true-color image on February 22 at 1625 UTC. The Davis Strait connects the Labrador Sea (part of the Atlantic Ocean) in the south with Baffin Bay to the north, and separates Canada, to the west, from Greenland to the east. Strong, steady winds frequently blow southward from the colder Baffin Bay to the warmer waters of the Labrador Sea. Over ice, the air is dry and no clouds form. However, as the Arctic air moves over the warmer, open water the rising moist air and the temperature differential give rise to lines of clouds. In this image, the clouds are aligned in a beautiful, parallel pattern. Known as “cloud streets”, this pattern is formed in a low-level wind, with the clouds aligning in the direction of the wind. Credit: NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team
NASA Astrophysics Data System (ADS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-04-01
This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC Sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal (< 2 %) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 %, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
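A single-channel COT retrieval of the kind described here amounts to inverting the observed visible reflectance against a lookup table computed for a fixed effective radius. The sketch below is schematic: the reflectance values and the use of np.interp are illustrative, not the MOD06 implementation.

```python
import numpy as np

# Hypothetical lookup table: reflectance as a function of COT for one fixed
# effective radius and one fixed Sun-view geometry (normally pre-computed with
# a radiative transfer model).
cot_grid = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
refl_liquid = np.array([0.08, 0.12, 0.19, 0.29, 0.42, 0.55, 0.66, 0.72, 0.75])

def retrieve_cot(observed_reflectance, refl_lut, cot_lut):
    """Invert reflectance to COT by monotonic interpolation of the LUT."""
    # np.interp requires increasing x; reflectance grows monotonically with COT here.
    return np.interp(observed_reflectance, refl_lut, cot_lut,
                     left=np.nan, right=np.nan)   # NaN outside the LUT range

print(retrieve_cot(0.35, refl_liquid, cot_grid))  # roughly between 4 and 8
```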
NASA Astrophysics Data System (ADS)
Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev
2016-03-01
Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
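The comparison against persistence can be summarized with a skill score. The sketch below uses the common definition 1 - RMSE_forecast / RMSE_persistence, which may differ in detail from the metric used in the study; the synthetic irradiance series is a placeholder.

```python
import numpy as np

def rmse(pred, obs):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

def forecast_skill(forecast, observation, horizon_steps):
    """Skill > 0 means the forecast beats persistence at the given horizon."""
    obs = np.asarray(observation)
    fcst = np.asarray(forecast)
    # Persistence: the last observed value is carried forward `horizon_steps`.
    persistence = obs[:-horizon_steps]
    target = obs[horizon_steps:]
    return 1.0 - rmse(fcst[horizon_steps:], target) / rmse(persistence, target)

# Example with synthetic GHI (W/m^2) at 15 s steps and a smoothed "forecast".
t = np.arange(0, 600)
ghi = 600 + 150 * np.sin(t / 40.0) + 30 * np.random.randn(t.size)
fcst = np.convolve(ghi, np.ones(5) / 5, mode="same")
print(forecast_skill(fcst, ghi, horizon_steps=20))
```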
NASA Astrophysics Data System (ADS)
Miller, S. D.; Seaman, C.; Combs, C.; Solbrig, J. E.; Straka, W. C.; Walther, A.; NOH, Y. J.; Heidinger, A.
2016-12-01
Since its launch in October 2011, the Visible/Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) on the Suomi National Polar-orbiting Partnership (S-NPP) satellite has delivered above and beyond expectations, revolutionizing our ability to observe and characterize the nocturnal environment. Taking advantage of natural and artificial (man-made) light sources, the DNB offers unique information content ranging from the surface to the upper atmosphere. Notable developments include the quantitative use of moonlight for cloud property retrievals and the discovery of nightglow sensitivity revealing the signatures of gravity waves. The DNB represents a remarkable advance over the heritage low-light visible sensing of the Operational Linescan System (OLS), providing spatial and radiometric resolution unprecedented for a space platform. Soon, we will have yet another dimension of resolution to consider—temporal. In early 2017, NOAA's Joint Polar Satellite System-1 (J1) will join S-NPP in the early-afternoon (1330 local time, ascending node) sun-synchronous orbital plane, displaced ½ orbit (~50 min) from S-NPP. Having two DNB sensors will offer an expanded ability (at lower latitudes) to examine the temporal properties of various light sources, track the motion of ships, low-level clouds and dust storms, fire line evolution, cloud optical properties, and even the dynamics of mesospheric gravity wave structures such as thunderstorm-induced concentric gravity waves and mesospheric bores. This presentation will provide an update on the science and application-oriented research involving the S-NPP/DNB, examples of key capabilities, first results of lunar irradiance model validation, and a look ahead toward the new research opportunities to be afforded by tandem S-NPP/J1 observations. The AGU is well-positioned for anticipating these capabilities "on the eve" of the J1 launch.
Discrete cloud structure on Neptune
NASA Technical Reports Server (NTRS)
Hammel, H. B.
1989-01-01
Recent CCD imaging data for the discrete cloud structure of Neptune shows that while cloud features at CH4-band wavelengths are manifest in the southern hemisphere, they have not been encountered in the northern hemisphere since 1986. A literature search has shown the reflected CH4-band light from the planet to have come from a single discrete feature at least twice in the last 10 years. Disk-integrated photometry derived from the imaging has demonstrated that a bright cloud feature was responsible for the observed 8900 A diurnal variation in 1986 and 1987.
NASA Astrophysics Data System (ADS)
Tosca, M. G.; Diner, D. J.; Garay, M. J.; Kalashnikova, O.
2013-12-01
Anthropogenic fires in Southeast Asia and Central America emit smoke that affects cloud dynamics, meteorology, and climate. We measured the cloud response to direct and indirect forcing from biomass burning aerosols using aerosol retrievals from the Multi-angle Imaging SpectroRadiometer (MISR) and non-synchronous cloud retrievals from the MODerate resolution Imaging Spectroradiometer (MODIS) from collocated morning and afternoon overpasses. Level 2 data from thirty-one individual scenes acquired between 2006 and 2010 were used to quantify changes in cloud fraction, cloud droplet size, cloud optical depth and cloud top temperature from morning (10:30am local time) to afternoon (1:30pm local time) in the presence of varying aerosol burdens. We accounted for large-scale meteorological differences between scenes by normalizing observed changes to the mean difference per individual scene. Elevated AODs reduced cloud fraction and cloud droplet size and increased cloud optical depths in both Southeast Asia and Central America. In mostly cloudy regions, aerosols significantly reduced cloud fraction and cloud droplet sizes, but in clear skies, cloud fraction, cloud optical thickness and cloud droplet sizes increased. In clouds with vertical development, aerosols reduced cloud fraction via semi-direct effects but spurred cloud growth via indirect effects. These results imply a positive feedback loop between anthropogenic burning and cloudiness in both Central America and Southeast Asia, and are consistent with previous studies linking smoke aerosols to both cloud reduction and convective invigoration.
NASA Technical Reports Server (NTRS)
2007-01-01
These two Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) images were acquired over the northern plains of Mars near one of the possible landing sites for NASA's Phoenix mission, set to launch in August 2007. The lower right image was acquired first, on Nov. 29, 2006, at 0720 UTC (2:20 a.m. EST), while the upper left image was acquired about one month later on Dec. 26, 2006, at 0030 UTC (or Dec. 25, 2006, at 7:30 p.m. EST). The CRISM data were taken in 544 colors covering the wavelength range from 0.36-3.92 micrometers, and show features as small as about 20 meters (66 feet) across. The images shown above are red-green-blue color composites using wavelengths 0.71, 0.6, and 0.53 micrometers, respectively (or infrared, red, and green light), and are overlain on a mosaic of Mars Odyssey Thermal Emission Imaging System (THEMIS) visible data. Each image covers a region about 11 kilometers (6.6 miles) wide at its narrowest, and they overlap near 71.0 degrees north latitude, 252.8 degrees east longitude. The Earth equivalent to the season and latitude of this site is late summer in northern Canada, above the Arctic Circle. At that season and latitude, Martian weather conditions are transitioning from summer with generally clear skies, occasional weather fronts, and infrequent dust storms, to an autumn with pervasive, thick water-ice clouds. The striking difference in the appearance of the images is caused by the seasonal development of water-ice clouds. The earlier (lower right) image is cloud-free, and surface features can clearly be seen - like the small crater in the upper left. However, the clouds and haze in the later (upper left) image make it hard to see the surface. There are variations in the thickness and spacing of the clouds, just like clouds on Earth. On other days when nearby sites were imaged, the cloud cover varied day-to-day, but as the seasons change the trend is more and thicker clouds. With the onset of autumn the clouds will gradually cover the area and, just as with autumn on Earth, the Martian day is getting shorter at these high northern latitudes. In a few more months this area will settle into winter darkness and be covered in a layer of frost and carbon dioxide snow. CRISM's mission: Find the spectral fingerprints of aqueous and hydrothermal deposits and map the geology, composition and stratigraphy of surface features. The instrument will also watch the seasonal variations in Martian dust and ice aerosols, and water content in surface materials -- leading to new understanding of the climate. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad.
Technique for ship/wake detection
Roskovensky, John K [Albuquerque, NM]
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
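The sketch below loosely follows the steps of the abstract (spectral-flatness masking, neighborhood-based unmasking, spatial-derivative variability, thresholding). The band set, flatness metric, window size, and thresholds are invented for illustration and are not taken from the patented technique.

```python
import numpy as np
from scipy import ndimage

def ship_candidates(reflectance_bands, flatness_thresh=0.05,
                    neighbor_frac=0.6, variability_thresh=0.1):
    """reflectance_bands: array of shape (n_bands, rows, cols)."""
    # 1. Cloud/land mask: spectrally "non-flat" pixels (large band-to-band
    #    spread relative to the mean) are treated as cloud or land.
    mean_refl = reflectance_bands.mean(axis=0)
    spread = reflectance_bands.max(axis=0) - reflectance_bands.min(axis=0)
    masked = (spread / np.maximum(mean_refl, 1e-6)) > flatness_thresh

    # 2. Un-mask masked pixels whose neighborhood is mostly clear, so that
    #    small bright targets such as ships are not thrown away.
    clear_frac = ndimage.uniform_filter((~masked).astype(float), size=5)
    masked &= ~(clear_frac > neighbor_frac)

    # 3. Spatial variability of a reference band over the remaining pixels.
    ref = np.where(masked, np.nan, reflectance_bands[0])
    gy, gx = np.gradient(ref)
    variability = np.hypot(gx, gy)

    # 4. Threshold the variability image to flag possible ship regions.
    return np.nan_to_num(variability) > variability_thresh
```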
3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server
NASA Astrophysics Data System (ADS)
Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.
2017-11-01
The paper presents a collaborative image-based 3D reconstruction pipeline to perform image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.
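Client-side selection of frames by quality and novelty could look roughly like the sketch below, where sharpness is approximated by the variance of the Laplacian and novelty by the fraction of ORB features unmatched against the last accepted frame; the detector choice and thresholds are assumptions, not details from the paper.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def sharpness(gray):
    """Variance of the Laplacian: a common blur/sharpness proxy."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def novelty(gray, prev_descriptors):
    """Fraction of current ORB features not matched to the previous keyframe."""
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return 0.0, prev_descriptors
    if prev_descriptors is None:
        return 1.0, desc
    matches = matcher.match(desc, prev_descriptors)
    return 1.0 - len(matches) / max(len(desc), 1), desc

def select_frames(frames, min_sharpness=100.0, min_novelty=0.4):
    selected, prev_desc = [], None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if sharpness(gray) < min_sharpness:
            continue                       # too blurry to be useful
        nov, desc = novelty(gray, prev_desc)
        if nov >= min_novelty:
            selected.append(i)             # enough new content to send to the server
            prev_desc = desc
    return selected
```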
DSCOVR/EPIC observations of SO2 reveal dynamics of young volcanic eruption clouds
NASA Astrophysics Data System (ADS)
Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Bhartia, P. K.; Prata, F. J.
2017-12-01
Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) and infrared (IR) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these observations from low Earth orbit (LEO) is poor temporal resolution (once per day at low latitudes). Furthermore, most currently operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes, limiting our ability to elucidate processes in fresh, rapidly evolving volcanic eruption clouds. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, from sunrise to sunset, offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and IR sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), we demonstrate how the increased observation frequency afforded by DSCOVR/EPIC permits more timely volcanic eruption detection and novel analyses of the temporal evolution of volcanic clouds. Although EPIC has detected several mid- to high-latitude volcanic eruptions since launch, we focus on recent eruptions of Bogoslof volcano (Aleutian Islands, AK, USA). A series of EPIC exposures from May 28-29, 2017, uniquely captures the evolution of SO2 mass in a young Bogoslof eruption cloud, showing separation of SO2- and ice-rich regions of the cloud. We show how analyses of these sequences of EPIC SO2 data can elucidate poorly understood processes in transient eruption clouds, such as the relative roles of H2S oxidation and ice scavenging in modifying volcanic SO2 emissions. Detection of these relatively small events also proves EPIC's ability to provide timely detection of volcanic clouds in the upper troposphere and lower stratosphere.
Analysis of the security and privacy requirements of cloud-based electronic health records systems.
Rodrigues, Joel J P C; de la Torre, Isabel; Fernández, Gonzalo; López-Coronado, Miguel
2013-08-21
The Cloud Computing paradigm offers eHealth systems the opportunity to enhance the features and functionality that they offer. However, moving patients' medical information to the Cloud implies several risks in terms of the security and privacy of sensitive health records. In this paper, the risks of hosting Electronic Health Records (EHRs) on the servers of third-party Cloud service providers are reviewed. To protect the confidentiality of patient information and facilitate the process, some suggestions for health care providers are made. Moreover, security issues that Cloud service providers should address in their platforms are considered. To show that, before moving patient health records to the Cloud, security and privacy concerns must be considered by both health care providers and Cloud service providers. Security requirements of a generic Cloud service provider are analyzed. To study the latest in Cloud-based computing solutions, bibliographic material was obtained mainly from Medline sources. Furthermore, direct contact was made with several Cloud service providers. Some of the security issues that should be considered by both Cloud service providers and their health care customers are role-based access, network security mechanisms, data encryption, digital signatures, and access monitoring. Furthermore, to guarantee the safety of the information and comply with privacy policies, the Cloud service provider must be compliant with various certifications and third-party requirements, such as SAS70 Type II, PCI DSS Level 1, ISO 27001, and the US Federal Information Security Management Act (FISMA). Storing sensitive information such as EHRs in the Cloud means that precautions must be taken to ensure the safety and confidentiality of the data. A relationship built on trust with the Cloud service provider is essential to ensure a transparent process. Cloud service providers must make certain that all security mechanisms are in place to avoid unauthorized access and data breaches. Patients must be kept informed about how their data are being managed.
A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission
NASA Technical Reports Server (NTRS)
Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.
2011-01-01
This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.
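One common way to build such a 3D field (and, to the best of my understanding, the general idea behind scene constructors of this kind, though the actual EarthCARE algorithm is more sophisticated) is radiance matching: each off-nadir imager column inherits the retrieved profile of the nadir column whose imager radiances it most resembles. The sketch below illustrates that idea with invented array shapes.

```python
import numpy as np

def construct_3d_scene(imager_radiances, nadir_radiances, nadir_profiles):
    """
    imager_radiances : (rows, cols, n_channels) radiances of the full swath
    nadir_radiances  : (n_nadir, n_channels) radiances along the nadir track
    nadir_profiles   : (n_nadir, n_levels) retrieved cloud property profiles
    returns          : (rows, cols, n_levels) constructed 3-D field
    """
    rows, cols, nch = imager_radiances.shape
    flat = imager_radiances.reshape(-1, nch)
    # Squared Euclidean distance of every swath pixel to every nadir pixel.
    d2 = ((flat[:, None, :] - nadir_radiances[None, :, :]) ** 2).sum(axis=2)
    best = d2.argmin(axis=1)                  # index of the most similar nadir pixel
    return nadir_profiles[best].reshape(rows, cols, -1)

# Example with random placeholder data.
rng = np.random.default_rng(2)
scene = construct_3d_scene(rng.random((50, 60, 4)),
                           rng.random((50, 4)),
                           rng.random((50, 20)))
print(scene.shape)   # (50, 60, 20)
```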
NASA Astrophysics Data System (ADS)
Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.
2018-05-01
In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast and high quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys allows, for example, analysis of torrent dynamics even in complex topographic settings such as debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent - North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.
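Localizing geomorphological change from two DTMs is typically done with a DEM of Difference (DoD) thresholded at a level of detection. The sketch below shows that standard calculation; the threshold, cell size, and grids are hypothetical and not taken from this study.

```python
import numpy as np

def dem_of_difference(dtm_new, dtm_old, level_of_detection=0.10):
    """Elevation change (m), masking differences below the detection limit."""
    dod = dtm_new - dtm_old
    dod[np.abs(dod) < level_of_detection] = np.nan   # treat as no detectable change
    return dod

def volumes(dod, cell_size=0.5):
    """Erosion and deposition volumes (m^3) from a DoD grid."""
    d = np.nan_to_num(dod)                 # NaN (no detectable change) counts as zero
    cell_area = cell_size ** 2
    deposition = d[d > 0].sum() * cell_area
    erosion = -d[d < 0].sum() * cell_area
    return erosion, deposition

# Example with synthetic 100 x 100 DTMs and a small scour patch.
rng = np.random.default_rng(3)
old = rng.normal(500.0, 0.02, size=(100, 100))
new = old.copy()
new[40:60, 40:60] -= 0.5
print(volumes(dem_of_difference(new, old)))
```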
Stellar Debris in the Large Magellanic Cloud
2006-12-08
This is a composite image of N49, the brightest supernova remnant in optical light in the Large Magellanic Cloud; the image combines data from the Chandra X-ray Observatory (blue) and NASA's Spitzer Space Telescope (red).
1998-06-04
This processed color image of Jupiter was produced in 1990 by the U.S. Geological Survey from a Voyager image captured in 1979. Zones of light-colored, ascending clouds alternate with bands of dark, descending clouds. http://photojournal.jpl.nasa.gov/catalog/PIA00343
NASA CloudSat Captures Hurricane Daniel Transformation
2006-07-25
Hurricane Daniel intensified between July 18 and July 23. NASA's new CloudSat satellite was able to capture and confirm this transformation in its side-view images of Hurricane Daniel, as seen in this series of images.
2007-06-28
This image was taken on June 26, 2007, at 20:00 UTC. In this image an obvious storm hangs over the middle of the United States. Figure 1 shows NASA's CloudSat data looking, in profile, at the cloud in this storm.
Voyager 2 Jupiter Eruption Movie
NASA Technical Reports Server (NTRS)
2000-01-01
This movie records an eruptive event in the southern hemisphere of Jupiter over a period of 8 Jupiter days. Prior to the event, an undistinguished oval cloud mass cruised through the turbulent atmosphere. The eruption occurs over a very short time at the very center of the cloud. The white eruptive material is swirled about by the internal wind patterns of the cloud. As a result of the eruption, the cloud then becomes a type of feature seen elsewhere on Jupiter known as 'spaghetti bowls'.
As Voyager 2 approached Jupiter in 1979, it took images of the planet at regular intervals. This sequence is made from 8 images taken once every Jupiter rotation period (about 10 hours). These images were acquired in the Violet filter around May 6, 1979. The spacecraft was about 50 million kilometers from Jupiter at that time. This time-lapse movie was produced at JPL by the Image Processing Laboratory in 1979.
Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas
2010-01-01
This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
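The evaluation metrics quoted above (scene cloud fraction error and pixel-level agreement with a manual mask) are straightforward to compute; the sketch below shows one way to do it on boolean masks, with synthetic data standing in for Landsat scenes.

```python
import numpy as np

def mask_scores(auto_mask, manual_mask):
    """Scene cloud fraction bias and pixel-level agreement between two masks."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    cloud_fraction_error = auto.mean() - manual.mean()      # scene-level bias
    pixel_agreement = np.mean(auto == manual)                # fraction of matching pixels
    return cloud_fraction_error, pixel_agreement

# Example on a synthetic scene with ~5% pixel-level disagreement.
rng = np.random.default_rng(4)
manual = rng.random((512, 512)) > 0.7
auto = manual.copy()
flip = rng.random(manual.shape) < 0.05
auto[flip] = ~auto[flip]
print(mask_scores(auto, manual))
```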
Seasonality of Forcing by Carbonaceous Aerosols
NASA Astrophysics Data System (ADS)
Habib, G.; Bond, T.; Rasch, P. J.; Coleman, D.
2006-12-01
Aerosols can influence the energy balance of the Earth-atmosphere system, with profound effects on regional climate. Atmospheric processes, such as convection, scavenging, wet and dry deposition, govern the lifetime and location of aerosol; emissions affect its quantity and location. Both affect climate forcing. Here we investigate the effect of seasonality in emissions and atmospheric processes on radiative forcing by carbonaceous aerosols, focusing on aerosol from fossil fuel and biofuel. Because aerosol lifetime is seasonal, ignoring the seasonality of sources such as residential biofuel may introduce a bias in aerosol burden and therefore in predicted climate forcing. We present a global emission inventory of carbonaceous aerosols with seasonality, and simulate atmospheric concentrations using the Community Atmosphere Model (CAM). We discuss where and when the seasonality of emissions and atmospheric processes has strong effects on atmospheric burden, lifetime, climate forcing and aerosol optical depth (AOD). Previous work has shown that aerosol forcing is higher in summer than in winter, and has identified the importance of aerosol above cloud in determining black carbon forcing. We show that predicted cloud height is a very important factor in determining normalized radiative forcing (forcing per mass), especially in summer. This can affect the average summer radiative forcing by nearly 50%. Removal by cloud droplets is the dominant atmospheric cleansing mechanism for carbonaceous aerosols. We demonstrate the modeled seasonality of removal processes and compare the importance of scavenging by warm and cold clouds. Both types of clouds contribute significantly to aerosol removal. We estimate uncertainty in direct radiative forcing due to scavenging by tagging the aerosol which has experienced cloud interactions. Finally, seasonal variations offer an opportunity to assess modeled processes when a single process dominates variability. We identify regions where aerosol burden is most sensitive to convection and scavenging in warm and cold clouds, and compare seasonally modeled AOD with that retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS).
First observations of volcanic eruption clouds from L1 by DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Hughes, E. J.; Bhartia, P. K.; Prata, F.
2016-12-01
Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these UV observations from low-Earth orbit has been poor temporal resolution. Timeliness can be crucial when detecting hazardous volcanic eruption clouds that threaten aviation, and most operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, potentially offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and infrared (IR) sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), the increased observation frequency afforded by DSCOVR/EPIC will permit more timely volcanic eruption detection, improved trajectory modeling, and novel analyses of the temporal evolution of volcanic clouds. We demonstrate the sensitivity of EPIC UV radiances to volcanic clouds using examples from the first year of EPIC observations including the December 2015 paroxysmal eruption of Etna volcano (Italy). When combined with OMI and OMPS measurements, the EPIC SO2 data permit hourly tracking of the Etna eruption cloud as it drifts away from the volcano. We also describe ongoing efforts to adapt existing UV backscatter (BUV) algorithms to produce operational EPIC SO2 and Ash Index (AI) products.
Progress in Near Real-Time Volcanic Cloud Observations Using Satellite UV Instruments
NASA Astrophysics Data System (ADS)
Krotkov, N. A.; Yang, K.; Vicente, G.; Hughes, E. J.; Carn, S. A.; Krueger, A. J.
2011-12-01
Volcanic clouds from explosive eruptions can wreak havoc in many parts of the world, as exemplified by the 2010 eruption at the Eyjafjöll volcano in Iceland, which caused widespread disruption to air traffic and resulted in economic impacts across the globe. A suite of satellite-based systems offers the most effective means to monitor active volcanoes and to track the movement of volcanic clouds globally, providing critical information for aviation hazard mitigation. Satellite UV sensors, as part of this suite, have a long history of making unique near-real-time (NRT) measurements of sulfur dioxide (SO2) and ash (Aerosol Index) in volcanic clouds to supplement operational volcanic ash monitoring. Recently a NASA application project has shown that the use of near real-time (NRT, i.e., not older than 3 h) Aura/OMI satellite data produces a marked improvement in volcanic cloud detection using SO2 combined with Aerosol Index (AI) as a marker for ash. An operational online NRT OMI AI and SO2 image and data product distribution system was developed in collaboration with the NOAA Office of Satellite Data Processing and Distribution. Automated volcanic eruption alarms and the production of volcanic cloud subsets for multiple regions are provided through the NOAA website. The data provide valuable information in support of the U.S. Federal Aviation Administration's goal of a safe and efficient National Air Space. In this presentation, we will highlight the advantages of UV techniques and describe the advances in volcanic SO2 plume height estimation and enhanced volcanic ash detection using hyper-spectral UV measurements, illustrated with Aura/OMI observations of recent eruptions. We will share our plan to provide a near-real-time volcanic cloud monitoring service using the Ozone Mapping and Profiler Suite (OMPS) on the Joint Polar Satellite System (JPSS).
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi
2017-09-15
Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering the missing information corrupted by thick clouds patch-by-patch. A feature dictionary was learned from exemplars in the cloud-free regions, which was later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was brought in to encourage first filling-in of missing patches on image structures. The optimization model of patch inpainting was formulated under the adaptive neighborhood-consistency constraint, which was solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and true clouds. Comparisons and experiments show that our method can not only keep structures and textures consistent with the surrounding ground information, but also yield rare smoothing effect and block effect, which is more suitable for the removal of clouds from high-spatial resolution RS imagery with salient structures and abundant textured features.
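A heavily condensed sketch of the core idea, learning a dictionary from cloud-free patches and recovering a masked patch by sparse coding on its observed pixels only, is given below. It omits the structure-sparsity and neighborhood-consistency terms of the actual method, and the patch size, dictionary size, and sparsity level are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

def learn_dictionary(cloudfree_patches, n_atoms=128):
    """cloudfree_patches: (n_patches, patch_pixels) array of exemplars."""
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    dl.fit(cloudfree_patches)
    return dl.components_                      # shape (n_atoms, patch_pixels)

def inpaint_patch(patch, known_mask, dictionary, n_nonzero=10):
    """Estimate a full patch from its known (cloud-free) pixels via sparse coding."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    # Fit the sparse code using only the observed pixels of the patch.
    omp.fit(dictionary[:, known_mask].T, patch[known_mask])
    reconstruction = dictionary.T @ omp.coef_
    # Keep the observed pixels and fill in only the missing (cloudy) ones.
    result = patch.copy()
    result[~known_mask] = reconstruction[~known_mask]
    return result
```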
Specular Reflection of Sunlight from Earth
NASA Astrophysics Data System (ADS)
Varnai, T.; Marshak, A.
2018-02-01
The Deep Space Gateway vantage point offers advantages in observing specular reflection from water surfaces or ice crystals in clouds. Such data can give information on clouds and atmospheric aerosols, and help test algorithms of future exoplanet characterization.
Stereo Cloud Height and Wind Determination Using Measurements from a Single Focal Plane
NASA Astrophysics Data System (ADS)
Demajistre, R.; Kelly, M. A.
2014-12-01
We present here a method for extracting cloud heights and winds from an aircraft or orbital platform using measurements from a single focal plane, exploiting the motion of the platform to provide multiple views of the cloud tops. To illustrate this method we use data acquired during aircraft flight tests of a set of simple stereo imagers that are well suited to this purpose. Each of these imagers has three linear arrays on the focal plane, one looking forward, one looking aft, and one looking down. Push-broom images from each of these arrays are constructed, and then a spatial correlation analysis is used to deduce the delays and displacements required for wind and cloud height determination. We will present the algorithms necessary for the retrievals, as well as the methods used to determine the uncertainties of the derived cloud heights and winds. We will apply the retrievals and uncertainty determination to a number of image sets acquired by the airborne sensors. We then generalize these results to potential space based observations made by similar types of sensors.
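The essence of the retrieval, finding the along-track delay between the forward and nadir strips by correlation and converting it to height through the viewing geometry, can be sketched as follows. The altitude, speed, and look angle are illustrative, and the height formula below ignores cloud motion, which the full retrieval must account for.

```python
import numpy as np

def feature_delay(forward_strip, nadir_strip):
    """Along-track sample delay that best aligns the two 1-D image strips."""
    f = forward_strip - forward_strip.mean()
    n = nadir_strip - nadir_strip.mean()
    corr = np.correlate(n, f, mode="full")
    return corr.argmax() - (len(f) - 1)

def cloud_height(delay_samples, sample_time_s, platform_speed_ms,
                 platform_altitude_m, look_angle_deg):
    """Height from the along-track parallax, ignoring cloud motion (wind)."""
    parallax_m = delay_samples * sample_time_s * platform_speed_ms
    # The forward view intersects the ground tan(look_angle) * H ahead of nadir;
    # a feature at height h is seen (H - h) * tan(look_angle) ahead instead.
    full_baseline_m = platform_altitude_m * np.tan(np.deg2rad(look_angle_deg))
    return platform_altitude_m * (1.0 - parallax_m / full_baseline_m)

# Example: a delay of 400 samples at 0.01 s sampling, 200 m/s aircraft speed,
# 10 km altitude and a 45-degree forward look angle.
print(cloud_height(400, 0.01, 200.0, 10000.0, 45.0))   # about 2 km
```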
Jupiter's Equatorial Region in a Methane band (Time set 3)
NASA Technical Reports Server (NTRS)
1997-01-01
Mosaic of an equatorial 'hotspot' on Jupiter at 889 nanometers (nm). The mosaic covers an area of 34,000 kilometers by 11,000 kilometers. Light at 889 nm is strongly absorbed by atmospheric methane. This image shows the features of a hazy cloud layer tens of kilometers above Jupiter's main visible cloud deck. This haze varies in height but appears to be present over the entire region. Small patches of very bright clouds may be similar to terrestrial thunderstorms. The dark region near the center of the mosaic is an equatorial 'hotspot' similar to the Galileo Probe entry site. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance.
North is at the top. The mosaic covers latitudes 1 to 10 degrees and is centered at longitude 336 degrees West. The planetary limb runs along the right edge of the image. Cloud patterns appear foreshortened as they approach the limb. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers by the Solid State Imaging system aboard NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Astrophysics Data System (ADS)
Abdelmonem, A.; Schnaiter, M.; Amsler, P.; Hesse, E.; Meyer, J.; Leisner, T.
2011-10-01
Studying the radiative impact of cirrus clouds requires knowledge of the relationship between their microphysics and the single scattering properties of cloud particles. Usually, this relationship is obtained by modeling the optical scattering properties from in situ measurements of ice crystal size distributions. The measured size distribution and the assumed particle shape might be erroneous in case of non-spherical ice particles. We present here a novel optical sensor (the Particle Habit Imaging and Polar Scattering probe, PHIPS) designed to measure simultaneously the 3-D morphology and the corresponding optical and microphysical parameters of individual cloud particles. Clouds containing particles ranging from a few micrometers to about 800 μm in diameter can be characterized systematically with an optical resolution power of 2 μm and polar scattering resolution of 1° for forward scattering directions (from 1° to 10°) and 8° for side and backscattering directions (from 18° to 170°). The maximum acquisition rates for scattering phase functions and images are 262 kHz and 10 Hz, respectively. Some preliminary results collected in two ice cloud campaigns conducted in the AIDA cloud simulation chamber are presented. PHIPS showed reliability in operation and produced size distributions and images comparable to those given by other certified cloud particle instruments. A 3-D model of a hexagonal ice plate is constructed and the corresponding scattering phase function is compared to that modeled using the Ray Tracing with Diffraction on Facets (RTDF) program. PHIPS is a highly promising novel airborne optical sensor for studying the radiative impact of cirrus clouds and correlating the particle habit-scattering properties which will serve as a reference for other single, or multi-independent, measurement instruments.
Venus - Lower-level Nightside Clouds As Seen By NIMS
NASA Technical Reports Server (NTRS)
1990-01-01
These images are two versions of a near-infrared map of lower-level clouds on the night side of Venus, obtained by the Near Infrared Mapping Spectrometer aboard the Galileo spacecraft as it approached the planet February 10, 1990. Taken from an altitude of about 22,000 miles above the planet, at an infrared wavelength of 2.3 microns (about three times the longest wavelength visible to the human eye) the map shows an area of the turbulent, cloudy middle atmosphere some 30-33 miles above the surface, 6-10 miles below the visible cloudtops. With a spatial resolution of about 13 miles, this is the sharpest image ever obtained of the mid-level clouds of Venus. The image to the left shows the radiant heat from the lower atmosphere (about 400 degrees Fahrenheit) shining through the sulfuric acid clouds, which appear as much as 10 times darker than the bright gaps between clouds. This cloud layer is at about -30 degrees Fahrenheit, at a pressure about 1/2 Earth's atmospheric pressure. This high-resolution map covers a 40-degree-wide sector of the Northern Hemisphere. The several irregular vertical stripes are data dropouts. The right image, a modified negative, represents what scientists believe would be the visual appearance of this mid-level cloud deck in daylight, with the clouds reflecting sunlight instead of blocking out infrared from the hot planet and lower atmosphere. Near the equator, the clouds appear fluffy and blocky; farther north, they are stretched out into East-West filaments by winds estimated at more than 150 mph, while the poles are capped by thick clouds at this altitude. The Near Infrared Mapping Spectrometer (NIMS) on the Galileo spacecraft is a combined mapping (imaging) and spectral instrument. It can sense 408 contiguous wavelengths from 0.7 microns (deep red) to 5.2 microns, and can construct a map or image by mechanical scanning. It can spectroscopically analyze atmospheres and surfaces and construct thermal and chemical maps. Designed and operated by scientists and engineers at the Jet Propulsion Laboratory, NIMS involves 15 scientists in the U.S., England, and France. The Galileo Project is managed for NASA's Office of Space Science and Applications by JPL; its mission is to study the planet Jupiter and its satellites and magnetosphere after multiple gravity-assist flybys at Venus and the Earth.
Characterization of clouds in Titan's tropical atmosphere
Griffith, C.A.; Penteado, P.; Rodriguez, S.; Le Mouélic, S.; Baines, K.H.; Buratti, B.; Clark, R.; Nicholson, P.; Jaumann, R.; Sotin, Christophe
2009-01-01
Images of Titan's clouds, possible over the past 10 years, indicate primarily discrete convective methane clouds near the south and north poles and an immense stratiform cloud, likely composed of ethane, around the north pole. Here we present spectral images from Cassini's Visual Mapping Infrared Spectrometer that reveal the increasing presence of clouds in Titan's tropical atmosphere. Radiative transfer analyses indicate similarities between summer polar and tropical methane clouds. Like their southern counterparts, tropical clouds consist of particles exceeding 5 μm. They display discrete structures suggestive of convective cumuli. They prevail at a specific latitude band between 8°-20° S, indicative of a circulation origin and the beginning of a circulation turnover. Yet, unlike the high latitude clouds that often reach 45 km altitude, these discrete tropical clouds, so far, remain capped to altitudes below 26 km. Such low convective clouds are consistent with the highly stable atmospheric conditions measured at the Huygens landing site. Their characteristics suggest that Titan's tropical atmosphere has a dry climate unlike the south polar atmosphere, and despite the numerous washes that carve the tropical landscape. © 2009. The American Astronomical Society.
Retrieval of Cloud Properties for Partially Cloud-Filled Pixels During CRYSTAL-FACE
NASA Astrophysics Data System (ADS)
Nguyen, L.; Minnis, P.; Smith, W. L.; Khaiyer, M. M.; Heck, P. W.; Sun-Mack, S.; Uttal, T.; Comstock, J.
2003-12-01
Partially cloud-filled pixels can be a significant problem for remote sensing of cloud properties. Generally, the optical depth and effective particle sizes are often too small or too large, respectively, when derived from radiances that are assumed to be overcast but contain radiation from both clear and cloud areas within the satellite imager field of view. This study presents a method for reducing the impact of such partially cloud-filled pixels by estimating the cloud fraction within each pixel using higher resolution visible (VIS, 0.65 μm) imager data. Although the nominal resolutions for most channels on the Geostationary Operational Environmental Satellite (GOES) imager and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra are 4 and 1 km, respectively, both instruments also take VIS channel data at 1 km and 0.25 km, respectively. Thus, it may be possible to obtain an improved estimate of cloud fraction within the lower resolution pixels by using the information contained in the higher resolution VIS data. GOES and MODIS multi-spectral data, taken during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE), are analyzed with the algorithm used for the Atmospheric Radiation Measurement Program (ARM) and the Clouds and Earth's Radiant Energy System (CERES) to derive cloud amount, temperature, height, phase, effective particle size, optical depth, and water path. Normally, the algorithm assumes that each pixel is either entirely clear or cloudy. In this study, a threshold method is applied to the higher resolution VIS data to estimate the partial cloud fraction within each low-resolution pixel. The cloud properties are then derived from the observed low-resolution radiances using the cloud cover estimate to properly extract the radiances due only to the cloudy part of the scene. This approach is applied to both GOES and MODIS data to estimate the improvement in the retrievals for each resolution. Results are compared with the radar reflectivity techniques employed by the NOAA ETL MMCR and the PARSL 94 GHz radars located at the CRYSTAL-FACE Eastern & Western Ground Sites, respectively. This technique is most likely to yield improvements for low and midlevel layer clouds that have little thermal variability in cloud height.
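The correction described here can be written compactly: estimate the sub-pixel cloud fraction from thresholded high-resolution VIS data, then remove the clear-sky contribution from the low-resolution radiance before the cloud retrieval. The sketch below uses an illustrative 4:1 resolution ratio, VIS threshold, and clear-sky reflectance rather than the operational values.

```python
import numpy as np

def subpixel_cloud_fraction(hires_vis, ratio=4, vis_threshold=0.3):
    """Fraction of cloudy high-res pixels inside each low-res pixel."""
    cloudy = hires_vis > vis_threshold
    rows, cols = cloudy.shape
    blocks = cloudy.reshape(rows // ratio, ratio, cols // ratio, ratio)
    return blocks.mean(axis=(1, 3))

def cloudy_part_radiance(lowres_radiance, cloud_fraction, clear_radiance):
    """Radiance attributable to the cloudy portion of each low-res pixel."""
    f = np.clip(cloud_fraction, 1e-3, 1.0)      # avoid dividing by zero
    return (lowres_radiance - (1.0 - f) * clear_radiance) / f
```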
NASA Technical Reports Server (NTRS)
2007-01-01
Thick haze collected over the Beijing region in late March 2007. Earlier that month, the BBC News reported that an international team of scientists had documented how increasing pollution in China led to decreasing rainfall over the region. The Moderate Resolution Imaging Spectroradiometer (MODIS) flying onboard the Aqua satellite captured these images of the Beijing region on March 22, 2007. The top image is a 'true-color' picture, similar to a digital photo. The bottom, 'false-color,' image uses a combination of visible and infrared light to more clearly show vegetation, water, and clouds. Even sparse vegetation appears bright green, while water appears deep blue (bright blue when tinged with sediment). Clouds dominated by water droplets appear white, while clouds made of ice crystals appear light blue. The false-color image highlights water bodies, perhaps aqua-culture ponds, that are all but invisible in the true-color image, especially along the shores of the Bo Hai. While vegetation and water show up more clearly in the false-color image, haze is much more transparent. Although dingy gray haze dominates the true-color picture, it is all but invisible in the false-color view. The haze 'disappears' in the infrared-enhanced image because tiny haze particles do not reflect longer-wavelength infrared light very well, making this type of image useful for distinguishing haze from clouds. The bank of clouds in the upper right corner shows up clearly in both pictures. As China industrializes, factories, power plants, and automobiles all contribute to pollution in the region. In examining pollutants and rainfall, the team of scientists examined records covering more than 50 years, concluding that pollution decreased precipitation at Mount Hua near Xi'an in central China. They concluded that when conditions are so hazy that visibility is reduced to less than 8 kilometers (5 miles), hilly precipitation can drop by 30 to 50 percent. When moist air passes over mountains, it usually cools and forms raindrops, but heavy pollutant concentrations cause the clouds to hang on to their moisture.
Cloud Spirals and Outflow in Tropical Storm Katrina
NASA Technical Reports Server (NTRS)
2005-01-01
On Tuesday, August 30, 2005, NASA's Multi-angle Imaging SpectroRadiometer retrieved cloud-top heights and cloud-tracked wind velocities for Tropical Storm Katrina, as the center of the storm was situated over the Tennessee valley. At this time Katrina was weakening and no longer classified as a hurricane, and would soon become an extratropical depression. Measurements such as these can help atmospheric scientists compare results of computer-generated hurricane simulations with observed conditions, ultimately allowing them to better represent and understand physical processes occurring in hurricanes. Because air currents are influenced by the Coriolis force (caused by the rotation of the Earth), Northern Hemisphere hurricanes are characterized by an inward counterclockwise (cyclonic) rotation towards the center. It is less widely known that, at high altitudes, outward-spreading bands of cloud rotate in a clockwise (anticyclonic) direction. The image on the left shows the retrieved cloud-tracked winds as red arrows superimposed across the natural color view from MISR's nadir (vertical-viewing) camera. Both the counter-clockwise motion for the lower-level storm clouds and the clockwise motion for the upper clouds are apparent in these images. The speeds for the clockwise upper level winds have typical values between 40 and 45 m/s (144-162 km/hr). The low level counterclockwise winds have typical values between 7 and 24 m/s (25-86 km/hr), weakening with distance from the storm center. The image on the right displays the cloud-top height retrievals. Areas where cloud heights could not be retrieved are shown in dark gray. Both the wind velocity vectors and the cloud-top height field were produced by automated computer recognition of displacements in spatial features within successive MISR images acquired at different view angles and at slightly different times. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously, viewing the entire globe between 82° north and 82° south latitude every nine days. This image covers an area of about 380 kilometers by 1970 kilometers. These data products were generated from a portion of the imagery acquired during Terra orbit 30324 and utilize data from blocks 55-68 within World Reference System-2 path 22. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is managed for NASA by the California Institute of Technology.
Multi-layer Clouds Over the South Indian Ocean
NASA Technical Reports Server (NTRS)
2003-01-01
The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderby Land, East Antarctica. The image at left was created by overlaying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan and green hues of this aesthetic display indicate low, medium or high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through some thin, high cirrus clouds. The right-hand panel is a natural-color image from MISR's 70-degree backward viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud-tops to be brightly outlined by the sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles, and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Results of ACTIM: an EDA study on spectral laser imaging
NASA Astrophysics Data System (ADS)
Hamoir, Dominique; Hespel, Laurent; Déliot, Philippe; Boucher, Yannick; Steinvall, Ove; Ahlberg, Jörgen; Larsson, Hakan; Letalick, Dietmar; Lutzmann, Peter; Repasi, Endre; Ritt, Gunnar
2011-11-01
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap and recommendations. Passive multi- and hyper-spectral imaging allows discrimination between materials, but the radiance measured at the sensor is difficult to relate to spectral reflectance because it depends on, e.g., solar angle, clouds and shadows. In contrast, active spectral imaging offers complete control of the illumination, thus eliminating these effects. In addition it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage...) or retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence fusing the know-how of ladar and passive spectral imaging will result in new capabilities. We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat detection. We present the system analyses that have been performed to confirm the benefits, limitations and requirements of spectral active imaging in these three prioritized applications.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Kaufman, Yoram J.
1995-01-01
Using spectral imaging data acquired with the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) from an ER-2 aircraft at 20 km altitude during various field programs, it was found that narrow channels near the center of the strong 1.38-micrometer water vapor band are very effective in detecting thin cirrus clouds. Based on this observation from AVIRIS data, Gao and Kaufman proposed to put a channel centered at 1.375 micrometers with a width of 30 nm on the Moderate Resolution Imaging Spectroradiometer (MODIS) for remote sensing of cirrus clouds from space. The sensitivity of the 1.375-micrometer MODIS channel for detecting thin cirrus clouds during the daytime is expected to be one to two orders of magnitude better than that of current infrared emission techniques. As a result, a much larger fraction of the satellite data is expected to be identified as being covered by cirrus clouds, some of them so thin that their obscuration of the surface is very small. In order to make better studies of surface reflectance properties, thin cirrus effects must be removed from satellite images. Therefore, there is a need to study the radiative properties of thin cirrus clouds so that a strategy for correction or removal of thin cirrus effects, similar to the correction of the atmospheric aerosol effect, can be developed. In this extended abstract, we describe an empirical approach for removing/correcting thin cirrus effects in AVIRIS images using channels near 1.375 microns - one step beyond the detection of cirrus clouds using these channels.
Automatic Mosaicking of Satellite Imagery Considering the Clouds
NASA Astrophysics Data System (ADS)
Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang
2016-06-01
With the rapid development of high resolution remote sensing for earth observation technology, satellite imagery is widely used in the fields of resource investigation, environment protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, clouds cause two main problems for automatic image mosaicking: 1) image blurring may be introduced during the process of image dodging, and 2) automatically generated seamlines may pass through cloudy areas. To address these problems, an automatic mosaicking method is proposed for cloudy satellite imagery in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, cloud detection results are used to optimize the processes of dodging and mosaicking. Thus, the mosaic image can be composed of more clear-sky areas instead of cloudy areas, and the clear-sky areas remain sharp and undistorted. The Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the placement of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer. The efficiency can meet the general production requirements for massive satellite imagery.
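A minimal sketch of the cloud-extraction step described above, assuming a single bright band in which clouds are the brightest pixels; the scikit-image functions are real, but the structuring-element sizes and the minimum-region filter are illustrative choices, not the authors' settings:

```python
# Illustrative cloud-area extraction via Otsu thresholding plus morphological
# cleanup; a sketch of the general technique, not the paper's implementation.
import numpy as np
from skimage import filters, morphology

def cloud_mask_and_cover(band, min_region_px=64):
    """Return a boolean cloud mask and the cloud-cover percentage.
    band: 2-D array (reflectance or DN) where clouds are assumed brightest."""
    threshold = filters.threshold_otsu(band)                     # global Otsu split
    mask = band > threshold                                      # bright pixels -> cloud
    mask = morphology.binary_opening(mask, morphology.disk(3))   # remove speckle
    mask = morphology.remove_small_objects(mask, min_size=min_region_px)
    mask = morphology.binary_closing(mask, morphology.disk(3))   # fill small gaps
    return mask, 100.0 * mask.mean()

# Usage on a synthetic stand-in for one image band:
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (512, 512))
img[100:200, 100:220] = 0.8        # synthetic bright "cloud" patch
mask, cover = cloud_mask_and_cover(img)
print(f"cloud cover: {cover:.1f}%")
```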
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.
2011-12-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server (vCDS) is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.
Application of Ifsar Technology in Topographic Mapping: JUPEM's Experience
NASA Astrophysics Data System (ADS)
Zakaria, Ahamad
2018-05-01
The application of Interferometric Synthetic Aperture Radar (IFSAR) in topographic mapping has increased during the past decades. This is due to the advantages that IFSAR technology offers in solving data acquisition problems in tropical regions. Unlike aerial photography, radar technology offers wave penetration through cloud cover, fog and haze. As a consequence, images can be acquired largely free of such atmospheric effects. In Malaysia, the Department of Survey and Mapping Malaysia (JUPEM) has been utilizing IFSAR products since 2009 to update topographic maps at 1:50,000 scale. Orthorectified radar imagery (ORI), Digital Surface Models (DSM) and Digital Terrain Models (DTM) procured under the project have been further processed before the products are ingested into a revamped mapping workflow consisting of stereo and mono digitizing processes. The paper highlights the experience of the Department of Survey and Mapping Malaysia (DSMM)/JUPEM in using this technology to speed up mapping production.
NASA Astrophysics Data System (ADS)
Anzalone, Anna; Isgrò, Francesco
2016-10-01
The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering the cloud-top height from satellite images in infrared has struck some attention over the last few decades, as a valuable tool for the atmospheric monitoring. A number of radiative methods do exist, like C02 slicing and Split Window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views, and recover the cloud top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process that attempts to match a characteristic point or features selected in one image, with one of those detected in the second image. In this article the performance of a group matching algorithms that include both area-based and global techniques, has been tested. They are applied to stereo pairs of satellite IR images with the final aim of evaluating the cloud top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation 9 and 10 (MSG-2, MSG-3) have been selected. After having applied to the cloudy scenes the algorithms for stereo matching, the outcoming maps of disparity are transformed in depth maps according to the geometry of the reference data system. As ground truth we have used the height maps provided by the database of MODIS (Moderate Resolution Imaging Spectroradiometer) on-board Terra/Aqua polar satellites, that contains images quasi-synchronous to the imaging provided by MSG.
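A minimal sketch of the area-based matching step, assuming an epipolar-rectified pair of 8-bit IR images; OpenCV's semi-global matcher is used here as a stand-in for the tested algorithms, and the constant that converts parallax to height is a placeholder rather than the actual MSG-2/MSG-3 viewing geometry:

```python
# Sketch only: SGBM disparity on a rectified IR stereo pair, then a crude
# linear disparity-to-height conversion (the scale factor is an assumption).
import cv2
import numpy as np

def cloud_top_height(left_ir, right_ir, meters_per_px_disparity=450.0):
    """left_ir, right_ir: 8-bit grayscale, epipolar-rectified images."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,   # must be a multiple of 16
                                    blockSize=7)
    disp = matcher.compute(left_ir, right_ir).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                  # unmatched / invalid pixels
    return disp * meters_per_px_disparity     # rough height map in meters
```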
NASA Astrophysics Data System (ADS)
NOH, Y. J.; Miller, S. D.; Heidinger, A. K.
2015-12-01
Many studies have demonstrated the utility of multispectral information from satellite passive radiometers for detecting and retrieving cloud properties globally, conventionally using shortwave- and thermal-infrared bands. However, the satellite-derived cloud information comes mainly from cloud top or represents a vertically integrated property. This can produce a large bias in determining cloud phase characteristics, in particular for mixed-phase clouds, which are often observed to have supercooled liquid water at cloud top but a predominantly ice phase residing below. Current satellite retrieval algorithms may report these clouds simply as supercooled liquid without any further information regarding the presence of a sub-cloud-top ice phase. More accurate characterization of these clouds is very important for climate models and aviation applications. In this study, we present a physical basis and preliminary results for the development of a supercooled liquid-topped mixed-phase cloud detection algorithm using satellite radiometer observations. The detection algorithm is based on differential absorption properties between liquid and ice particles in the shortwave-infrared bands. Solar reflectance data in narrow bands at 1.6 μm and 2.25 μm are used to optically probe below cloud top to distinguish supercooled liquid-topped clouds with and without an underlying mixed-phase component. Varying solar/sensor geometry and cloud optical properties are also considered. The spectral band combination utilized by the algorithm is currently available on the Suomi NPP Visible/Infrared Imaging Radiometer Suite (VIIRS), the Himawari-8 Advanced Himawari Imager (AHI), and the future GOES-R Advanced Baseline Imager (ABI). When tested on simulated cloud fields from the WRF model and synthetic ABI data, favorable results were obtained with reasonable threat scores (0.6-0.8) and false alarm rates (0.1-0.2). An ARM/NSA case study applied to VIIRS data also indicated the algorithm's promising potential.
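The spectral idea lends itself to a very small band-ratio test. The sketch below only illustrates that idea under stated assumptions; both the direction of the inequality and the threshold value are placeholders, not the study's calibrated algorithm:

```python
# Toy band-ratio flag for supercooled-liquid-topped clouds that may hide ice
# below; threshold and sign convention are illustrative assumptions only.
import numpy as np

def flag_possible_mixed_phase(r_160, r_225, ratio_threshold=0.55):
    """r_160, r_225: 2-D arrays of 1.6-um and 2.25-um cloud reflectance."""
    r_160 = np.asarray(r_160, dtype=float)
    r_225 = np.asarray(r_225, dtype=float)
    ratio = np.divide(r_225, r_160, out=np.zeros_like(r_225), where=r_160 > 0)
    # Assumed convention: a depressed 2.25/1.6 ratio hints at extra absorption
    # by an ice component beneath the liquid cloud top (placeholder logic).
    return ratio < ratio_threshold
```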
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, Steven; Wind, Galina; Arnold, G. Thomas; Dominguez, Roseanne T.
2010-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) Airborne Simulator (MAS) and MODIS/Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Airborne Simulator (MASTER) were used to obtain measurements of the bidirectional reflectance and brightness temperature of clouds at 50 discrete wavelengths between 0.47 and 14.2 microns (12.9 microns for MASTER). These observations were obtained from the NASA ER-2 aircraft as part of the Tropical Composition, Cloud and Climate Coupling (TC4) experiment conducted over Central America and the surrounding Pacific and Atlantic Oceans between 17 July and 8 August 2007. Multispectral images in eleven distinct bands were used to derive a confidence in clear sky (or alternatively the probability of cloud) over land and ocean ecosystems. Based on the results of individual tests run as part of the cloud mask, an algorithm was developed to estimate the phase of the clouds (liquid water, ice, or undetermined phase). The cloud optical thickness and effective radius were derived for both liquid water and ice clouds that were detected during each flight, using a nearly identical algorithm to that implemented operationally to process MODIS cloud data from the Aqua and Terra satellites (Collection 5). This analysis shows that the cloud mask developed for operational use on MODIS, and tested using MAS and MASTER data in TC4, is quite capable of distinguishing both liquid water and ice clouds during daytime conditions over both land and ocean. The cloud optical thickness and effective radius retrievals use five distinct bands of the MAS (or MASTER), and these results were compared with nearly simultaneous retrievals of marine liquid water clouds from MODIS on the Terra spacecraft. Finally, this MODIS-based algorithm was adapted to Multiangle Imaging SpectroRadiometer (MISR) data to infer the cloud optical thickness of liquid water clouds from MISR. Results of this analysis are compared and contrasted.
NASA Technical Reports Server (NTRS)
Short, David A.; Lane, Robert E., Jr.; Winters, Katherine A.; Madura, John T.
2004-01-01
Clouds are highly effective in obscuring optical images of the Space Shuttle taken during its ascent by ground-based and airborne tracking cameras. Because the imagery is used for quick-look and post-flight engineering analysis, the Columbia Accident Investigation Board (CAIB) recommended the return-to-flight effort include an upgrade of the imaging system to enable it to obtain at least three useful views of the Shuttle from lift-off to at least solid rocket booster (SRB) separation (NASA 2003). The lifetimes of individual cloud elements capable of obscuring optical views of the Shuttle are typically 20 minutes or less. Therefore, accurately observing and forecasting cloud obscuration over an extended network of cameras poses an unprecedented challenge for the current state of observational and modeling techniques. In addition, even the best numerical simulations based on real observations will never reach "truth." In order to quantify the risk that clouds would obscure optical imagery of the Shuttle, a 3D model to calculate probabilistic risk was developed. The model was used to estimate the ability of a network of optical imaging cameras to obtain at least N simultaneous views of the Shuttle from lift-off to SRB separation in the presence of an idealized, randomized cloud field.
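As a rough illustration of the probabilistic question posed above, the following Monte Carlo sketch estimates the chance that at least N cameras in a network keep an unobstructed view under an idealized random cloud field. The per-camera obscuration probability and the independence assumption are simplifications for illustration, not the 3D model developed in the study:

```python
# Monte Carlo sketch: probability that >= n_required of n_cameras have a
# clear line of sight, given an assumed per-camera obscuration probability.
import numpy as np

def prob_at_least_n_views(n_cameras=6, n_required=3, p_obscured=0.4,
                          trials=100_000, seed=1):
    rng = np.random.default_rng(seed)
    clear = rng.random((trials, n_cameras)) > p_obscured   # True = clear view
    return float(np.mean(clear.sum(axis=1) >= n_required))

print(prob_at_least_n_views())   # about 0.82 for the assumed numbers above
```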
Earth Observations taken by Expedition 34 crewmember
2013-01-05
ISS034-E-024622 (5 Jan. 2013) --- Polar mesospheric clouds over the South Pacific Ocean are featured in this image photographed by an Expedition 34 crew member on the International Space Station. Polar mesospheric clouds—also known as noctilucent, or “night shining” clouds—are formed 76 to 85 kilometers above Earth’s surface near the mesosphere-thermosphere boundary of the atmosphere, a region known as the mesopause. At these altitudes, water vapor can freeze into clouds of ice crystals. When the sun is below the horizon such that the ground is in darkness, these high clouds may still be illuminated—lending them their ethereal, “night shining” qualities. Noctilucent clouds have been observed from all human vantage points in both the Northern and Southern Hemispheres – from the surface, in aircraft, and in orbit from the space station—and tend to be most visible during the late spring and early summer seasons. Polar mesospheric clouds also are of interest to scientists studying the atmosphere. While some scientists seek to understand their mechanisms of formation, others have identified them as potential indicators of atmospheric changes resulting from increases in greenhouse gas concentrations. This photograph was taken when the station was over the Pacific Ocean south of French Polynesia. While most polar mesospheric cloud images are taken from the orbital complex with relatively short focal length lens to maximize the field of view, this image was taken with a long lens (400 mm) allowing for additional detail of the cloud forms to be seen. Below the brightly-lit noctilucent clouds in the center of the image, the pale orange band indicates the stratosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassianov, Evgueni I.; Riley, Erin A.; Kleiss, Jessica
Cloud amount is an essential and extensively used macrophysical parameter of cumulus clouds. It is commonly defined as a cloud fraction (CF) from zenith-pointing ground-based active and passive remote sensing. However, conventional retrievals of CF from remote sensing data with a very narrow field-of-view (FOV) may not be representative of the surrounding area. Here we assess its representativeness using an integrated dataset collected at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program's Southern Great Plains (SGP) site in Oklahoma, USA. For our assessment, which focuses on selected days with single-layer cumulus clouds (2005-2016), we include the narrow-FOV ARM Active Remotely Sensed Clouds Locations (ARSCL) and large-FOV Total Sky Imager (TSI) cloud products, the 915-MHz Radar Wind Profiler (RWP) measurements of wind speed and direction, and also high-resolution satellite images from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS). We demonstrate that the root-mean-square difference (RMSD) between the 15-min averaged ARSCL cloud fraction (CF) and the 15-min averaged TSI fractional sky cover (FSC) is large (up to 0.3). We also discuss how the horizontal distribution of clouds can modify the obtained large RMSD, using a new uniformity metric. The latter utilizes the spatial distribution of the FSC over the 100° FOV TSI images obtained with high temporal resolution (30 sec sampling). We demonstrate that cases with a more uniform spatial distribution of FSC show better agreement between the narrow-FOV CF and large-FOV FSC, reducing the RMSD by up to a factor of 2.
WindCam and MSPI: two cloud and aerosol instrument concepts derived from Terra/MISR heritage
NASA Astrophysics Data System (ADS)
Diner, David J.; Mischna, Michael; Chipman, Russell A.; Davis, Ab; Cairns, Brian; Davies, Roger; Kahn, Ralph A.; Muller, Jan-Peter; Torres, Omar
2008-08-01
The Multi-angle Imaging SpectroRadiometer (MISR) has been acquiring global cloud and aerosol data from polar orbit since February 2000. MISR acquires moderately high-resolution imagery at nine view angles from nadir to 70.5°, in four visible/near-infrared spectral bands. Stereoscopic parallax, time lapse among the nine views, and the variation of radiance with angle and wavelength enable retrieval of geometric cloud and aerosol plume heights, height-resolved cloud-tracked winds, and aerosol optical depth and particle property information. Two instrument concepts based upon MISR heritage are in development. The Cloud Motion Vector Camera, or WindCam, is a simplified version comprised of a lightweight, compact, wide-angle camera to acquire multiangle stereo imagery at a single visible wavelength. A constellation of three WindCam instruments in polar Earth orbit would obtain height-resolved cloud-motion winds with daily global coverage, making it a low-cost complement to a spaceborne lidar wind measurement system. The Multiangle SpectroPolarimetric Imager (MSPI) is aimed at aerosol and cloud microphysical properties, and is a candidate for the National Research Council Decadal Survey's Aerosol-Cloud-Ecosystem (ACE) mission. MSPI combines the capabilities of MISR with those of other aerosol sensors, extending the spectral coverage to the ultraviolet and shortwave infrared and incorporating high-accuracy polarimetric imaging. Based on requirements for the nonimaging Aerosol Polarimeter Sensor on NASA's Glory mission, a degree of linear polarization uncertainty of 0.5% is specified within a subset of the MSPI bands. We are developing a polarization imaging approach using photoelastic modulators (PEMs) to accomplish this objective.
Cloud and Radiation Mission with Active and Passive Sensing from the Space Station
NASA Technical Reports Server (NTRS)
Spinhirne, James D.
1998-01-01
A cloud and aerosol radiative forcing and physical process study involving active laser and radar profiling with a combination of passive radiometric sounders and imagers would use the space station as an observation platform. The objectives are to observe the full three dimensional cloud and aerosol structure and the associated physical parameters, leading to a complete measurement of radiation forcing processes. The instruments would include specialized radar and lidar for cloud and aerosol profiling, visible, infrared and microwave imaging radiometers with comprehensive channels for cloud and aerosol observation, and specialized sounders. The low altitude, available power and servicing capability of the space station are significant advantages for the active sensors and multiple passive instruments.
Cloud cover detection combining high dynamic range sky images and ceilometer measurements
NASA Astrophysics Data System (ADS)
Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.
2017-11-01
This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun condition (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as reference. CPC retrievals are in agreement with those derived from the reference pyranometer in 85% of the cases (this agreement does not appear to depend on aerosol size or optical depth). The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high aerosol loads. Cloud cover obtained from the ceilometer alone gives results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, under rapid and strong changes in cloud cover, ceilometer-only retrievals track the real cloud cover less closely.
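For reference, the red-blue-ratio test that CPC is compared against can be written in a few lines. This is a generic sketch of the RBR idea with a placeholder threshold, not a value taken from this study:

```python
# Generic red-blue-ratio (RBR) cloud test on a sky-camera frame; the 0.9
# threshold is a commonly used placeholder, not a value from this paper.
import numpy as np

def rbr_cloud_mask(rgb, threshold=0.9):
    """rgb: float array (H, W, 3); returns True where a pixel looks cloudy."""
    red = rgb[..., 0].astype(float)
    blue = rgb[..., 2].astype(float)
    rbr = np.divide(red, blue, out=np.zeros_like(red), where=blue > 0)
    return rbr > threshold   # clouds scatter red and blue about equally,
                             # while clear sky is strongly blue (low RBR)
```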
NASA Astrophysics Data System (ADS)
Lyu, F.; Cummer, S. A.; Weinert, J. L.; McTague, L. E.; Solanki, R.; Barrett, J.
2014-12-01
Lightning processes radiate extremely wideband electromagnetic signals. Lightning images mapped by VHF interferometry and VHF time-of-arrival lightning mapping arrays enable us to follow the in-cloud development of a flash in detail, something cameras cannot always capture because clouds block the view. Lightning processes radiate electromagnetically over an extremely wide bandwidth, offering the possibility of multispectral lightning radio imaging. Low frequency signals are often used for lightning detection, but usually only for ground point location or thunderstorm tracking. Some recent results have demonstrated lightning LF 3D mapping of discrete lightning pulses, but imaging of continuous LF emissions has not been shown. In this work, we report a GPS-synchronized LF near-field interferometric-TOA 3D lightning mapping array applied to image the development of lightning flashes on a second time scale. Cross-correlation, as used in broadband interferometry, is applied in our system to find windowed arrival time differences with sub-microsecond time resolution. However, because the sources are in the near field of the array, time-of-arrival processing is used to find the source locations with a typical precision of 100 meters. We show that this system images the complete lightning flash structure with thousands of LF sources for extensive flashes. Importantly, this system is able to map both continuous emissions like dart leaders, and bursty or discrete emissions. Lightning stepped leader and dart leader propagation speeds are estimated at 0.56-2.5×10^5 m/s and 0.8-2.0×10^6 m/s respectively, which are consistent with previous reports. In many aspects our LF images are remarkably similar to VHF lightning mapping array images, despite the 1000 times difference in frequency, which may suggest some special links between the LF and VHF emission during lightning processes.
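A minimal sketch of the windowed cross-correlation step used to estimate arrival-time differences between two stations; sub-sample refinement and the near-field TOA solver are omitted, and the sample rate is an assumed value:

```python
# Sketch: arrival-time difference of one signal window between two stations
# via cross-correlation; sign convention and sample rate are assumptions.
import numpy as np

def arrival_time_difference(sig_a, sig_b, sample_rate_hz=1_000_000):
    """Return the lag (seconds) at which sig_a best aligns with sig_b."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(a, b, mode="full")        # all possible lags
    lag_samples = np.argmax(xcorr) - (len(b) - 1)  # zero lag sits at index len(b)-1
    return lag_samples / sample_rate_hz
```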
Determination of Ice Cloud Models Using MODIS and MISR Data
NASA Technical Reports Server (NTRS)
Xie, Yu; Yang, Ping; Kattawar, George W.; Minnis, Patrick; Hu, Yongxiang; Wu, Dong L.
2012-01-01
Representation of ice clouds in radiative transfer simulations is subject to uncertainties associated with the shapes and sizes of ice crystals within cirrus clouds. In this study, we examined several ice cloud models consisting of smooth, roughened, homogeneous and inhomogeneous hexagonal ice crystals with various aspect ratios. The sensitivity of the bulk scattering properties and solar reflectances of cirrus clouds to specific ice cloud models is investigated using the improved geometric optics method (IGOM) and the discrete ordinates radiative transfer (DISORT) model. The ice crystal habit fractions in the ice cloud model may significantly affect the simulations of cloud reflectances. A new algorithm was developed to help determine an appropriate ice cloud model for application to the satellite-based retrieval of ice cloud properties. The ice cloud particle size retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) data, collocated with Multi-angle Imaging Spectroradiometer (MISR) observations, is used to infer the optical thicknesses of ice clouds for nine MISR viewing angles. The relative differences between view-dependent cloud optical thickness and the value averaged over the nine MISR viewing angles can vary from -0.5 to 0.5 and are used to evaluate the ice cloud models. In the case of 2 July 2009, the ice cloud model with mixed ice crystal habits is the best fit to the observations (the root mean square (RMS) error of cloud optical thickness reaches 0.365). This ice cloud model also produces consistent cloud property retrievals for the nine MISR viewing configurations within the measurement uncertainties.
NASA Astrophysics Data System (ADS)
Antioquia, C. T.; Uy, S. N.; Caballa, K.; Lagrosas, N.
2014-12-01
Ground based sky imaging cameras have been used to measure cloud cover over an area to aid radiation budget models. During daytime, certain clouds help decrease atmospheric temperature by obstructing sunrays in the atmosphere. Thus, the detection of clouds plays an important role in the formulation of the radiation budget of the atmosphere. In this study, a wide-angled sky imager (GoPro Hero 2) was brought on board M/Y Vasco to detect and quantify cloud occurrence over the sea during the 2nd 7SEAS field campaign. The camera is one of a number of scientific instruments used to measure weather, aerosol chemistry and solar radiation, among others. The data collection started at the departure from Manila Bay on 05 September 2012 and went on until the end of the cruise (29 September 2012). The camera was placed in a weather-proof box that was then affixed to a steel mast where other instruments were also attached during the cruise. The data have a temporal resolution of 1 minute, and each image is 500x666 pixels in size. Fig. 1a shows the track of the ship during the cruise. The red, blue, hue, saturation, and value components of the pixels are analysed for cloud occurrence. A pixel is considered to "contain" thick cloud if it passes all four threshold parameters (R-B, R/B, (R-B)/(R+B), and HSV, where R is the red pixel value, B is the blue pixel value, and HSV is the hue-saturation-value representation of the pixel) and thin cloud if it passes two or three parameters. Fig. 1b shows the daily analysis of cloud occurrence. Cloud occurrence here is quantified as the ratio of the pixels with cloud to the total number of pixels in the image. The average cloud cover for the days included in this dataset is 87%. These measurements show a big contrast when compared to cloud cover over land (Manila Observatory), which is usually around 67%. During the cruise, only one day (September 6) had an average cloud occurrence below 50%; the rest of the days had averages of 66% or higher - 98% being the highest. This result gives a general picture of how cloud occurrences over land and over sea differ in the South East Asian region. In this study, these cloud occurrences come from local convection and clouds brought about by Southwest Monsoon winds.
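A small sketch of the four-test pixel classification described above; the threshold values, the saturation criterion, and the exact pass counts for "thick" versus "thin" cloud are placeholders, not the settings used in the study:

```python
# Illustrative thick/thin cloud pixel classification from RGB sky images;
# all threshold values below are placeholders, not the study's settings.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def classify_cloud_pixels(rgb):
    """rgb: float array (H, W, 3) scaled to [0, 1].
    Returns 2 where all four tests pass (thick cloud), 1 where two or three
    pass (thin cloud), and 0 otherwise (clear sky)."""
    r, b = rgb[..., 0], rgb[..., 2]
    eps = 1e-6
    tests = np.stack([
        (r - b) > -0.05,                     # R - B test
        (r / (b + eps)) > 0.8,               # R / B test
        ((r - b) / (r + b + eps)) > -0.05,   # (R - B)/(R + B) test
        rgb_to_hsv(rgb)[..., 1] < 0.25,      # low saturation looks white/gray
    ])
    passed = tests.sum(axis=0)
    return np.where(passed == 4, 2, np.where(passed >= 2, 1, 0))
```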
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleissl, J.; Urquhart, B.; Ghonima, M.
During the University of California, San Diego (UCSD) Sky Imager Cloud Position Study, two University of California, San Diego Sky Imagers (USI) (Figure 1) were deployed at the U.S. Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) research facility. The UCSD Sky Imagers were placed 1.7 km apart to allow stereographic determination of the cloud height for clouds above approximately 1.5 km. Images with a 180-degree field of view were captured from both systems during daylight hours every 30 seconds beginning on March 11, 2013 and ending on November 4, 2013. The spatial resolution of the images was 1,748 × 1,748 pixels, and the intensity resolution was 16 bits using a high-dynamic-range capture process. The cameras use a fisheye lens, so the images are distorted following an equisolid angle projection.
NASA Astrophysics Data System (ADS)
Zhang, T.; Lei, B.; Hu, Y.; Liu, K.; Gan, Y.
2018-04-01
Optical remote sensing images have been widely used in feature interpretation and geo-information extraction. All the fundamental applications of optical remote sensing are greatly influenced by cloud coverage. Generally, the availability of cloudless images depends on the meteorological conditions for a given area. In this study, the cloud total amount (CTA) products of the Fengyun (FY) satellite were used to explore the meteorological changes over China through the year. The cloud information in the CTA products was first validated using ZY-3 satellite images. CTA products from 2006 to 2017 were used to obtain relatively reliable results. The window period for cloudless image acquisition over different areas of China was then determined. This research provides a feasible way to determine the cloudless image acquisition window from meteorological observations.
Secure image retrieval with multiple keys
NASA Astrophysics Data System (ADS)
Liang, Haihua; Zhang, Xinpeng; Wei, Qiuhan; Cheng, Hang
2018-03-01
This article proposes a secure image retrieval scheme under a multiuser scenario. In this scheme, the owner first encrypts and uploads images and their corresponding features to the cloud; then, the user submits the encrypted feature of the query image to the cloud; next, the cloud compares the encrypted features and returns encrypted images with similar content to the user. To find the nearest neighbor in the encrypted features, an encryption with multiple keys is proposed, in which the query feature of each user is encrypted by his/her own key. To improve the key security and space utilization, global optimization and Gaussian distribution are, respectively, employed to generate multiple keys. The experiments show that the proposed encryption can provide effective and secure image retrieval for each user and ensure confidentiality of the query feature of each user.
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is historic and artistic property that has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from arbitrary viewpoints onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image's intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
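A minimal sketch of the final colorization step once the pose is known: project each 3-D point through a pinhole model and sample the image color. K, R and t are assumed to come from the pose/intrinsics estimation described above, and occlusion handling is omitted:

```python
# Sketch: pinhole projection of a point cloud into a registered image and
# color assignment; K, R, t are assumed known, occlusion is not handled.
import numpy as np

def colorize_points(points_xyz, image_rgb, K, R, t):
    """points_xyz: (N, 3); image_rgb: (H, W, 3); K: (3, 3) intrinsics;
    R, t: world-to-camera rotation (3, 3) and translation (3,)."""
    cam = points_xyz @ R.T + t                 # world -> camera coordinates
    in_front = cam[:, 2] > 0                   # keep points ahead of the camera
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]            # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image_rgb.shape[:2]
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_xyz), 3), dtype=image_rgb.dtype)
    colors[visible] = image_rgb[v[visible], u[visible]]
    return colors, visible
```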
The interpretation of remotely sensed cloud properties from a model parameterization perspective
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.
1994-01-01
A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis uses several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse-resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.
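A toy illustration of the suggested parameterization idea: if partly cloudy stratocumulus scenes carry a roughly constant local-mean LWP, a grid-box cloud fraction can be diagnosed from the model's grid-box-mean LWP. The characteristic local value below is a placeholder, not a number derived in the study:

```python
# Toy diagnostic cloud-fraction parameterization from grid-box-mean LWP;
# the characteristic local LWP is an assumed placeholder value.
def diagnose_cloud_fraction(lwp_gridbox_mean_gm2, lwp_local_gm2=60.0):
    """Both liquid water paths in g m^-2; returns a fraction in [0, 1]."""
    return min(1.0, max(0.0, lwp_gridbox_mean_gm2 / lwp_local_gm2))

print(diagnose_cloud_fraction(30.0))   # -> 0.5 under the assumed local LWP
```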
MONET: multidimensional radiative cloud scene model
NASA Astrophysics Data System (ADS)
Chervet, Patrick
1999-12-01
All cloud fields exhibit variable structures (bulges) and heterogeneities in water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for several scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types)... For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining specifications of new imaging sensors.
Risk and reward in the cloud. Choosing a cloud vendor involves weighing risks versus benefits.
Degaspari, John
2012-05-01
More hospitals are looking to the cloud as a viable way to store clinical, imaging, and financial data. Experts acknowledge its advantages, but caution it's a step that requires careful planning and vetting of potential cloud vendors.
Use of MODIS Cloud Top Pressure to Improve Assimilation Yields of AIRS Radiances in GSI
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi
2014-01-01
Radiances from hyperspectral sounders such as the Atmospheric Infrared Sounder (AIRS) are routinely assimilated both globally and regionally in operational numerical weather prediction (NWP) systems using the Gridpoint Statistical Interpolation (GSI) data assimilation system. However, only thinned, cloud-free radiances from a 281-channel subset are used, so the overall percentage of these observations that are assimilated is on the order of 5%. Cloud checks are performed within GSI to determine which channels peak above cloud top; inaccuracies may lead to fewer assimilated radiances or the introduction of biases from cloud-contaminated radiances. The relatively large AIRS footprint may not optimally represent small-scale cloud features that might be better resolved by higher-resolution imagers like the Moderate Resolution Imaging Spectroradiometer (MODIS). The objective of this project is to "swap" the MODIS-derived cloud top pressure (CTP) for that designated by the AIRS-only quality control within GSI, to test the hypothesis that better representation of cloud features will result in higher assimilated radiance yields and improved forecasts.
2017-11-30
See Jovian clouds in striking shades of blue in this new view taken by NASA's Juno spacecraft. The Juno spacecraft captured this image when the spacecraft was only 11,747 miles (18,906 kilometers) from the tops of Jupiter's clouds -- that's roughly as far as the distance between New York City and Perth, Australia. The color-enhanced image, which captures a cloud system in Jupiter's northern hemisphere, was taken on Oct. 24, 2017 at 10:24 a.m. PDT (1:24 p.m. EDT) when Juno was at a latitude of 57.57 degrees (nearly three-fifths of the way from Jupiter's equator to its north pole) and performing its ninth close flyby of the gas giant planet. The spatial scale in this image is 7.75 miles/pixel (12.5 kilometers/pixel). Because of the Juno-Jupiter-Sun angle when the spacecraft captured this image, the higher-altitude clouds can be seen casting shadows on their surroundings. The behavior is most easily observable in the whitest regions in the image, but also in a few isolated spots in both the bottom and right areas of the image. Citizen scientists Gerald Eichstädt and Seán Doran processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21972
NASA Astrophysics Data System (ADS)
Bailey, T. L.; Sutherland-Montoya, D.
2015-12-01
High resolution topographic analysis methods have become important tools in geomorphology. Structure from Motion photogrammetry offers a compelling vehicle for geomorphic change detection in fluvial environments. This process can produce arbitrarily high resolution, geographically registered spectral and topographic coverages from a collection of overlapping digital imagery from consumer cameras. Cuneo Creek has had three historically observed episodes of rapid aggradation (1955, 1964, and 1997). The debris flow deposits continue to be major sources of sediment sixty years after the initial slope failure. Previous studies have monitored the sediment storage volume and particle size since 1976 (in 1976, 1982, 1983, 1985, 1986, 1987, 1998, 2003). We reoccupied 3 previously surveyed stream cross sections on Sept 30, 2014 and March 30, 2015, and produced photogrammetric point clouds using a pole mounted camera with a remote view finder to take nadir view images from 4.3 meters above the channel bed. Ground control points were registered using survey grade GPS and typical cross sections used over 100 images to build the structure model. This process simultaneously collects channel geometry and we used it to also generate surface texture metrics, and produced DEMs with point cloud densities above 5000 points / m2. In the period between the surveys, a five year recurrence interval discharge of 20 m3/s scoured the channel. Surface particle size distribution has been determined for each observation period using image segmentation algorithms based on spectral distance and compactness. Topographic differencing between the point clouds shows substantial channel bed mobilization and reorganization. The net decline in sediment storage is in excess of 4 x 10^5 cubic meters since the 1964 aggradation peak, with associated coarsening of surface particle sizes. These new methods provide a promising rapid assessment tool for measurement of channel responses to sediment inputs.
Extending MODIS Cloud Top and Infrared Phase Climate Records with VIIRS and CrIS
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Platnick, S. E.; Ackerman, S. A.; Holz, R.; Meyer, K.; Frey, R.; Wind, G.; Li, Y.; Botambekov, D.
2015-12-01
The MODIS imagers on the NASA EOS Terra and Aqua satellites have generated accurate and well-used cloud climate data records for 15 years. Both missions are expected to continue until the end of this decade and perhaps beyond. The Visible and Infrared Imaging Radiometer Suite (VIIRS) imagers on the Suomi-NPP (SNPP) mission (launched in October 2011) and future NOAA Joint Polar Satellite System (JPSS) platforms are the successors for imager-based cloud climate records from polar orbiting satellites after MODIS. To ensure product continuity across a broad suite of EOS products, NASA has funded a SNPP science team to develop EOS-like algorithms that can be used with SNPP and JPSS observations, including two teams to work on cloud products. Cloud data record continuity between MODIS and VIIRS is particularly challenging due to the lack of VIIRS CO2-slicing channels, which reduces information content for cloud detection and cloud-top property products, as well as downstream cloud optical products that rely on both. Here we report on our approach to providing continuity specifically for the MODIS/VIIRS cloud-top and infrared-derived thermodynamic phase products by combining elements of the NASA MODIS science team (MOD) and the NOAA Algorithm Working Group (AWG) algorithms. The combined approach is referred to as the MODAWG processing package. In collaboration with the NASA Atmospheric SIPS located at the University of Wisconsin Space Science and Engineering Center, the MODAWG code has been exercised on one year of SNPP VIIRS data. In addition to cloud-top and phase, MODAWG provides a full suite of cloud products that are physically consistent with MODIS and have a similar data format. Further, the SIPS has developed tools to allow use of Cross-track Infrared Sounder (CrIS) observations in the MODAWG processing that can ameliorate the loss of the CO2 absorption channels on VIIRS. Examples will be given that demonstrate the positive impact that the CrIS data can provide when combined with VIIRS for cloud height and IR-phase retrievals.
Wave clouds over the Central African Republic
2016-02-04
On January 27, 2016, the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA’s Aqua satellite passed over the Central African Republic and captured a true-color image of wave clouds rippling over a fire-speckled landscape. Wave clouds typically form when a mountain, island, or even another mass of air forces an air mass to rise, then fall again, in a wave pattern. The air cools as it rises, and if there is moisture in the air, the water condenses into clouds at the top of the wave. As the air begins to sink, it warms and the cloud dissipates. The result is a line of clouds marking the crests of the wave, separated by clear areas in the troughs of the wave. In addition to the long lines of clouds stretching across the central section of the country, clouds appear to line up in parallel rows near the border of the Democratic Republic of the Congo. In this area, small sets of grayish cloud appear to be lined up with the prevailing wind, judging by the plumes of smoke rising from red hotspots near each set of clouds. Clouds like these, aligned in rows parallel to the prevailing wind, are known as “cloud streets”. Each red “hotspot” marks an area where the thermal sensors on the MODIS instrument detected high temperatures. When accompanied by typical smoke, such hotspots are diagnostic for actively burning fires. Given the time of the year, the widespread nature, and the location of the fires, they are almost certainly agricultural fires that have been deliberately set to manage land. Image Credit: Jeff Schmaltz, MODIS Land Rapid Response Team, NASA GSFC
Images from Galileo of the Venus cloud deck
Belton, M.J.S.; Gierasch, P.J.; Smith, M.D.; Helfenstein, P.; Schinder, P.J.; Pollack, James B.; Rages, K.A.; Ingersoll, A.P.; Klaasen, K.P.; Veverka, J.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Fanale, F.P.; Greeley, R.; Greenberg, R.; Head, J. W.; Morrison, D.; Neukum, G.; Pilcher, C.B.
1991-01-01
Images of Venus taken at 418 (violet) and 986 [near-infrared (NIR)] nanometers show that the morphology and motions of large-scale features change with depth in the cloud deck. Poleward meridional velocities, seen in both spectral regions, are much reduced in the NIR. In the south polar region the markings in the two wavelength bands are strongly anticorrelated. The images follow the changing state of the upper cloud layer downwind of the subsolar point, and the zonal flow field shows a longitudinal periodicity that may be coupled to the formation of large-scale planetary waves. No optical lightning was detected.
Study of the thermodynamic phase of hydrometeors in convective clouds in the Amazon Basin
NASA Astrophysics Data System (ADS)
Ferreira, W. C.; Correia, A. L.; Martins, J.
2012-12-01
Aerosol-cloud interactions are responsible for large uncertainties in climate models. One key factor when studying clouds perturbed by aerosols is determining the thermodynamic phase of hydrometeors as a function of temperature or height in the cloud. Conventional remote sensing can provide information on the thermodynamic phase of clouds over large areas, but it lacks the precision needed to understand how a single, real cloud evolves. Here we present mappings of the thermodynamic phase of droplets and ice particles in individual convective clouds in the Amazon Basin, obtained by analyzing the emerging infrared radiance on cloud sides (Martins et al., 2011). In flights over the Amazon Basin with a research aircraft, Martins et al. (2011) used imaging radiometers with spectral filters to record the emerging radiance on cloud sides at the wavelengths of 2.10 and 2.25 μm. Due to differential absorption and scattering of these wavelengths by hydrometeors in the liquid or solid phase, the intensity ratio between images recorded at the two wavelengths can be used as a proxy for the thermodynamic phase of these hydrometeors. To analyze the acquired dataset we used MATLAB tools, developing scripts to handle data files and derive the thermodynamic phase. In some cases parallax effects due to aircraft movement required additional data processing before calculating ratios. Only well illuminated scenes were considered, i.e. images acquired as close as possible to the backscatter vector of the incident solar radiation. It is important to note that the intensity ratio values corresponding to a given thermodynamic phase can vary from cloud to cloud (Martins et al., 2011); however, inside the same cloud the distinction between ice, water and mixed phase is clear. Analyzing histograms of 2.10/2.25 μm reflectance ratios in selected cases, we found averages typically between 0.3 and 0.4 for ice-phase hydrometeors, and between 0.5 and 0.7 for water-phase droplets, consistent with the findings in Martins et al. (2011). Figure 1 shows an example of the thermodynamic phase classification obtained with this technique. These experimental results can potentially be used in fast derivations of thermodynamic phase mappings in deep convective clouds, providing useful information for studies regarding aerosol-cloud interactions. (Figure 1: image of the ratio of reflectances at 2.10/2.25 μm.)
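A minimal sketch of the ratio-based classification implied by the histogram analysis above; the cut points are placed between the quoted averages (0.3-0.4 for ice, 0.5-0.7 for liquid) and are illustrative, since the abstract notes that the ratios vary from cloud to cloud:

```python
# Illustrative phase map from a 2.10/2.25 um reflectance ratio; the cut
# points between ice, mixed and liquid classes are assumed placeholders.
import numpy as np

def classify_phase(r_210, r_225, ice_max=0.45, liquid_min=0.50):
    """r_210, r_225: cloud-side images at 2.10 and 2.25 um.
    Returns 0 for ice, 1 for mixed/indeterminate, 2 for liquid."""
    r_210 = np.asarray(r_210, dtype=float)
    r_225 = np.asarray(r_225, dtype=float)
    ratio = np.divide(r_210, r_225, out=np.zeros_like(r_210), where=r_225 > 0)
    phase = np.full(ratio.shape, 1, dtype=np.int8)   # default: mixed
    phase[ratio <= ice_max] = 0                      # ice-dominated
    phase[ratio >= liquid_min] = 2                   # liquid-dominated
    return phase
```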
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA02171 Cloud Front These clouds formed in the south polar region. The faintness of the cloud system likely indicates that these are mainly ice clouds, with relatively little dust content. Image information: VIS instrument. Latitude -86.7N, Longitude 212.3E. 17 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Garland, Justin; Sayanagi, Kunio M.; Blalock, John J.; Gunnarson, Jacob; McCabe, Ryan M.; Gallego, Angelina; Hansen, Candice; Orton, Glenn S.
2017-10-01
We present an analysis of the spatial scales contained in the cloud morphology of Jupiter’s southern high latitudes using images captured by JunoCam in 2016 and 2017, and compare them to those on Saturn using images captured by the Imaging Science Subsystem (ISS) on board the Cassini orbiter. For Jupiter, the characteristic spatial scale of cloud morphology as a function of latitude is calculated from images taken in three visible (600-800, 500-600, 420-520 nm) bands and a near-infrared (880-900 nm) band. In particular, we analyze the transition from the banded structure characteristic of Jupiter’s mid-latitudes to the chaotic structure of the polar region. We apply a similar analysis to Saturn using images captured by Cassini ISS. In contrast to Jupiter, Saturn maintains its zonally organized cloud morphology from low latitudes up to the poles, culminating in the cyclonic polar vortices centered at each of the poles. By quantifying the differences in the spatial scales contained in the cloud morphology, our analysis will shed light on the processes that control the banded structures on Jupiter and Saturn. Our work has been supported by the following grants: NASA PATM NNX14AK07G, NASA MUREP NNX15AQ03A, and NSF AAG 1212216.
Cloud computing and patient engagement: leveraging available technology.
Noblin, Alice; Cortelyou-Ward, Kendall; Servan, Rosa M
2014-01-01
Cloud computing technology has the potential to transform medical practices and improve patient engagement and quality of care. However, issues such as privacy and security and "fit" can make incorporation of the cloud an intimidating decision for many physicians. This article summarizes the four most common types of clouds and discusses their ideal uses, how they engage patients, and how they improve the quality of care offered. This technology also can be used to meet Meaningful Use requirements 1 and 2; and, if speculation is correct, the cloud will provide the necessary support needed for Meaningful Use 3 as well.
Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane
2012-01-01
Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code developed on a local prototype platform, and then transitioning this code with its associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
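The band-math steps named in this abstract (NDVI, NDMI) are simple per-pixel operations; the sketch below shows the arithmetic in Python with NumPy. The band arrays are hypothetical placeholders for tiles pulled from cloud storage, not the project's actual open-source process code.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    denom = a + b
    return np.divide(a - b, denom, out=np.zeros_like(a), where=denom != 0)

# NDVI uses near-infrared and red reflectance; NDMI uses NIR and shortwave IR.
# The arrays here are synthetic stand-ins for image tiles.
nir = np.random.rand(256, 256)
red = np.random.rand(256, 256)
swir = np.random.rand(256, 256)

ndvi = normalized_difference(nir, red)
ndmi = normalized_difference(nir, swir)
```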
1988-08-08
A recent Hubble Space Telescope (HST) view reveals Uranus surrounded by its 4 major rings and 10 of its 17 known satellites. This false color image was generated by Erich Karkoschka using data taken with Hubble's Near Infrared Camera and Multi-Object Spectrometer. The HST recently found about 20 clouds. The colors in the image indicate altitude. The green and blue regions show where the atmosphere is clear and can be penetrated by sunlight. In yellow and grey regions, the sunlight reflects from a higher haze or cloud layer. The orange and red colors indicate very high clouds, such as cirrus clouds on Earth.
NASA Technical Reports Server (NTRS)
2002-01-01
[figures removed for brevity, see original site; Figure 1: AIRS channel 2333 (2616 cm-1); Figure 2: HSB channel 2 (150 GHz)] Three different views of Hurricane Isidore from the Atmospheric Infrared Sounder (AIRS) on Aqua. At the time Aqua passed over Isidore, it was classified as a Category 3 (possibly 4) hurricane, with a minimum pressure of 934 mbar, maximum sustained wind speeds of 110 knots (gusting to 135) and an eye diameter of 20 nautical miles. Isidore was later downgraded to a Tropical Storm before gathering strength again. This is a visible/near-infrared image, made with the AIRS instrument. Its 2 km resolution shows fine details of the cloud structure, and can be used to help interpret the other images. For example, some relatively cloud-free regions in the eye of the hurricane can be distinguished. This image was made with wavelengths slightly different than those seen by the human eye, causing plants to appear very red. Figure 1 shows high and cold clouds in blue. Figure 2 shows heavy rain cells over Alabama in blue. This image shows the swirling clouds in white and the water of the Gulf of Mexico in blue. The eye of the hurricane is apparent in all three images. Figure 1 shows how the hurricane looks through an AIRS infrared window channel. Window channels measure the temperature of the cloud tops or the surface of the Earth in clear regions. The lowest temperatures are over Alabama and are associated with high, cold cloud tops at the end of the cloud band streaming from the hurricane. Although the eye is visible, it does not appear to be completely cloud free. Figure 2 shows the hurricane as seen through a microwave channel of the Humidity Sounder for Brazil (HSB). This channel is sensitive to humidity, clouds and rain. Unlike the AIRS infrared channel, it can penetrate through cloud layers and therefore reveals some of the internal structure of the hurricane. In this image, the green and yellow colors indicate clouds and heavy moisture, while blue indicates scattering by precipitation in intense convection. Orange indicates warm, moist air near the surface. The ocean surface, could it be seen, would appear slightly colder (yellow to green) due to the relatively low emissivity of water. Three sets of eye walls are apparent, and a number of intense convective cells can also be distinguished. In the near future, weather data derived from these images will allow us to improve our forecasts and track the paths of hurricanes more accurately. The AIRS sounding system provides 2400 such images, or channels, continuously. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Morgan, Matthew B; Young, Elizabeth; Harada, Scott; Winkler, Nicole; Riegert, Joanna; Jones, Tony; Hu, Nan; Stein, Matthew
2017-12-01
In screening mammography, accessing prior examination images is crucial for accurate diagnosis and avoiding false-positives. When women visit multiple institutions for their screens, these "outside" examinations must be retrieved for comparison. Traditionally, prior images are obtained by faxing requests to other institutions and waiting for standard mail (film or CD-ROM), which can greatly delay report turnaround times. Recently, advancements in cloud-based image transfer technology have opened up more efficient options for examination transfer between institutions. The objective of this study was to evaluate the effect of cloud-based image transfer on mammography department workflow, time required to obtain prior images, and report turnaround times. Sixty screening examinations requiring prior images were placed into two groups (30 each). The control group used the standard institutional protocol for requesting prior images: faxing requests and waiting for mailed examinations. The experimental group used a cloud-based transfer for both requesting and receiving examinations. The mean number of days between examination request and examination receipt was measured for both groups and compared. The mean number of days from examination request to receipt was 6.08 days (SD 3.50) in the control group compared with 3.16 days (SD 3.95) in the experimental group. Using a cloud-based image transfer to obtain prior mammograms resulted in an average reduction of 2.92 days (P = .0361; 95% confidence interval 0.20-5.65) between examination request and receipt. This improvement in system efficiency is relevant for interpreting radiologists working to improve reporting times and for patients anxious to receive their mammography results. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
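For readers who want to reproduce the kind of comparison reported here from the published summary statistics alone, the sketch below runs a Welch two-sample t-test in Python. The abstract does not state which test the authors used, so this is only an approximation under the assumption of roughly normal turnaround-time distributions.

```python
from scipy import stats

# Summary statistics as reported in the abstract (n = 30 per group).
mean_control, sd_control, n_control = 6.08, 3.50, 30
mean_cloud, sd_cloud, n_cloud = 3.16, 3.95, 30

# Welch's two-sample t-test from summary statistics; the paper's exact
# statistical procedure is not stated, so treat this as an approximation.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean_control, sd_control, n_control,
    mean_cloud, sd_cloud, n_cloud,
    equal_var=False)

print(f"difference = {mean_control - mean_cloud:.2f} days, p = {p_value:.4f}")
```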
Diurnal, Seasonal, and Interannual Variations of Cloud Properties Derived for CERES From Imager Data
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Young, David F.; Sun-Mack, Sunny; Trepte, Qing Z.; Chen, Yan; Brown, Richard R.; Gibson, Sharon; Heck, Patrick W.
2004-01-01
Simultaneous measurement of the radiation and cloud fields on a global basis is a key component in the effort to understand and model the interaction between clouds and radiation at the top of the atmosphere, at the surface, and within the atmosphere. The NASA Clouds and Earth's Radiant Energy System (CERES) Project, begun in 1998, is meeting this need. Broadband shortwave (SW) and longwave radiance measurements taken by the CERES scanners at resolutions between 10 and 20 km on the Tropical Rainfall Measuring Mission (TRMM), Terra, and Aqua satellites are matched to simultaneous retrievals of cloud height, phase, particle size, water path, and optical depth (OD) from the TRMM Visible Infrared Scanner (VIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua. Besides aiding the interpretation of the broadband radiances, the CERES cloud properties are valuable for understanding cloud variations at a variety of scales. In this paper, the resulting CERES cloud data taken to date are averaged at several temporal scales to examine the temporal and spatial variability of the cloud properties on a global scale at a 1° resolution.
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Cahalan, Robert F.; Remer, Lorraine A.; Kleidman, Richard G.
2007-01-01
3D aerosol-cloud interaction is examined by analyzing two images containing cumulus clouds in biomass burning regions in Brazil. The research consists of two parts. The first part focuses on identifying 3D cloud impacts on the reflectance of pixels selected for the MODIS aerosol retrieval based purely on observations. The second part of the research combines the observations with radiative transfer computations to identify key parameters in 3D aerosol-cloud interaction. We found that the 3D cloud-induced enhancement depends on the optical properties of nearby clouds as well as wavelength. The enhancement is too large to be ignored. The associated bias error in 1D aerosol optical thickness retrieval ranges from 50% to 140%, depending on wavelength, the optical properties of nearby clouds, and the aerosol optical thickness. We caution the community to be prudent when applying 1D approximations in computing solar radiation in clear regions adjacent to clouds or when using traditional retrieved aerosol optical thickness in aerosol indirect effect research.
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, S.; Gray, M. A.; Hubanks, P. A.
2004-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by NASA and launched onboard the Terra spacecraft on December 18, 1999 and the Aqua spacecraft on April 26, 2002. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from each polar-orbiting, sun-synchronous platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 μm with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). In this paper, we describe the radiative properties of clouds as currently determined from satellites (cloud fraction, optical thickness, cloud top pressure, and cloud effective radius), and highlight the global and regional cloud microphysical properties currently available for assessing climate variability and forcing. These include the latitudinal distribution of cloud optical and radiative properties of both liquid water and ice clouds, as well as joint histograms of cloud optical thickness and effective radius for selected geographical locations around the globe.
NASA Astrophysics Data System (ADS)
Skemer, Andrew J.; Hinz, Philip M.; Esposito, Simone; Burrows, Adam; Leisenring, Jarron; Skrutskie, Michael; Desidera, Silvano; Mesa, Dino; Arcidiacono, Carmelo; Mannucci, Filippo; Rodigas, Timothy J.; Close, Laird; McCarthy, Don; Kulesa, Craig; Agapito, Guido; Apai, Daniel; Argomedo, Javier; Bailey, Vanessa; Boutsia, Konstantina; Briguglio, Runa; Brusa, Guido; Busoni, Lorenzo; Claudi, Riccardo; Eisner, Joshua; Fini, Luca; Follette, Katherine B.; Garnavich, Peter; Gratton, Raffaele; Guerra, Juan Carlos; Hill, John M.; Hoffmann, William F.; Jones, Terry; Krejny, Megan; Males, Jared; Masciadri, Elena; Meyer, Michael R.; Miller, Douglas L.; Morzinski, Katie; Nelson, Matthew; Pinna, Enrico; Puglisi, Alfio; Quanz, Sascha P.; Quiros-Pacheco, Fernando; Riccardi, Armando; Stefanini, Paolo; Vaitheeswaran, Vidhya; Wilson, John C.; Xompero, Marco
2012-07-01
As the only directly imaged multiple planet system, HR 8799 provides a unique opportunity to study the physical properties of several planets in parallel. In this paper, we image all four of the HR 8799 planets at H band and 3.3 μm with the new Large Binocular Telescope adaptive optics system, PISCES, and LBTI/LMIRCam. Our images offer an unprecedented view of the system, allowing us to obtain H and 3.3 μm photometry of the innermost planet (for the first time) and put strong upper limits on the presence of a hypothetical fifth companion. We find that all four planets are unexpectedly bright at 3.3 μm compared to the equilibrium chemistry models used for field brown dwarfs, which predict that planets should be faint at 3.3 μm due to CH4 opacity. We attempt to model the planets with thick-cloudy, non-equilibrium chemistry atmospheres but find that removing CH4 to fit the 3.3 μm photometry increases the predicted L' (3.8 μm) flux enough that it is inconsistent with observations. In an effort to fit the spectral energy distribution of the HR 8799 planets, we construct mixtures of cloudy atmospheres, which are intended to represent planets covered by clouds of varying opacity. In this scenario, regions with low opacity look hot and bright, while regions with high opacity look faint, similar to the patchy cloud structures on Jupiter and L/T transition brown dwarfs. Our mixed-cloud models reproduce all of the available data, but self-consistent models are still necessary to demonstrate their viability. The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are as follows: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota, and University of Virginia.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base virtual machine images or customized virtual machines, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g., images, port numbers, usable cloud capacity) of each project in advance based on communications between ECITE and the participating projects, and then the scientists or IT technicians in those projects launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without worrying about the heterogeneity in structure and operations among different cloud platforms.
Cloud Properties of CERES-MODIS Edition 4 and CERES-VIIRS Edition 1
NASA Technical Reports Server (NTRS)
Sun-Mack, Sunny; Minnis, Patrick; Chang, Fu-Lung; Hong, Gang; Arduini, Robert; Chen, Yan; Trepte, Qing; Yost, Chris; Smith, Rita; Brown, Ricky;
2015-01-01
The Clouds and Earth's Radiant Energy System (CERES) analyzes Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) data to derive cloud properties that are combined with aerosol and CERES broadband flux data to create a multi-parameter data set for climate study. CERES has produced over 15 years of data from Terra and over 13 years of data from Aqua using the CERES-MODIS Edition-2 cloud retrieval algorithm. A recently revised algorithm, CERES-MODIS Edition 4, has been developed and is now generating enhanced cloud data for climate research (over 10 years for Terra and 8 years for Aqua). New multispectral retrievals of properties are included along with a multilayer cloud retrieval system. Cloud microphysical properties are reported at three wavelengths (0.65, 1.24, and 2.1 microns) to enable better estimates of the vertical profiles of cloud water content. Cloud properties over snow are retrieved using the 1.24-micron channel. A new CERES-VIIRS cloud retrieval package was developed for the VIIRS spectral complement and is currently producing the CERES-VIIRS Edition 1 cloud dataset. The results from CERES-MODIS Edition 4 and CERES-VIIRS Edition 1 are presented and compared with each other and with other datasets, including CALIPSO, CloudSat and the CERES-MODIS Edition-2 results.
Puckett, Yana; To, Alvin
2016-01-01
This study examines the inefficiencies of radiologic imaging transfers from one hospital to another during pediatric trauma transfers in an era of cloud-based information sharing. A retrospective review of all patients transferred to a pediatric trauma center from 2008-2014 was performed. Imaging was reviewed for whether it accompanied the patient, whether it could be uploaded onto the computer for the records, whether it had to be repeated, and whether imaging obtained at outside hospitals (OSH) was done per universal pediatric trauma guidelines. Of the 1761 patients retrospectively reviewed, 559 met our inclusion criteria. Imaging was sent with the patient 87.7% of the time. Imaging was unable to be uploaded 31.9% of the time. CT imaging had to be repeated 1.8% of the time. CT scans were not done per universal pediatric trauma guidelines 1.2% of the time. Our study demonstrated that the current imaging transfer process is inefficient, leads to excess ionizing radiation, and increases healthcare costs. Universal implementation of cloud-based radiology has the potential to eliminate excess ionizing radiation to children, improve patient care, and save costs to the healthcare system.
Venus winds at cloud level from VIRTIS during the Venus Express mission
NASA Astrophysics Data System (ADS)
Hueso, Ricardo; Peralta, Javier; Sánchez-Lavega, Agustín.; Pérez-Hoyos, Santiago; Piccioni, Giuseppe; Drossart, Pierre
2010-05-01
The Venus Express (VEX) mission has been in orbit around Venus for almost four years now. The VIRTIS instrument onboard VEX observes Venus in two channels (visible and infrared), obtaining spectra and multi-wavelength images of the planet. Images in the ultraviolet range are used to study the upper cloud at 66 km, while images in the infrared (1.74 μm) map the opacity of the lower cloud deck at 48 km. Here we present our latest results on the analysis of the global atmospheric dynamics at these cloud levels using a large selection over the full VIRTIS dataset. We will show the atmospheric zonal superrotation at these levels and the mean meridional motions. The zonal winds are very stable in the lower cloud from mid-latitudes to the tropics, while they show different signatures of variability in the upper cloud, where solar tide effects are manifest in the data. While the upper clouds present a net meridional motion consistent with the upper branch of a Hadley cell, the lower cloud presents almost null global meridional motions at all latitudes, but with particular features traveling both northwards and southwards in a turbulent manner depending on the cloud morphology in the observations. A particularly important atmospheric feature is the South Polar vortex, which might be influencing the structure of the zonal winds in the lower cloud at latitudes from the vortex location up to 55°S. Acknowledgements: This work has been funded by the Spanish MICIIN AYA2009-10701 with FEDER support and Grupos Gobierno Vasco IT-464-07.
Remote Sensing of Clouds for Solar Forecasting Applications
NASA Astrophysics Data System (ADS)
Mejia, Felipe
A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated from the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described and the functions for radiance, Iλ(τc, SZA, SPA, PZA), and the red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured Iλ,meas(SPA, PZA), in addition to RBRmeas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min [MH96a] method for overcast skies. τc values ranged from 0-80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients. Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients and Liquid Water Path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the extinction-coefficient (k) error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing distance up to 3 km (the domain length) improved the 3-D reconstruction for surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and the iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world cases in the University of California San Diego (UCSD) solar testbed. Five UCSD Sky Imagers (USIs) were installed across the testbed based on the best-performing distances in the simulations. Topographic obstruction is explored as a source of error by analyzing the increase in error with obstruction of the horizon in the field of view. As more of the horizon is obstructed, the error increases.
If a field of view of at least 70° is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from the images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full image, while errors in k increased by 1%. Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera. The image of the fifth camera (excluded from the reconstruction) was then simulated and found to have a 22.9% error compared to the ground truth.
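To make the key idea of the RRBR method concrete, namely that combining measured radiance with the (non-unique) red-blue ratio yields a unique τc, here is a toy lookup-table retrieval for a single pixel. The modeled curves are synthetic placeholders standing in for RTM output at a fixed geometry; they only reproduce the qualitative behaviour described above (radiance monotonic in τc, RBR peaking near τc ≈ 1) and are not taken from the thesis.

```python
import numpy as np

def retrieve_tau(i_meas, rbr_meas, tau_grid, i_model, rbr_model):
    """Toy RRBR-style lookup retrieval for one pixel.

    tau_grid, i_model and rbr_model are 1-D arrays of modeled radiance and
    red-blue ratio versus cloud optical depth for the pixel's geometry
    (placeholders for RTM output).  Because RBR alone is not unique in tau,
    the cost combines both the radiance and the RBR misfits.
    """
    cost = ((i_model - i_meas) ** 2 / np.var(i_model)
            + (rbr_model - rbr_meas) ** 2 / np.var(rbr_model))
    return tau_grid[np.argmin(cost)]

# Synthetic placeholder curves, qualitatively matching the described behaviour.
tau_grid = np.linspace(0.1, 80.0, 400)
i_model = 1.0 - np.exp(-0.15 * tau_grid)                   # radiance grows with tau
rbr_model = 0.5 + 0.4 * tau_grid * np.exp(1.0 - tau_grid)  # peaks near tau ~ 1

tau_hat = retrieve_tau(i_meas=0.60, rbr_meas=0.52,
                       tau_grid=tau_grid, i_model=i_model, rbr_model=rbr_model)
print(f"retrieved tau_c ~ {tau_hat:.1f}")
```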
Multilayer Cloud Detection with the MODIS Near-Infrared Water Vapor Absorption Band
NASA Technical Reports Server (NTRS)
Wind, Galina; Platnick, Steven; King, Michael D.; Hubanks, Paul A.; Pavolonis, Michael J.; Heidinger, Andrew K.; Yang, Ping; Baum, Bryan A.
2009-01-01
Data Collection 5 processing for the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the NASA Earth Observing System (EOS) Terra and Aqua spacecraft includes an algorithm for detecting multilayered clouds in daytime. The main objective of this algorithm is to detect multilayered cloud scenes, specifically optically thin ice cloud overlying a lower-level water cloud, that present difficulties for retrieving cloud effective radius using single-layer plane-parallel cloud models. The algorithm uses the MODIS 0.94 micron water vapor band along with CO2 bands to obtain two above-cloud precipitable water retrievals, the difference of which, in conjunction with additional tests, provides a map of where multilayered clouds might potentially exist. The presence of a multilayered cloud results in a large difference in the retrievals of above-cloud properties between the CO2 and the 0.94 micron methods. In this paper the MODIS multilayered cloud algorithm is described, results of using the algorithm over example scenes are shown, and global statistics for multilayered clouds as observed by MODIS are discussed. A theoretical study of the algorithm behavior for simulated multilayered clouds is also given. Results are compared to two other comparable passive imager methods. A set of standard cloudy atmospheric profiles developed during the course of this investigation is also presented. The results lead to the conclusion that the MODIS multilayer cloud detection algorithm has some skill in identifying multilayered clouds with different thermodynamic phases.
Satellite images to aircraft in flight [GOES image transmission feasibility analysis]
NASA Technical Reports Server (NTRS)
Camp, D.; Luers, J. K.; Kadlec, P. W.
1977-01-01
A study has been initiated to evaluate the feasibility of transmitting selected GOES images to aircraft in flight. Pertinent observations that could be made from satellite images on board aircraft include jet stream activity, cloud/wind motion, cloud temperatures, tropical storm activity, and location of severe weather. The basic features of the Satellite Aircraft Flight Environment System (SAFES) are described. This system uses East GOES and West GOES satellite images, which are interpreted, enhanced, and then retransmitted to designated aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing time in the 4D treatment plan, which requires Monte Carlo dose calculations for all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected for simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in this range.
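The diminishing return with node count reported in the conclusion is the familiar behaviour of a parallel Monte Carlo stage combined with a serial stage (dose reconstruction, data staging). A hypothetical scaling sketch, with made-up timings rather than the study's measurements, shows why an optimum appears at a modest node count:

```python
def wall_time(n_nodes, mc_time_single=600.0, serial_overhead=60.0):
    """Toy scaling model: the Monte Carlo part splits across nodes, while
    serial overhead (e.g., dose reconstruction, data staging) does not.
    All times are hypothetical, in minutes."""
    return mc_time_single / n_nodes + serial_overhead

for n in (1, 5, 10, 15, 30):
    print(n, round(wall_time(n), 1))
# Beyond roughly 10-15 nodes the serial term dominates and extra nodes add little.
```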
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code is hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
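As a rough illustration of the JSON-over-REST pattern described here (and not the platform's actual API, whose endpoints and schema are not given in the abstract), the sketch below builds a hypothetical JSON record for an NWChem-style calculation and prepares an HTTP POST with Python's standard library:

```python
import json
import urllib.request

# Hypothetical endpoint and schema, only to illustrate the JSON-over-REST
# pattern described in the abstract; the platform's real API may differ.
record = {
    "molecule": {"formula": "H2O", "atoms": [
        {"element": "O", "coords": [0.000, 0.000, 0.117]},
        {"element": "H", "coords": [0.000, 0.757, -0.469]},
        {"element": "H", "coords": [0.000, -0.757, -0.469]},
    ]},
    "calculation": {"code": "NWChem", "theory": "DFT", "basis": "6-31G*"},
}
payload = json.dumps(record).encode("utf-8")

req = urllib.request.Request(
    "https://example.org/api/v1/calculations",   # placeholder URL
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; the endpoint is fictional
```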
NASA Astrophysics Data System (ADS)
Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.
2012-07-01
Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has strongly changed in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). We want to focus in this paper on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow relies on the site configuration, the performances of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for model texturing. Several software packages are available, whether web-based, open source or commercial. The main advantage of this photogrammetric or computer-vision-based technology is that it provides, at the same time, a point cloud (whose resolution depends on the size of the pixel on the object) and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with additional radiometric information for textures. The discussion in this paper reviews recording and important processing steps such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).
NASA Astrophysics Data System (ADS)
Evrard, Rebecca L.; Ding, Yifeng
2018-01-01
Clouds play a large role in the Earth's global energy budget, but the impact of cirrus clouds is still widely questioned and researched. Cirrus clouds reside high in the atmosphere and due to cold temperatures are comprised of ice crystals. Gaining a better understanding of ice cloud optical properties and the distribution of cirrus clouds provides an explanation for the contribution of cirrus clouds to the global energy budget. Using radiative transfer models (RTMs), accurate simulations of cirrus clouds can enhance the understanding of the global energy budget as well as improve the use of global climate models. A newer, faster RTM such as the visible infrared imaging radiometer suite (VIIRS) fast radiative transfer model (VFRTM) is compared to a rigorous RTM such as the line-by-line radiative transfer model plus the discrete ordinates radiative transfer program. By comparing brightness temperature (BT) simulations from both models, the accuracy of the VFRTM can be obtained. This study shows root-mean-square error <0.2 K for BT difference using reanalysis data for atmospheric profiles and updated ice particle habit information from the moderate-resolution imaging spectroradiometer collection 6. At a higher resolution, the simulated results of the VFRTM are compared to the observations of VIIRS resulting in a <1.5 % error from the VFRTM for all cases. The VFRTM is validated and is an appropriate RTM to use for global cloud retrievals.
NASA Astrophysics Data System (ADS)
Kuji, M.; Hagiwara, M.; Hori, M.; Shiobara, M.
2017-12-01
Shipboard observations of cloud fraction were carried out along a round-trip research cruise between East Asia and Antarctica from November 2015 to April 2016, using a whole-sky camera and a ceilometer onboard the Research Vessel (R/V) Shirase. We retrieved cloud fraction from the whole-sky camera based on the brightness and color of the images, while we estimated cloud fraction from the ceilometer as a cloud frequency of occurrence. As a result, the average cloud fractions over the outward open ocean, the sea ice region, and the returning open ocean were approximately 56% (60%), 44% (64%), and 67% (72%), respectively, with the whole-sky camera (ceilometer). Comparing the daily-averaged cloud fractions from the whole-sky camera and the ceilometer, we found that the correlation coefficient was 0.73 for the 129 match-up dataset between East Asia and Antarctica, including the sea ice region as well as the open ocean. The results are qualitatively consistent between the two observations as a whole, but there is some underestimation by the whole-sky camera compared to the ceilometer. One of the reasons is possibly that the imager tends to miss optically thinner clouds that can be detected by the ceilometer. On the other hand, the difference in view angles between the imager and the ceilometer possibly affects the estimation. Therefore, it is necessary to elucidate the cloud properties with detailed match-up analyses in the future. Another future task is to compare the cloud fractions with satellite observations such as MODIS cloud products. Shipboard observations are in themselves very valuable for the validation of products from satellite observations, because we do not have many validation sites over the Southern Ocean and the sea ice region in particular.
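A minimal sketch of a brightness/colour-based cloud fraction, in the spirit of the whole-sky-camera retrieval described above (the study's actual algorithm and threshold are not given in the abstract), assuming an RGB sky image and a mask of valid sky pixels:

```python
import numpy as np

def cloud_fraction(rgb, sky_mask, rbr_threshold=0.9):
    """Estimate cloud fraction from a whole-sky RGB image.

    rgb is an (H, W, 3) array, sky_mask a boolean array that is True inside
    the sky dome (excluding horizon obstructions).  Pixels whose red/blue
    ratio exceeds the threshold are counted as cloud; the threshold is
    illustrative, not the value used in the study.
    """
    red = rgb[..., 0].astype(float)
    blue = rgb[..., 2].astype(float)
    rbr = np.divide(red, blue, out=np.zeros_like(red), where=blue > 0)
    cloudy = (rbr > rbr_threshold) & sky_mask
    return cloudy.sum() / sky_mask.sum()

# Synthetic example
img = np.random.rand(480, 640, 3)
mask = np.ones((480, 640), dtype=bool)
print(f"cloud fraction: {cloud_fraction(img, mask):.2f}")
```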
Global cloud database from VIRS and MODIS for CERES
NASA Astrophysics Data System (ADS)
Minnis, Patrick; Young, David F.; Wielicki, Bruce A.; Sun-Mack, Sunny; Trepte, Qing Z.; Chen, Yan; Heck, Patrick W.; Dong, Xiquan
2003-04-01
The NASA CERES Project has developed a combined radiation and cloud property dataset using the CERES scanners and matched spectral data from high-resolution imagers, the Visible Infrared Scanner (VIRS) on the Tropical Rainfall Measuring Mission (TRMM) satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua. The diurnal cycle can be well-characterized over most of the globe using the combinations of TRMM, Aqua, and Terra data. The cloud properties are derived from the imagers using state-of-the-art methods and include cloud fraction, height, optical depth, phase, effective particle size, emissivity, and ice or liquid water path. These cloud products are convolved into the matching CERES fields of view to provide simultaneous cloud and radiation data at an unprecedented accuracy. Results are available for at least 3 years of VIRS data and 1 year of Terra MODIS data. The various cloud products are compared with similar quantities from climatological sources and instantaneous active remote sensors. The cloud amounts are very similar to those from surface observer climatologies and are 6-7% less than those from a satellite-based climatology. Optical depths are 2-3 times smaller than those from the satellite climatology, but are within 5% of those from the surface remote sensing. Cloud droplet sizes and liquid water paths are within 10% of the surface results on average for stratus clouds. The VIRS and MODIS retrievals are very consistent with differences that usually can be explained by sampling, calibration, or resolution differences. The results should be extremely valuable for model validation and improvement and for improving our understanding of the relationship between clouds and the radiation budget.
NASA Astrophysics Data System (ADS)
Wang, C.; Platnick, S. E.; Meyer, K.; Ackerman, S. A.; Holz, R.; Heidinger, A.
2017-12-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi-NPP spacecraft is considered the next-generation instrument providing operational moderate-resolution imaging capabilities after the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua. However, cloud-top property (CTP) retrieval algorithms designed for the two instruments cannot be identical because of the absence of CO2 bands on VIIRS. In this study, we conduct a comprehensive sensitivity study of cloud retrievals utilizing an IR optimal estimation (IROE) based algorithm. With a fast IR radiative transfer model, the IROE simultaneously retrieves cloud-top height (CTH), cloud optical thickness (COT), cloud effective radius (CER) and the corresponding uncertainties using a set of IR bands. Three retrieval runs are implemented for this sensitivity study: retrievals using 1) three native VIIRS M-bands at 750 m resolution (8.5, 11, and 12 μm), 2) the three native VIIRS M-bands with spectrally integrated CO2 bands from the Cross-track Infrared Sounder (CrIS), and 3) six MODIS IR bands (8.5, 11, 12, 13.3, 13.6, and 13.9 μm). We select a few collocated MODIS and VIIRS granules for pixel-level comparison. Furthermore, aggregated daily and monthly cloud properties from the three runs are also compared. Results show that the combined VIIRS/CrIS run agrees well with the MODIS-only run except for pixels near cloud edges. The VIIRS-only run is close to its counterparts when clouds are optically thick. However, for optically thin clouds, the VIIRS-only run can be readily influenced by the initial guess, and large discrepancies and uncertainties are found for optically thin clouds in the VIIRS-only run.
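The IR optimal-estimation retrieval named here follows the standard Gauss-Newton update used in this class of algorithms. The sketch below is the textbook form of a single iteration (with placeholder forward-model and Jacobian callables), not the specific VIIRS/MODIS implementation:

```python
import numpy as np

def oe_step(x, y_obs, forward, jacobian, x_a, S_a, S_e):
    """One Gauss-Newton step of a generic optimal-estimation retrieval.

    x        current state (e.g., log COT, log CER, CTH)
    y_obs    observed IR brightness temperatures
    forward  callable returning simulated observations F(x)
    jacobian callable returning dF/dx at x
    x_a, S_a prior state and prior covariance
    S_e      observation-error covariance
    This is the textbook update, not the operational cloud-retrieval code.
    """
    K = jacobian(x)
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    hessian = S_a_inv + K.T @ S_e_inv @ K
    gradient = K.T @ S_e_inv @ (y_obs - forward(x)) - S_a_inv @ (x - x_a)
    x_new = x + np.linalg.solve(hessian, gradient)
    S_x = np.linalg.inv(hessian)   # posterior covariance -> reported uncertainties
    return x_new, S_x
```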
2010-06-16
ISS024-E-006136 (16 June 2010) --- Polar mesospheric clouds, illuminated by an orbital sunrise, are featured in this image photographed by an Expedition 24 crew member on the International Space Station. Polar mesospheric, or noctilucent ("night shining"), clouds are observed from both Earth's surface and in orbit by crew members aboard the space station. They are called night-shining clouds as they are usually seen at twilight. Following the setting of the sun below the horizon and the darkening of Earth's surface, these high clouds are still briefly illuminated by sunlight. Occasionally the ISS orbital track becomes nearly parallel to Earth's day/night terminator for a time, allowing polar mesospheric clouds to be visible to the crew at times other than the usual twilight due to the space station's altitude. This unusual photograph shows polar mesospheric clouds illuminated by the rising, rather than setting, sun at center right. Low clouds on the horizon appear yellow and orange, while higher clouds and aerosols are illuminated a brilliant white. Polar mesospheric clouds appear as light blue ribbons extending across the top of the image. These clouds typically occur at high latitudes of both the Northern and Southern Hemispheres, and at fairly high altitudes of 76-85 kilometers (near the boundary between the mesosphere and thermosphere atmospheric layers). The ISS was located over the Greek island of Kos in the Aegean Sea (near the southwestern coastline of Turkey) when the image was taken at approximately midnight local time. The orbital complex was tracking northeastward, nearly parallel to the terminator, making it possible to observe an apparent "sunrise" located almost due north. A similar unusual alignment of the ISS orbit track, terminator position, and seasonal position of Earth's orbit around the sun allowed for striking imagery of polar mesospheric clouds over the Southern Hemisphere earlier this year.
Outcome of the third cloud retrieval evaluation workshop
NASA Astrophysics Data System (ADS)
Roebeling, Rob; Baum, Bryan; Bennartz, Ralf; Hamann, Ulrich; Heidinger, Andy; Thoss, Anke; Walther, Andi
2013-05-01
Accurate measurements of global distributions of cloud parameters and their diurnal, seasonal, and interannual variations are needed to improve understanding of the role of clouds in the weather and climate system, and to monitor their time-space variations. Cloud properties retrieved from satellite observations, such as cloud vertical placement, cloud water path and cloud particle size, play an important role in such studies. In order to give climate and weather researchers more confidence in the quality of these retrievals, their validity needs to be determined and their error characteristics must be quantified. The purpose of the Cloud Retrieval Evaluation Workshop (CREW), held from 15-18 Nov. 2011 in Madison, Wisconsin, USA, is to enhance knowledge of state-of-the-art cloud property retrievals from passive imaging satellites, and to pave the path towards optimizing these retrievals for climate monitoring as well as for the analysis of cloud parameterizations in climate and weather models. CREW also seeks to observe and understand methods used to prepare daily and monthly cloud parameter climatologies. An important workshop component is discussion of the results of the algorithm and sensor comparisons and validation studies. To this end, a common database with about 12 different cloud property retrievals from passive imagers (MSG, MODIS, AVHRR, POLDER and/or AIRS), complemented with cloud measurements that serve as a reference (CLOUDSAT, CALIPSO, AMSU, MISR), was prepared for a number of "golden days". The passive imager cloud property retrievals were inter-compared and validated against Cloudsat, Calipso and AMSU observations. In our presentation we summarize the outcome of the inter-comparison and validation work done in the framework of CREW, and elaborate on reasons for the observed differences. More in-depth discussions were held on retrieval principles and validation, and on the utilization of cloud parameters for climate research. This was done in parallel breakout sessions on cloud vertical placement, cloud physical properties, and cloud climatologies. We present the recommendations of these sessions, propose a way forward to establish international partnerships on cloud research, and summarize actions defined to tailor CREW activities to missions of international programs, such as the Global Energy and Water Cycle Experiment (GEWEX) and Sustained, Co-Ordinated Processing of Environmental Satellite Data for Climate Monitoring (SCOPE-CM). Finally, attention is given to increasing the traceability and uniformity of different long-term and homogeneous records of cloud parameters.
Feeney, James M; Montgomery, Stephanie C; Wolf, Laura; Jayaraman, Vijay; Twohig, Michael
2016-09-01
Among transferred trauma patients, challenges with the transfer of radiographic studies include problems loading or viewing the studies at the receiving hospitals, and problems manipulating, reconstructing, or evaluating the transferred images. Cloud-based image transfer systems may address some of these problems. We reviewed the charts of patients transferred during one year surrounding the adoption of a cloud computing data transfer system. We compared the rates of repeat imaging before (precloud) and after (postcloud) the adoption of the cloud-based data transfer system. During the precloud period, 28 out of 100 patients required 90 repeat studies. With the cloud computing transfer system in place, three out of 134 patients required seven repeat films. There was a statistically significant decrease in the proportion of patients requiring repeat films (28% to 2.2%, P < .0001). Based on an annualized volume of 200 trauma patient transfers, the cost savings, estimated using three methods of cost analysis, is between $30,272 and $192,453.
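The reported drop from 28/100 to 3/134 patients needing repeat imaging can be checked from the counts alone; the abstract does not name the statistical test used, so the sketch below simply applies Fisher's exact test as one reasonable choice:

```python
from scipy import stats

# Patients needing repeat imaging, as reported: 28/100 pre-cloud vs 3/134 post-cloud.
table = [[28, 100 - 28],
         [3, 134 - 3]]

# Fisher's exact test on the 2x2 contingency table (assumption: the authors
# may have used a different test; the conclusion is the same either way).
odds_ratio, p_value = stats.fisher_exact(table)
print(f"p = {p_value:.6f}")
```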
H-alpha images of the Cygnus Loop - A new look at shock-wave dynamics in an old supernova remnant
NASA Technical Reports Server (NTRS)
Fesen, Robert A.; Kwitter, Karen B.; Downes, Ronald A.
1992-01-01
Attention is given to deep H-alpha images of portions of the east, west, and southwest limbs of the Cygnus Loop which illustrate several aspects of shock dynamics in a multiphase interstellar medium. An H-alpha image of the isolated eastern shocked cloud reveals cloud deformation and gas stripping along the cloud's edges, shock front diffraction and reflection around the rear of the cloud, and interior remnant emission due to upstream shock reflection. A faint Balmer-dominated filament is identified 30 arcmin further west of the remnant's bright line of western radiative filaments. This detection indicates a far more westerly intercloud shock front position than previously realized, and resolves the nature of the weak X-ray, optical, and nonthermal radio emission observed west of NGC 6960. Strongly curved Balmer-dominated filaments along the remnant's west and southwest edge may indicate shock diffraction caused by shock wave passage in between clouds.
NASA Astrophysics Data System (ADS)
Wilkinson, Mark; Beven, Keith; Brewer, Paul; El-khatib, Yehia; Gemmell, Alastair; Haygarth, Phil; Mackay, Ellie; Macklin, Mark; Marshall, Keith; Quinn, Paul; Stutter, Marc; Thomas, Nicola; Vitolo, Claudia
2013-04-01
Today's world is dominated by informatics tools that are readily available to a wide range of stakeholders. There is growing recognition that the appropriate involvement of local communities in land and water management decisions can result in multiple environmental, economic and social benefits. Therefore, local stakeholder groups are increasingly being asked to participate in decision making alongside policy makers, government agencies and scientists. As such, addressing flooding issues requires new ways of engaging with the catchment and its inhabitants at a local level. To support this, new tools and approaches are required. The growth of cloud-based technologies offers novel ways to facilitate this exchange of information in the earth sciences. The Environmental Virtual Observatory Pilot project (EVOp) is a new initiative from the UK Natural Environment Research Council (NERC) designed to deliver proof of concept for new tools and approaches to support the challenges outlined above (http://www.evo-uk.org/). The long-term vision of the Environmental Virtual Observatory is to: • Make environmental data more visible and accessible to a wide range of potential users including public good applications; • Provide tools to facilitate the integrated analysis of data, greater access to added knowledge and expert analysis and visualisation of the results; • Develop new, added-value knowledge from public and private sector data assets to help tackle environmental challenges. As part of the EVO pilot, an interactive cloud-based tool has been developed with local stakeholders. The Local Landscape Visualisation Tool attempts to communicate flood risk in impacted local communities. The tool has been developed iteratively to reflect the needs, interests and capabilities of a wide range of stakeholders. This tool (accessible via a web portal) combines numerous cloud-based tools and services, local catchment datasets, hydrological models and novel visualisation techniques. This pilot tool has been developed by engaging with different stakeholder groups in three catchments in the UK: the Afon Dyfi (Wales), the River Tarland (Scotland) and the River Eden (England). Stakeholders were interested in accessing live data in their catchments and in the effect of different land use change scenarios on flood peaks. Visualisation tools have been created which offer access to real time data (such as river level, rainfall and webcam images). Other tools allow land owners to use cloud-based models (the example presented here uses Topmodel, a rainfall-runoff model, on a custom virtual machine image on Amazon Web Services) and local datasets to explore future land use scenarios, allowing them to understand the associated flood risk. Different ways to communicate model uncertainty are currently being investigated and discussed with stakeholders. In summary, the pilot project has had positive feedback and has evolved into two distinct parts: a web-based map tool and a model interface tool. Users can view live data from different sources, combine different data types (data mash-up), develop local scenarios for land use and flood risk, and exploit the dynamic, elastic cloud modelling capability. This local toolkit will reside within a wider EVO platform that will include national and global datasets, models and state-of-the-art cloud computing systems.
Cloud computing in medical imaging.
Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R
2013-07-01
Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.
Computer image processing of up-draft flow motion and severe storm formation observed from satellite
NASA Technical Reports Server (NTRS)
Hung, R. J.; Smith, R. E.
1985-01-01
Special rapid-scan satellite visible and infrared observations have been used to study the life cycle of clouds, from the initiation of updraft flow motion in the atmosphere, the condensation of humid air, the formation of clouds, the development of towering cumulus, the penetration of the tropopause, and the collapse of an overshooting turret, to the dissipation of the cloud. The infrared image provides an indication of the equivalent blackbody temperature of the observed cloud tops. By referencing temperature, height and humidity profiles from rawinsonde observations, which provide the background meteorological data on air-mass instability, against the satellite infrared data sets at different times, the development of convective clouds can be studied in detail.
Gemini Planet Imager Spectroscopy of the HR 8799 Planets c and d
Ingraham, Patrick; Marley, Mark S.; Saumon, Didier; ...
2014-09-30
During the first-light run of the Gemini Planet Imager we obtained K-band spectra of exoplanets HR 8799 c and d. Analysis of the spectra indicates that planet d may be warmer than planet c. Comparisons to recent patchy cloud models and previously obtained observations over multiple wavelengths confirm that thick clouds combined with horizontal variation in the cloud cover generally reproduce the planets' spectral energy distributions. When combined with the 3 to 4 μm photometric data points, the observations provide strong constraints on the atmospheric methane content for both planets. Lastly, the data also provide further evidence that future modeling efforts must include cloud opacity, possibly including cloud holes, disequilibrium chemistry, and super-solar metallicity.
Observation of a cavitation cloud in tissue using correlation between ultrafast ultrasound images.
Prieur, Fabrice; Zorgani, Ali; Catheline, Stefan; Souchon, Rémi; Mestas, Jean-Louis; Lafond, Maxime; Lafon, Cyril
2015-07-01
The local application of ultrasound is known to improve drug intake by tumors. Cavitating bubbles are one of the contributing effects. A setup in which two ultrasound transducers are placed confocally is used to generate cavitation in ex vivo tissue. As the transducers emit a series of short excitation bursts, the evolution of the cavitation activity is monitored using an ultrafast ultrasound imaging system. The frame rate of the system is several thousands of images per second, which provides several tens of images between consecutive excitation bursts. Using the correlation between consecutive images for speckle tracking, a decorrelation of the imaging signal appears due to the creation, fast movement, and dissolution of the bubbles in the cavitation cloud. By analyzing this area of decorrelation, the cavitation cloud can be localized and the spatial extent of the cavitation activity characterized.
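The decorrelation analysis described above can be illustrated with a minimal NumPy sketch, assuming the beamformed frames are available as a (frames, rows, cols) array; the window size and the synthetic data below are illustrative assumptions, not the authors' processing chain.

    # Sketch: localize decorrelation between consecutive ultrafast ultrasound frames.
    # 'frames' is assumed to be a (n_frames, rows, cols) array of envelope data.
    import numpy as np

    def decorrelation_map(frames, win=8):
        """Mean (1 - correlation) over consecutive frame pairs, per local window."""
        n, rows, cols = frames.shape
        out = np.zeros((rows // win, cols // win))
        for k in range(n - 1):
            a, b = frames[k], frames[k + 1]
            for i in range(rows // win):
                for j in range(cols // win):
                    pa = a[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                    pb = b[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                    r = np.corrcoef(pa, pb)[0, 1]
                    out[i, j] += 1.0 - r
        # High values indicate strong decorrelation, i.e. likely cavitation activity.
        return out / (n - 1)

    # Example with synthetic data:
    frames = np.random.rand(10, 128, 128)
    dmap = decorrelation_map(frames)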
NASA Astrophysics Data System (ADS)
Sato, T.; Kasaba, Y.; Takahashi, Y.; Murata, I.; Uno, T.; Tokimasa, N.; Sakamoto, M.
2008-12-01
We conducted ground-based observations of Jupiter with a liquid crystal tunable filter (LCTF) and an EM-CCD camera in two methane absorption bands (700-757 nm and 872-950 nm at 3 nm steps; 47 wavelengths in total) to derive Jupiter's detailed vertical cloud structure. The 2-meter reflector telescope at Nishi-Harima Astronomical Observatory in Japan was used for our observations on 26-30 May 2008. After a series of image processing steps (composition of high-quality images at each wavelength and geometry calibration), we converted the observed intensity to absolute reflectivity at each pixel using a standard star. As a result, we acquired Jupiter data cubes with high spatial resolution (about 1") and narrow-band imaging (typically 7 nm) in each methane absorption band by superimposing 30 Jupiter images obtained with short exposure times (50 ms per image). These data sets enable us to probe different altitudes of Jupiter, from the 100 mbar level down to the 1 bar level, with higher vertical resolution than conventional interference filters. To interpret the observed center-to-limb profiles, we developed a radiative transfer code based on a layer adding-doubling algorithm to treat multiple scattering of solar light and extracted information on aerosol altitudes and optical properties using a two-cloud model. First, we fit 5 different profiles simultaneously in the continuum data (745-757 nm) to retrieve the optical thickness of the haze and the single-scattering albedo of the cloud. Second, we fit 15 different profiles around the 727 nm methane absorption band and 13 different profiles around the 890 nm methane absorption band to retrieve the aerosol altitude and the optical thickness of the cloud. In this presentation, we present the results of these modeling simulations and discuss the latitudinal variations of Jupiter's vertical cloud structure.
Jupiter's Northern Hemisphere in a Methane Band (Time Set 3)
NASA Technical Reports Server (NTRS)
1997-01-01
Mosaic of Jupiter's northern hemisphere between 10 and 50 degrees latitude. Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the color and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including large white ovals, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two interacting vortices in the upper half of the mosaic is about 3500 kilometers.
Light at 889 nanometers is strongly absorbed by atmospheric methane. This mosaic shows the features of a hazy cloud layer tens of kilometers above Jupiter's main visible cloud deck. This haze varies in height but appears to be present over the entire region. Small patches of very bright clouds may be similar to terrestrial thunderstorms. North is at the top. The images are projected on a sphere, with features being foreshortened towards the north. The planetary limb runs along the right edge of the mosaic. Cloud patterns appear foreshortened as they approach the limb. The smallest resolved features are tens of kilometers in size. These images were taken on April 3, 1997, at a range of 1.4 million kilometers by the Solid State Imaging system (CCD) on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Technical Reports Server (NTRS)
Albrecht, Bruce A.; Barlow, Roy W.
1990-01-01
Satellite images often show significant variations in the structure of marine stratocumulus clouds on scales ranging from 10 to 1000 km. This is illustrated where a GOES West satellite image shows a well-defined variation in cloud structure near 32 N, 122 W on 30 June 1987. Aircraft measurements were made with the UK C-130 and the NCAR Electra on this day as part of the FIRE Marine Stratocumulus Intensive Field Observations (IFO). The mean, turbulent, and microphysical structure of the clouds sampled in these two areas are compared, and an attempt is made to explain the differences in cloud structure. In an attempt to identify any systematic differences between the measurements made with the two aircraft, data were analyzed that were collected on 14 July 1987 with the C-130 and the Electra flying in close formation at an altitude of 250 m. The microphysical and turbulence data are being compared in an attempt to explain the differences in the cloud liquid water content obtained with the two aircraft and the differences in cloud structure shown by the GOES image. In addition, data are being analyzed for three other days during the experiment when coordinated downstream flights were made with the Electra and the C-130.
Xu, Zhen; Raghavan, Mekhala; Hall, Timothy L; Chang, Ching-Wei; Mycek, Mary-Ann; Fowlkes, J Brian; Cain, Charles A
2007-10-01
Our recent studies have demonstrated that mechanical fractionation of tissue structure with sharply demarcated boundaries can be achieved using short (< 20 micros), high intensity ultrasound pulses delivered at low duty cycles. We have called this technique histotripsy. Histotripsy has potential clinical applications where noninvasive tissue fractionation and/or tissue removal are desired. The primary mechanism of histotripsy is thought to be acoustic cavitation, which is supported by a temporally changing acoustic backscatter observed during the histotripsy process. In this paper, a fast-gated digital camera was used to image the hypothesized cavitating bubble cloud generated by histotripsy pulses. The bubble cloud was produced at a tissue-water interface and inside an optically transparent gelatin phantom which mimics bulk tissue. The imaging shows the following: (1) Initiation of a temporally changing acoustic backscatter was due to the formation of a bubble cloud; (2) The pressure threshold to generate a bubble cloud was lower at a tissue-fluid interface than inside bulk tissue; and (3) at higher pulse pressure, the bubble cloud lasted longer and grew larger. The results add further support to the hypothesis that the histotripsy process is due to a cavitating bubble cloud and may provide insight into the sharp boundaries of histotripsy lesions.
High Speed Imaging of Bubble Clouds Generated in Pulsed Ultrasound Cavitational Therapy—Histotripsy
Xu, Zhen; Raghavan, Mekhala; Hall, Timothy L.; Chang, Ching-Wei; Mycek, Mary-Ann; Fowlkes, J. Brian; Cain, Charles A.
2009-01-01
Our recent studies have demonstrated that mechanical fractionation of tissue structure with sharply demarcated boundaries can be achieved using short (<20 μs), high intensity ultrasound pulses delivered at low duty cycles. We have called this technique histotripsy. Histotripsy has potential clinical applications where noninvasive tissue fractionation and/or tissue removal are desired. The primary mechanism of histotripsy is thought to be acoustic cavitation, which is supported by a temporally changing acoustic backscatter observed during the histotripsy process. In this paper, a fast-gated digital camera was used to image the hypothesized cavitating bubble cloud generated by histotripsy pulses. The bubble cloud was produced at a tissue-water interface and inside an optically transparent gelatin phantom which mimics bulk tissue. The imaging shows the following: 1) Initiation of a temporally changing acoustic backscatter was due to the formation of a bubble cloud; 2) The pressure threshold to generate a bubble cloud was lower at a tissue-fluid interface than inside bulk tissue; and 3) at higher pulse pressure, the bubble cloud lasted longer and grew larger. The results add further support to the hypothesis that the histotripsy process is due to a cavitating bubble cloud and may provide insight into the sharp boundaries of histotripsy lesions. PMID:18019247
Jupiter Pearl and Swirling Cloud Tops
2017-01-19
This amateur-processed image was taken on Dec. 11, 2016, at 9:27 a.m. PST (12:27 p.m. EST), as NASA's Juno spacecraft performed its third close flyby of Jupiter. At the time the image was taken, the spacecraft was about 15,200 miles (24,400 kilometers) from the gas giant planet. The citizen scientist (Eric Jorgensen) cropped the JunoCam image and enhanced the color to draw attention to Jupiter's swirling clouds southeast of the "pearl." The "pearl" is one of eight massive rotating storms at 40 degrees south latitude on Jupiter, known colloquially as the "string of pearls." The processing of this image highlights the turbulence of the clouds in the south temperate belt of the planet. http://photojournal.jpl.nasa.gov/catalog/PIA21377
NASA Technical Reports Server (NTRS)
Eslinger, David L.; O'Brien, James J.; Iverson, Richard L.
1989-01-01
Empirical-orthogonal-function (EOF) analyses were carried out on 36 images of the Mid-Atlantic Bight and the Gulf of Maine, obtained by the CZCS aboard Nimbus 7 for the time period from February 28 through July 9, 1979, with the purpose of determining pigment concentrations in coastal waters. The EOF procedure was modified so as to include images with significant portions of data missing due to cloud obstruction, making it possible to estimate pigment values in areas beneath clouds. The results of image analyses explained observed variances in pigment concentrations and showed a south-to-north pattern corresponding to an April Mid-Atlantic Bight bloom and a June bloom over Nantucket Shoals and Platts Bank.
Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products
NASA Technical Reports Server (NTRS)
Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny
2015-01-01
To enhance cloud property retrievals, LaRC and CIRA each developed a combination algorithm for properties obtained from the passive, active, and imager instruments in the A-Train satellite constellation. When the global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) product shows a larger low-level cloud fraction over the tropical ocean, while the CIRA-produced Radar-Lidar (RL) product shows a larger mid-level cloud fraction at high latitudes. The difference in low-level cloud fraction is due to the different filtering methods applied to lidar-detected cloud layers, while the difference in mid-level clouds arises from the different priorities given to cloud boundaries derived from lidar and radar.
Clouds Sailing Overhead on Mars, Enhanced
2017-08-09
Wispy clouds float across the Martian sky in this accelerated sequence of enhanced images from NASA's Curiosity Mars rover. The rover's Navigation Camera (Navcam) took these eight images over a span of four minutes early in the morning of the mission's 1,758th Martian day, or sol (July 17, 2017), aiming nearly straight overhead. They have been processed by first making a "flat field' adjustment for known differences in sensitivity among pixels and correcting for camera artifacts due to light reflecting within the camera, and then generating an "average" of all the frames and subtracting that average from each frame. This subtraction results in emphasizing any changes due to movement or lighting. The clouds are also visible, though fainter, in a raw image sequence from these same observations. On the same Martian morning, Curiosity also observed clouds near the southern horizon. The clouds resemble Earth's cirrus clouds, which are ice crystals at high altitudes. These Martian clouds are likely composed of crystals of water ice that condense onto dust grains in the cold Martian atmosphere. Cirrus wisps appear as ice crystals fall and evaporate in patterns known as "fall streaks" or "mare's tails." Such patterns have been seen before at high latitudes on Mars, for instance by the Phoenix Mars Lander in 2008, and seasonally nearer the equator, for instance by the Opportunity rover. However, Curiosity has not previously observed such clouds so clearly visible from the rover's study area about five degrees south of the equator. The Hubble Space Telescope and spacecraft orbiting Mars have observed a band of clouds to appear near the Martian equator around the time of the Martian year when the planet is farthest from the Sun. With a more elliptical orbit than Earth's, Mars experiences more annual variation than Earth in its distance from the Sun. The most distant point in an orbit around the Sun is called the aphelion. The near-equatorial Martian cloud pattern observed at that time of year is called the "aphelion cloud belt." These new images from Curiosity were taken about two months before aphelion, but the morning clouds observed may be an early stage of the aphelion cloud belt. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21841
Hurricane Katrina as Observed by NASA's Spaceborne Atmospheric Infrared Sounder (AIRS)
NASA Technical Reports Server (NTRS)
2005-01-01
[Figure 1: AIRS microwave image] At 1:30 a.m. local time this morning, the remnants of (now Tropical Depression) Katrina were centered on the Mississippi-Tennessee border. This microwave image from the Atmospheric Infrared Sounder instrument on NASA's Aqua spacecraft shows that the area of most intense precipitation was concentrated to the north of the center of activity. The infrared image shows how the storms look through an AIRS infrared window channel. Window channels measure the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures are associated with high, cold cloud tops that make up the top of the hurricane. The infrared signal does not penetrate through clouds, so the purple color indicates the cool cloud tops of the storm. In cloud-free areas, the infrared signal is retrieved at the Earth's surface, revealing warmer temperatures. Cooler areas shade toward purple and warmer areas toward red. The microwave image (Figure 1) reveals where the heaviest precipitation in the hurricane is taking place. The blue areas within the storm show the location of this heavy precipitation. Blue areas outside of the storm, where there are moderate or no clouds, are where the cold (in the microwave sense) sea surface shines through. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramson, Anne; Kenney, Jeffrey D. P., E-mail: anne.abramson@yale.edu, E-mail: jeff.kenney@yale.edu
We present the highest-resolution study to date of the interstellar medium (ISM) in galaxies undergoing ram pressure stripping, using Hubble Space Telescope BVI imaging of NGC 4522 and NGC 4402, Virgo Cluster spirals that are well known to be experiencing intracluster medium (ICM) ram pressure. We find that throughout most of both galaxies, the main dust lane has a fairly well-defined edge, with a population of giant molecular cloud (GMC) sized (tens- to hundreds-of-pc scale), isolated, highly extincting dust clouds located up to ∼1.5 kpc radially beyond it. Outside of these dense clouds, the area has little or no diffuse dust extinction, indicating that the clouds have decoupled from the lower-density ISM material that has already been stripped. Several of the dust clouds have elongated morphologies that indicate active ram pressure, including two large (kpc scale) filaments in NGC 4402 that are elongated in the projected ICM wind direction. We calculate a lower limit on the H I + H2 masses of these clouds based on their dust extinctions and find that a correction factor of ∼10 gives cloud masses consistent with those measured in CO for clouds of similar diameters, probably due to the complicating factors of foreground light, cloud substructure, and resolution limitations. Assuming that the clouds' actual masses are consistent with those of GMCs of similar diameters (∼10^4-10^5 M☉), we estimate that only a small fraction (∼1%-10%) of the original H I + H2 remains in the parts of the disks with decoupled clouds. Based on Hα images, a similar fraction of star formation persists in these regions, 2%-3% of the estimated pre-stripping star formation rate. We find that the decoupled cloud lifetimes may be up to 150-200 Myr.
A New View on Jupiter's North Pole
2018-03-07
This computer-generated image is based on an infrared image of Jupiter's north polar region that was acquired on February 2, 2017, by the Jovian Infrared Auroral Mapper (JIRAM) instrument aboard Juno during the spacecraft's fourth pass over Jupiter. The image shows the structure of the cyclonic pattern observed over Jupiter's north pole: a central cyclone surrounded by eight circumpolar cyclones with diameters ranging from 2,500 to 2,900 miles (4,000 to 4,600 kilometers). JIRAM is able to collect images in the infrared wavelengths around 5 micrometers (µm) by measuring the intensity of the heat coming out of the planet. The heat from a planet that is radiated into space is called the radiance. This image is an enhancement of the original JIRAM image. In order to give the picture a 3-D shape, the enhancement starts from the idea that where the radiance has its highest value, there are no clouds and JIRAM can see deeper into the atmosphere. Consequently, all the other areas of the image are originally shaded more or less by clouds of different thickness. Then, to create these pictures, the originals have been inverted to give the thicker clouds a whitish color and a three-dimensional appearance like the clouds we normally see in Earth's atmosphere. https://photojournal.jpl.nasa.gov/catalog/PIA22336
Shocked and Scorched - Free-Floating Evaporating Gas Globules and Star Formation
NASA Astrophysics Data System (ADS)
Sahai, Raghvendra; Morris, Mark R.; Claussen, Mark J.
2014-07-01
Massive stars have a strong feedback effect on their environment, via their winds, UV radiation, and ultimately, supernova blast waves, all of which can alter the likelihood for the formation of stars in nearby clouds and limit the accretion process of nearby protostars. Free-floating Evaporating Gaseous Globules, or frEGGs, are a newly recognized class of stellar nurseries embedded within the giant HII regions found in massive star-formation regions (MSFRs). We recently discovered the prototype frEGG in the Cygnus MSFR with HST. Further investigation using the Spitzer and Herschel archives has revealed a much larger number (>50) of frEGGs in Cygnus and other MSFRs. Our molecular-line observations of these objects show the presence of dense clouds with total masses of cool molecular gas of 0.5 to a few Msun, thereby disproving the initial hypothesis, based on their morphology, that they have an origin similar to the proplyds (cometary-shaped photoevaporating protoplanetary disks) found in Orion. We report the results of our molecular-line studies and detailed high-resolution optical (with HST) or near-IR (with AO at the Keck Observatory) imaging of a few frEGGs in the Cygnus, Carina and W5 MSFRs. The images show the presence of young stars with associated outflow cavities and/or jets in the heads of the tadpole-shaped frEGGs. These results support our hypothesis that frEGGs are density concentrations originating in giant molecular clouds that, when subjected to compression by the strong winds and ionization from massive stars in these MSFRs, become active star-forming cores. In summary, by virtue of their distinct, isolated morphologies, frEGGs offer us a clean probe of triggered star formation on small scales in the vicinity of massive stars.
High-Resolution Imaging of the Multiphase Interstellar Thick Disk in Two Edge-On Spiral Galaxies
NASA Astrophysics Data System (ADS)
Howk, J. Christopher; Rueff, K.
2009-01-01
We present broadband and narrow-band images, acquired with the Hubble Space Telescope WFPC2 and the WIYN 3.5 m telescope respectively, of two edge-on spiral galaxies, NGC 4302 and NGC 4013. These high-resolution images (BVI + H-alpha) provide a detailed view of the thick disk interstellar medium (ISM) in these galaxies. Both galaxies show prominent extraplanar dust-bearing clouds viewed in absorption against the background stellar light. Individual clouds are found up to z ≈ 2 kpc in each galaxy. These clouds each contain >10^4 to >10^5 solar masses of gas. Both galaxies have extraplanar diffuse ionized gas (DIG), as seen in our H-alpha images and earlier work. In addition to the DIG, discrete H II regions are found at heights up to 1 kpc in both galaxies. We compare the morphologies of the dusty clouds with the DIG in these galaxies and discuss the relationship between these components of the thick disk ISM.
NASA AIRS Examines Hurricane Matthew Cloud Top Temperatures
2016-10-07
At 11:29 p.m. PDT on Oct. 6 (2:29 a.m. EDT on Oct. 7), NASA's Atmospheric Infrared Sounder (AIRS) instrument on NASA's Aqua satellite produced this false-color infrared image of Matthew as the storm moved up Florida's central coast. The image shows the temperature of Matthew's cloud tops or the surface of Earth in cloud-free regions, with the most intense thunderstorms shown in purples and blues. http://photojournal.jpl.nasa.gov/catalog/PIA21097
NASA Astrophysics Data System (ADS)
Prigent, Catherine; Wang, Die; Aires, Filipe; Jimenez, Carlos
2017-04-01
Meteorological observations from satellites in the microwave domain are currently limited to frequencies below 190 GHz. However, the next-generation European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Polar System Second Generation (EPS-SG) will carry an instrument, the Ice Cloud Imager (ICI), with frequencies up to 664 GHz, to improve the characterization of the cloud frozen phase. In this paper, a statistical retrieval of cloud parameters for ICI is developed, trained on a synthetic database derived from the coupling of a mesoscale cloud model and radiative transfer calculations. The hydrometeor profiles simulated with the Weather Research and Forecasting model (WRF) for twelve diverse European mid-latitude situations are used to simulate the brightness temperatures with the Atmospheric Radiative Transfer Simulator (ARTS) to prepare the retrieval database. The WRF+ARTS simulations have been compared to Special Sensor Microwave Imager/Sounder (SSMIS) observations up to 190 GHz: this successful evaluation gives us confidence in the simulations at the ICI channels from 183 to 664 GHz. Statistical analyses have been performed on this simulated retrieval database, showing that it is not only physically realistic but also statistically satisfactory for retrieval purposes. A first Neural Network (NN) classifier is used to detect the cloud presence. A second NN is developed to retrieve the liquid and ice integrated cloud quantities over sea and land separately. The detection and retrieval of the hydrometeor quantities (i.e., ice, snow, graupel, rain, and liquid cloud) are performed with ICI only, and with ICI combined with observations from the MicroWave Imager (MWI, with frequencies from 19 to 190 GHz, also on board MetOp-SG). The ICI channels have been optimized for the detection and quantification of the cloud frozen phases: adding the MWI channels improves the performance for the vertically integrated hydrometeor contents, especially for the cloud liquid phase. The relative error for the retrieved integrated frozen water content (FWP, i.e., ice + snow + graupel) is below 40% for 0.1 kg/m2 < FWP < 0.5 kg/m2 and below 20% for FWP > 0.5 kg/m2.
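A minimal two-stage sketch with scikit-learn, assuming synthetic brightness temperatures; the channel count, network sizes and training data below are illustrative placeholders rather than the configuration used for the ICI retrieval described above.

    # Sketch of a two-stage statistical retrieval trained on a simulated database:
    # stage 1 detects cloud presence, stage 2 regresses integrated hydrometeor content.
    import numpy as np
    from sklearn.neural_network import MLPClassifier, MLPRegressor

    rng = np.random.default_rng(0)
    n, n_channels = 5000, 13                              # channel count is an assumption
    tb = rng.normal(250.0, 15.0, (n, n_channels))         # synthetic brightness temperatures
    cloudy = (tb[:, 0] < 245).astype(int)                 # synthetic cloud flag
    fwp = np.where(cloudy, np.abs(250 - tb[:, 0]) / 50, 0.0)  # synthetic content, kg/m2

    detector = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
    detector.fit(tb, cloudy)

    regressor = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
    regressor.fit(tb[cloudy == 1], fwp[cloudy == 1])

    # Retrieval on new scenes: regress only where the detector flags cloud.
    tb_new = rng.normal(250.0, 15.0, (10, n_channels))
    flag = detector.predict(tb_new)
    fwp_retrieved = np.where(flag == 1, regressor.predict(tb_new), 0.0)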
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey
2017-04-01
Shortwave radiation is an important component of the surface heat budget over sea and land. To estimate it, accurate observations of cloud conditions are needed, including total cloud cover and the spatial and temporal cloud structure. While cloud cover is widely observed visually, building accurate shortwave radiation parameterizations also requires that cloud structure be quantified using precise instrumental measurements. Several state-of-the-art land-based cloud cameras already satisfy researchers' needs, but their major disadvantage is the inaccuracy of all-sky image processing algorithms, which typically results in uncertainties of 2-4 octa in cloud cover estimates and a true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determines cloud type. We developed an approach for estimating cloud cover and structure that provides much more accurate estimates and also allows additional characteristics to be measured. This method is based on a synthetic controlling index, the "grayness rate index", that we introduced in 2014. Since then this index, used along with a "background sunburn effect suppression" technique to detect thin clouds, has demonstrated high efficiency. This made it possible to significantly increase the accuracy of total cloud cover estimation in various sky image states using this extension of the routine algorithm. Errors in the cloud cover estimates decreased significantly, resulting in a mean squared error of about 1.5 octa and a true-scoring accuracy of more than 38%. The main source of uncertainty in this approach is errors in determining the solar disk state. While a deep neural network approach lets us estimate the solar disk state with 94% accuracy, the final total cloud cover result is still not satisfying. To solve this problem we applied a set of machine learning algorithms directly to the problem of total cloud cover estimation. The accuracy of this approach varies with the choice of algorithm; deep neural networks demonstrated the best accuracy, at more than 96%. We will demonstrate some of these approaches and the most influential statistical features of all-sky images that let the algorithm reach such high accuracy. Using our new optical package, a set of over 480,000 samples was collected in several sea missions in 2014-2016, along with concurrent standard human-observed and instrumentally recorded meteorological parameters. We will present the results of the field measurements and discuss some remaining problems and the potential for further development of the machine learning approach.
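As a toy illustration only (not the authors' grayness rate index), the sketch below shows how a simple per-pixel color metric can separate near-gray cloudy pixels from blue clear-sky pixels in an all-sky image.

    # Toy illustration of a grayness-style pixel metric for all-sky images:
    # cloudy pixels tend to be near-gray (R ~ G ~ B), clear sky strongly blue.
    # This is NOT the "grayness rate index" of the study; it only sketches the idea.
    import numpy as np

    def grayness_fraction(rgb, threshold=0.1):
        """rgb: float array (H, W, 3) in [0, 1]. Returns (cloud fraction, cloud mask)."""
        r, b = rgb[..., 0], rgb[..., 2]
        # Normalized blue excess: near zero for gray (cloud), large for clear blue sky.
        blue_excess = (b - r) / (b + r + 1e-6)
        cloud_mask = blue_excess < threshold
        return cloud_mask.mean(), cloud_mask

    # Example on a synthetic image:
    img = np.random.rand(480, 640, 3)
    fraction, mask = grayness_fraction(img)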
NASA Astrophysics Data System (ADS)
Ikegawa, Shinichi; Horinouchi, Takeshi
2016-06-01
Accurate wind observation is key to studying atmospheric dynamics. A new automated cloud tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of the cross-correlation with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of these vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and it further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the Venusian atmosphere.
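A minimal sketch of the superposition idea described above, assuming a short sequence of co-registered image patches: circular cross-correlation surfaces from successive pairs are summed and the common peak gives the displacement per image interval. The FFT-based formulation and synthetic frames are illustrative assumptions, not the authors' implementation.

    # Sketch: superpose cross-correlation surfaces from several image pairs and take
    # the common peak as the cloud displacement (in pixels per image interval).
    import numpy as np

    def superposed_displacement(images):
        acc = np.zeros_like(images[0], dtype=float)
        for a, b in zip(images[:-1], images[1:]):
            fa = np.fft.fft2(a - a.mean())
            fb = np.fft.fft2(b - b.mean())
            cc = np.fft.ifft2(fa * np.conj(fb)).real   # circular cross-correlation surface
            acc += np.fft.fftshift(cc)                 # superpose; zero lag near the centre
        peak = np.unravel_index(np.argmax(acc), acc.shape)
        centre = (acc.shape[0] // 2, acc.shape[1] // 2)
        return peak[0] - centre[0], peak[1] - centre[1]   # (dy, dx) relative to zero lag

    # Example with three synthetic frames shifted by 2 pixels per step along one axis:
    base = np.random.rand(64, 64)
    frames = [np.roll(base, shift=2 * k, axis=1) for k in range(3)]
    dy, dx = superposed_displacement(frames)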
mPano: cloud-based mobile panorama view from single picture
NASA Astrophysics Data System (ADS)
Li, Hongzhi; Zhu, Wenwu
2013-09-01
A panorama view provides people with an informative and natural user experience for representing a whole scene. Advances in mobile augmented reality, mobile-cloud computing, and the mobile internet can enable panorama views on mobile phones with new functionalities, such as querying anytime and anywhere where a landmark picture was taken and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limited computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system, named "mPano", that can generate and display a panorama view on mobile devices from a single picture. In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to obtain spatially adjacent images using both tag and content information from the single picture. Second, we propose a cloud-based parallel server stitching approach to generate the panorama view in the cloud, in contrast to today's local-client stitching approach, which is almost impossible for mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrate the effectiveness of our system and the proposed key component technologies, especially for landmark images.
Cloud Arcs in the Western Pacific
NASA Technical Reports Server (NTRS)
2002-01-01
Small cumulus clouds in this natural-color view from the Multi-angle Imaging SpectroRadiometer have formed a distinctive series of quasi-circular arcs. Clues regarding the formation of these arcs can be found by noting that larger clouds exist in the interior of each arc. The interior clouds are thicker and likely to be more convectively active than the other clouds, causing much of the air near the centers of the arcs to rise. This air spreads out horizontally in all directions as it rises and continues to spread out as it begins to sink back to the surface. This pushes any existing small cumulus clouds away from the central region of convection. As the air sinks, it also warms, preventing other small clouds from forming, so that the regions just inside the arcs are kept clear. At the arcs, the horizontal flow of sinking air is now quite weak, and on meeting the undisturbed air it can rise again slightly, possibly assisting in the formation of new small cumulus clouds. Although examples of the continuity of air, in which every rising air motion must be compensated by a sinking motion elsewhere, are very common, the degree of organization exhibited here is relatively rare, as the wind field at different altitudes usually disrupts such patterns. The degree of self-organization of this cloud image, whereby three or four such circular events form a quasi-periodic pattern, probably also requires a relatively uncommon combination of wind, temperature and humidity conditions for it to occur. The image was acquired by MISR's nadir camera on March 11, 2002, and is centered west of the Marshall Islands. Enewetak Atoll is discernible through thin cloud as the turquoise band near the right-hand edge of the image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. This image is a portion of the data acquired during Terra orbit 11863, and covers an area of about 380 kilometers x 345 kilometers. It utilizes data from blocks 80 to 82 within World Reference System-2 path 90. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
2017-06-22
This enhanced-color image of Jupiter's bands of light and dark clouds was created by citizen scientists Gerald Eichstädt and Seán Doran using data from the JunoCam imager on NASA's Juno spacecraft. Three of the white oval storms known as the "String of Pearls" are visible near the top of the image. Each of the alternating light and dark atmospheric bands in this image is wider than Earth, and each rages around Jupiter at hundreds of miles (kilometers) per hour. The lighter areas are regions where gas is rising, and the darker bands are regions where gas is sinking. Juno acquired the image on May 19, 2017, at 11:30 a.m. PST (2:30 p.m. EST) from an altitude of about 20,800 miles (33,400 kilometers) above Jupiter's cloud tops. https://photojournal.jpl.nasa.gov/catalog/PIA21393
Leveraging the Cloud for Robust and Efficient Lunar Image Processing
NASA Technical Reports Server (NTRS)
Chang, George; Malhotra, Shan; Wolgast, Paul
2011-01-01
The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use encounters with remote resources. As part of this discussion this paper will outline some of the technologies employed, the reasons for their selection, the resulting performance metrics and the direction the project is headed based upon the demonstrated capabilities thus far.
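A serial sketch of the two ideas described above: cutting a large image into fixed-size tiles and routing public tiles to Amazon S3 while private data stays on local storage. The bucket name, paths and the 'public' flag are hypothetical placeholders, AWS credentials and an existing bucket are assumed, and the Hadoop/MapReduce distribution of the tiling work is not shown here.

    # Sketch: tile a large image and route public tiles to S3, keep private tiles local.
    import os
    import boto3
    from PIL import Image

    Image.MAX_IMAGE_PIXELS = None              # allow very large source images
    s3 = boto3.client("s3")

    def tile_and_store(path, tile_size=512, public=True,
                       bucket="example-lunar-tiles", local_dir="private_tiles"):
        img = Image.open(path)
        w, h = img.size
        os.makedirs(local_dir, exist_ok=True)
        for x in range(0, w, tile_size):
            for y in range(0, h, tile_size):
                tile = img.crop((x, y, min(x + tile_size, w), min(y + tile_size, h)))
                name = f"{os.path.basename(path)}_{x}_{y}.png"
                out = os.path.join(local_dir, name)
                tile.save(out)
                if public:                      # public imagery goes to cloud storage
                    s3.upload_file(out, bucket, name)
                    os.remove(out)              # keep only private tiles on local disk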
Solar Resource Assessment with Sky Imagery and a Virtual Testbed for Sky Imager Solar Forecasting
NASA Astrophysics Data System (ADS)
Kurtz, Benjamin Bernard
In recent years, ground-based sky imagers have emerged as a promising tool for forecasting solar energy on short time scales (0 to 30 minutes ahead). Following the development of sky imager hardware and algorithms at UC San Diego, we present three new or improved algorithms for sky imager forecasting and forecast evaluation. First, we present an algorithm for measuring irradiance with a sky imager. Sky imager forecasts are often used in conjunction with other instruments for measuring irradiance, so this has the potential to decrease instrumentation costs and logistical complexity. In particular, the forecast algorithm itself often relies on knowledge of the current irradiance which can now be provided directly from the sky images. Irradiance measurements are accurate to within about 10%. Second, we demonstrate a virtual sky imager testbed that can be used for validating and enhancing the forecast algorithm. The testbed uses high-quality (but slow) simulations to produce virtual clouds and sky images. Because virtual cloud locations are known, much more advanced validation procedures are possible with the virtual testbed than with measured data. In this way, we are able to determine that camera geometry and non-uniform evolution of the cloud field are the two largest sources of forecast error. Finally, with the assistance of the virtual sky imager testbed, we develop improvements to the cloud advection model used for forecasting. The new advection schemes are 10-20% better at short time horizons.
Dalpé, Gratien; Joly, Yann
2014-09-01
Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. © 2014 Wiley Periodicals, Inc.
Detecting long-duration cloud contamination in hyper-temporal NDVI imagery
NASA Astrophysics Data System (ADS)
Ali, Amjad; de Bie, C. A. J. M.; Skidmore, A. K.
2013-10-01
Cloud contamination affects the quality of hyper-temporal NDVI imagery and its subsequent interpretation. Short-duration cloud impacts are easily removed by using quality flags and an upper envelope filter, but long-duration cloud contamination of NDVI imagery remains. In this paper, an approach that goes beyond the use of quality flags and upper envelope filtering is tested to detect when and where long-duration clouds are responsible for unreliable NDVI readings, so that a user can flag those data as missing. The study is based on the MODIS Terra and the combined Terra-Aqua 16-day NDVI products for the south of Ghana, where persistent cloud cover occurs throughout the year. The combined product could be assumed to have less cloud contamination, since it is based on two images per day. Short-duration cloud effects were removed from the two products using the adaptive Savitzky-Golay filter. Then for each 'cleaned' product an unsupervised classification map was prepared using the ISODATA algorithm, and, by class, plots were prepared to depict changes over time of the means and the standard deviations in NDVI values. By comparing plots of similar classes, long-duration cloud contamination appeared to display a decline in mean NDVI below the lower limit of the 95% confidence interval with a coinciding increase in standard deviation above the upper limit of the 95% confidence interval. Regression analysis was carried out per NDVI class in two randomly selected groups in order to statistically test standard deviation values related to long-duration cloud contamination. A decline in seasonal NDVI values (growing season) below the lower limit of the 95% confidence interval, together with a concurrent increase in standard deviation values above the upper limit of the 95% confidence interval, was noted in 34 NDVI classes. The regression analysis results showed that differences in NDVI class values between the Terra and the Terra-Aqua imagery were significantly correlated (p < 0.05) with the corresponding standard deviation values of the Terra imagery for all NDVI classes of the two selected NDVI groups. The method successfully detects long-duration cloud contamination that results in unreliable NDVI values. The approach offers scientists interested in time series analysis a method of masking by area (class) the periods when pre-cleaned NDVI values remain affected by clouds. The approach requires no additional data for execution purposes but involves unsupervised classification of the imagery to carry out the evaluation of class-specific mean NDVI and standard deviation values over time.
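A rough sketch of the flagging rule described above for a single NDVI class, assuming the class pixels are stacked as a (dates, pixels) array; the confidence-interval construction here is a simplified assumption, not the exact regression-based procedure of the paper.

    # Sketch of the flagging rule: for one NDVI class, flag composite dates where the
    # class mean drops below its lower 95% limit while the class standard deviation
    # rises above its upper 95% limit (simplified confidence intervals).
    import numpy as np

    def flag_long_duration_clouds(ndvi, z=1.96):
        """ndvi: array (n_dates, n_pixels) of upper-envelope-filtered NDVI for one class."""
        mean_t = ndvi.mean(axis=1)             # class mean per composite date
        std_t = ndvi.std(axis=1)               # class standard deviation per date
        n_dates = len(mean_t)
        mean_lo = mean_t.mean() - z * mean_t.std() / np.sqrt(n_dates)  # lower 95% limit
        std_hi = std_t.mean() + z * std_t.std() / np.sqrt(n_dates)     # upper 95% limit
        return (mean_t < mean_lo) & (std_t > std_hi)   # True = likely long-duration cloud

    # Example with synthetic data (23 composite dates, 400 class pixels):
    ndvi = np.clip(np.random.normal(0.6, 0.05, (23, 400)), 0, 1)
    ndvi[7, :200] -= 0.4                       # simulate partial cloud contamination
    flags = flag_long_duration_clouds(ndvi)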
NIMS Spectral Maps of Jupiter's Great Red Spot
NASA Technical Reports Server (NTRS)
1996-01-01
The Near-Infrared Mapping Spectrometer (NIMS) instrument looks at Jupiter's Great Red Spot, in these views from June 26, 1996. NIMS studies infrared wavelengths of light that our eye cannot see. These maps are at four different infrared wavelengths, each one picked to reveal something different about the atmosphere.
The top image is a false color map of a wavelength that is at the red edge of our ability to see. It shows the shapes of features that we would see with our eyes. The second map is of ammonia ice, red showing where the most ice is, blue where none exists. The differences between this and the first image are due to the amount and size of ammonia ice crystals. The third map down is from a wavelength that shows cloud heights, with the highest clouds in red, and the lowest in blue. The bottom map uses a wavelength that shows the hot Jupiter shining through the clouds. Red represents the thinnest clouds, and blue is thickest where it is more difficult to see below. Comparing the bottom two images, note that the highest clouds are in the center of the Great Red Spot, while there are relatively few clouds around the edges. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Gravity Waves Ripple over Marine Stratocumulus Clouds
NASA Technical Reports Server (NTRS)
2004-01-01
In this natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR), a fingerprint-like gravity wave feature occurs over a deck of marine stratocumulus clouds. Similar to the ripples that occur when a pebble is thrown into a still pond, such 'gravity waves' sometimes appear when the relatively stable and stratified air masses associated with stratocumulus cloud layers are disturbed by a vertical trigger from the underlying terrain, or by a thunderstorm updraft or some other vertical wind shear. The stratocumulus cellular clouds that underlie the wave feature are associated with sinking air that is strongly cooled at the level of the cloud-tops; such clouds are common over mid-latitude oceans when the air is unperturbed by cyclonic or frontal activity. This image is centered over the Indian Ocean (at about 38.9 degrees South, 80.6 degrees East), and was acquired on October 29, 2003. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 20545. The image covers an area of 245 kilometers x 378 kilometers, and uses data from blocks 121 to 122 within World Reference System-2 path 134. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Wilson, Adam M; Jetz, Walter
2016-03-01
Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitats, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.
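A minimal sketch of how monthly cloud frequencies can be accumulated from a stack of binary cloud masks; the array shapes and synthetic data are assumptions for illustration, not the MODIS processing used in the study.

    # Sketch: monthly cloud frequency from a stack of daily binary cloud masks
    # (1 = cloudy, 0 = clear, NaN = no observation), shaped (n_days, rows, cols).
    import numpy as np

    def monthly_cloud_frequency(masks, months):
        """months: array of length n_days giving the month (1-12) of each mask."""
        freq = np.full((12,) + masks.shape[1:], np.nan)
        for m in range(1, 13):
            sel = masks[months == m]
            if len(sel):
                freq[m - 1] = np.nanmean(sel, axis=0)   # fraction of cloudy observations
        return freq

    # Example with synthetic data for one year of daily masks:
    days = np.arange(1, 366)
    months = ((days - 1) // 31 + 1).clip(1, 12)
    masks = (np.random.rand(365, 90, 90) > 0.5).astype(float)
    freq = monthly_cloud_frequency(masks, months)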
Mars Odyssey View of Morning Clouds in Canyon
2016-04-05
Light blue clouds fill Coprates Chasma, part of Valles Marineris, the vast Grand Canyon of Mars. The clouds are mostly ice crystals, and they appear blue in this image from NASA's Mars Odyssey orbiter.
Investigation of mesoscale cloud features viewed by LANDSAT
NASA Technical Reports Server (NTRS)
Sherr, P. E. (Principal Investigator); Feteris, P. J.; Lisa, A. S.; Bowley, C. J.; Fowler, M. G.; Barnes, J. C.
1976-01-01
The author has identified the following significant results. Some 50 LANDSAT images displaying mesoscale cloud features were analyzed. This analysis was based on the Rayleigh-Kuettner model describing the formation of that type of mesoscale cloud feature. This model lends itself to computation of the average wind speed in northerly flow from the dimensions of the cloud band configurations measured from a LANDSAT image. In nearly every case, the necessary conditions of a curved wind profile and orientation of the cloud streets within 20 degrees of the direction of the mean wind in the convective layer were met. Verification of the results by direct observation was hampered, however, by the incompatibility of the resolution of conventional rawinsonde observations with the scale of the banded cloud patterns measured from LANDSAT data. Comparison seems to be somewhat better in northerly flows than in southerly flows, with the largest discrepancies in wind speed being within 8 m/sec, or a factor of two.
Photographer : JPL Range : 6.5 million kilometers (4 million miles) Six violet images of Jupiter
NASA Technical Reports Server (NTRS)
1979-01-01
Photographer : JPL Range : 6.5 million kilometers (4 million miles) Six violet images of Jupiter make up the mosaic photo, showing the Great Red Spot as a swirling vortex-type motion. This motion is also seen in several nearby white clouds. These bright white clouds and the Red Spot are rotating in a counterclockwise direction, except for the peculiar filamentary cloud to the right of the Red Spot, which is rotating clockwise. The top of the picture shows the turbulence from the equatorial jet and more northerly atmospheric currents. The smallest clouds shown are only 70 miles (120 km) across.
Cloud and aerosol polarimetric imager
NASA Astrophysics Data System (ADS)
Zhang, Junqiang; Shao, Jianbing; Yan, Changxiang
2014-02-01
The Cloud and Aerosol Polarimetric Imager (CAPI), China's first spaceborne cloud and aerosol polarimetric detector, is being developed to acquire atmospheric cloud and aerosol data and retrieve aerosol optical and microphysical properties, thereby increasing the retrieval precision of greenhouse gases (GHGs). The instrument is neither a Polarization and Directionality of the Earth's Reflectances (POLDER) nor a Directional Polarimetric Camera (DPC) type of polarized camera. It is a multispectral push-broom system using linear detectors that can acquire spectral data in 5 bands, from the ultraviolet (UV) to the SWIR, for the same ground feature at the same time without any moving parts. This paper describes the CAPI instrument characteristics, composition, calibration, and recent development.
NASA Technical Reports Server (NTRS)
2007-01-01
The top cloud layer on Jupiter is thought to consist of ammonia ice, but most of that ammonia 'hides' from spectrometers. It does not absorb light in the same way ammonia does. To many scientists, this implies that ammonia churned up from lower layers of the atmosphere 'ages' in some way after it condenses, possibly by being covered with a photochemically generated hydrocarbon mixture. The New Horizons Linear Etalon Imaging Spectral Array (LEISA), the half of the Ralph instrument that is able to 'see' in infrared wavelengths that are absorbed by ammonia ice, spotted these clouds and watched them evolve over five Jupiter days (about 40 Earth hours). In these images, spectroscopically identified fresh ammonia clouds are shown in bright blue. The largest cloud appeared as a localized source on day 1, intensified and broadened on day 2, became more diffuse on days 3 and 4, and disappeared on day 5. The diffusion seemed to follow the movement of a dark spot along the boundary of the oval region. Because the source of this ammonia lies deeper than the cloud, images like these can tell scientists much about the dynamics and heat conduction in Jupiter's lower atmosphere.
Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory
NASA Astrophysics Data System (ADS)
Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro
2016-04-01
Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, which comprises laser scanners and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
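A minimal sketch of the point-cloud-to-image reprojection step described above using OpenCV's pinhole camera model; the intrinsic and extrinsic calibration values and the sign coordinates are placeholders, not the LYNX system's actual calibration.

    # Sketch: reproject 3D traffic-sign points from the LiDAR point cloud onto a
    # synchronized 2D image with a pinhole camera model. All calibration values
    # below are placeholders; the 3D points are assumed already expressed in a
    # frame aligned with the camera (z pointing forward).
    import numpy as np
    import cv2

    sign_points = np.array([[-1.8, -0.5, 12.3],   # hypothetical 3D sign centroids
                            [-2.0, -0.6, 14.1]], dtype=np.float64)

    K = np.array([[1200.0, 0.0, 960.0],           # camera intrinsic matrix (placeholder)
                  [0.0, 1200.0, 540.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                            # assume negligible lens distortion
    rvec = np.zeros(3)                            # rotation point cloud -> camera (placeholder)
    tvec = np.zeros(3)                            # translation (placeholder)

    pixels, _ = cv2.projectPoints(sign_points, rvec, tvec, K, dist)
    print(pixels.reshape(-1, 2))                  # 2D pixel locations for image analysis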
Venus - Lower-level Clouds As Seen By NIMS
NASA Technical Reports Server (NTRS)
1990-01-01
These images are two versions of a near-infrared map of lower-level clouds on the night side of Venus, obtained by the Near Infrared Mapping Spectrometer aboard the Galileo spacecraft as it approached the planet February 10, 1990. Taken from an altitude of about 60,000 miles above the planet, at an infrared wavelength of 2.3 microns (about three times the longest wavelength visible to the human eye) the map shows the turbulent, cloudy middle atmosphere some 30-33 miles above the surface, 6-10 miles below the visible cloudtops. The image to the left shows the radiant heat from the lower atmosphere (about 400 degrees Fahrenheit) shining through the sulfuric acid clouds, which appear as much as 10 times darker than the bright gaps between clouds. This cloud layer is at about -30 degrees Fahrenheit, at a pressure about 1/2 Earth's atmospheric pressure. About 2/3 of the dark hemisphere is visible, centered on longitude 350 West, with bright slivers of daylit high clouds visible at top and bottom left. The right image, a modified negative, represents what scientists believe would be the visual appearance of this mid-level cloud deck in daylight, with the clouds reflecting sunlight instead of blocking out infrared from the hot planet and lower atmosphere. Near the equator, the clouds appear fluffy and blocky; farther north, they are stretched out into East-West filaments by winds estimated at more than 150 mph, while the poles are capped by thick clouds at this altitude. The Near Infrared Mapping Spectrometer (NIMS) on the Galileo spacecraft is a combined mapping (imaging) and spectral instrument. It can sense 408 contiguous wavelengths from 0.7 microns (deep red) to 5.2 microns, and can construct a map or image by mechanical scanning. It can spectroscopically analyze atmospheres and surfaces and construct thermal and chemical maps. Designed and operated by scientists and engineers at the Jet Propulsion Laboratory, NIMS involves 15 scientists in the U.S., England, and France. The Galileo Project is managed for NASA's Office of Space Science and Applications by JPL; its mission is to study the planet Jupiter and its satellites and magnetosphere after multiple gravity-assist flybys at Venus and the Earth.