Application of cellular automata approach for cloud simulation and rendering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher Immanuel, W.; Paul Mary Deborrah, S.; Samuel Selvaraj, R.
Current techniques for creating clouds in games and other real-time applications produce static, homogeneous clouds. While viable for real-time use, these clouds lack the organic feel of clouds in nature. The clouds generated with our cellular automata approach, when viewed over a period of time, deform their initial shape and move in a more organic, dynamic way. With this cloud-shaping technology it should be possible in the future to create even more cloud shapes in real time under additional forces. Clouds are an essential part of any computer model of a landscape or animation of an outdoor scene. Realistic animation of clouds is also important for creating scenes for flight simulators, movies, games, and other applications. Our goal was to create a realistic animation of clouds.
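The cellular-automaton approach named in the title can be illustrated with a minimal sketch. The rules below follow the well-known Dobashi-style boolean formulation (humidity, phase activation, and cloud bits per cell); the exact rule set and function names here are illustrative assumptions, not the authors' code:

```python
def step(hum, act, cld):
    """One update of a Dobashi-style cloud cellular automaton on a 2D grid.
    hum/act/cld are 2D lists of 0/1 bits: humidity, phase activation, cloud."""
    h, w = len(hum), len(hum[0])

    def act_neighbor(i, j):
        # OR of activation bits in the 4-neighborhood (boundary cells see 0)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and act[ni][nj]:
                return 1
        return 0

    # humidity is consumed where activation fires; cloud persists once formed;
    # activation spreads into humid, inactive cells next to active ones
    hum2 = [[hum[i][j] and not act[i][j] for j in range(w)] for i in range(h)]
    cld2 = [[cld[i][j] or act[i][j] for j in range(w)] for i in range(h)]
    act2 = [[(not act[i][j]) and hum[i][j] and act_neighbor(i, j)
             for j in range(w)] for i in range(h)]
    to01 = lambda g: [[1 if v else 0 for v in row] for row in g]
    return to01(hum2), to01(act2), to01(cld2)
```

Iterating `step` grows cloud regions outward from activated cells, which is what produces the deforming, organic motion the abstract describes; a renderer would then smooth the binary `cld` field into densities.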
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos
2016-04-01
This paper proposes a new approach to improving 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that can virtualize most system components, although its support for 3D rendering is still in its infancy. The architecture then exploits the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
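The hierarchical, Haar-like transform the abstract describes can be sketched in one dimension. This is an illustrative analogue of the region-adaptive hierarchical transform (RAHT), not the paper's implementation: weighted Haar butterflies merge adjacent occupied nodes level by level, with each node's weight accumulating the number of points merged into it:

```python
import math

def raht_1d(values):
    """Illustrative 1D analogue of RAHT. Returns (dc, ac_coeffs).
    Weights start at 1 per point and accumulate as nodes merge."""
    nodes = [(1.0, float(v)) for v in values]
    ac = []
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes) - 1, 2):
            (w1, a), (w2, b) = nodes[i], nodes[i + 1]
            s = math.sqrt(w1 + w2)
            dc = (math.sqrt(w1) * a + math.sqrt(w2) * b) / s   # low-pass
            hi = (math.sqrt(w2) * a - math.sqrt(w1) * b) / s   # high-pass
            ac.append(hi)
            nxt.append((w1 + w2, dc))
        if len(nodes) % 2:            # odd node passes through unchanged
            nxt.append(nodes[-1])
        nodes = nxt
    return nodes[0][1], ac
```

Because each butterfly is orthonormal, energy is preserved (the sum of squared inputs equals DC² plus the sum of squared AC coefficients), which is part of what makes per-sub-band Laplace modeling of the AC coefficients effective.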
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop testing has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyperspectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models that work in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. This radiometry sub-model utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background, or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures.
All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
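The path-length-dependent extinction and emission for transmissive bodies mentioned above reduces, to first order, to a Beer-Lambert blend of background and body radiance. A hedged single-band sketch (DIRSIG's actual radiometry is spectral and far more complete; the parameter names here are illustrative):

```python
import math

def path_radiance(l_background, kappa, path_len, l_body):
    """First-order apparent radiance looking through a transmissive body
    (plume/cloud): background attenuated by Beer-Lambert transmission plus
    the body's own emission filling the remainder."""
    tau = math.exp(-kappa * path_len)          # path transmission in [0, 1]
    return l_background * tau + l_body * (1.0 - tau)
```

With zero extinction the background passes through unchanged; as optical depth grows, the observed radiance approaches that of the body itself.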
NASA Astrophysics Data System (ADS)
Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.
2017-10-01
The ocean carbon cycle has a significant influence on global climate and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware, globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both the spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a time series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
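The temporal half of the GPU interpolation framework amounts, per texel, to blending the two nearest time steps of the flux field. A CPU-side sketch of that per-texel blend (the paper does this in a CUDA kernel during rendering; this function is only a stand-in):

```python
def lerp_field(field_a, field_b, t):
    """Linear temporal interpolation between two satellite-derived 2D flux
    fields (lists of rows), mirroring on the CPU the per-texel blend a CUDA
    kernel would perform. t in [0, 1] moves from field_a to field_b."""
    assert 0.0 <= t <= 1.0
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(field_a, field_b)]
```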
Research on Visualization of Ground Laser Radar Data Based on Osg
NASA Astrophysics Data System (ADS)
Huang, H.; Hu, C.; Zhang, F.; Xue, H.
2018-04-01
Three-dimensional (3D) laser scanning is an advanced technology integrating optics, mechanics, electronics, and computing. It can scan the complete shape and form of spatial objects with high precision, allowing the point cloud data of a ground object to be collected directly and structured for rendering. A capable 3D rendering engine is then needed to optimize and display the 3D model in order to meet the demands of real-time realistic rendering and scene complexity. OpenSceneGraph (OSG) is an open-source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and it is therefore widely used in the fields of virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulation network data files in .obj format, the platform implements display of 3D laser point clouds and triangulated meshes. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.
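The .txt point cloud input mentioned above is typically one `x y z [r g b]` record per line. A minimal, hypothetical loader for such files (real scanner exports vary in delimiter and field order, so this is a sketch, not the platform's parser):

```python
def load_xyz(text):
    """Parse an ASCII point cloud: one 'x y z [r g b]' record per line.
    Returns a list of (x, y, z) tuples; extra fields and blank or short
    lines are ignored."""
    pts = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue                      # skip blank/malformed records
        pts.append(tuple(float(v) for v in parts[:3]))
    return pts
```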
AstroCloud: An Agile platform for data visualization and specific analyses in 2D and 3D
NASA Astrophysics Data System (ADS)
Molina, F. Z.; Salgado, R.; Bergel, A.; Infante, A.
2017-07-01
Nowadays, astronomers commonly run their own tools, or distributed computational packages, for data analysis and then visualize the results with generic applications. This chain of processes comes at a high cost: (a) analyses are applied manually and are therefore difficult to automate, and (b) data have to be serialized, increasing the cost of parsing and saving intermediary data. We are developing AstroCloud, an agile, multipurpose visualization platform intended for specific analyses of astronomical images (https://astrocloudy.wordpress.com). This platform incorporates domain-specific languages, which make it easily extensible. AstroCloud supports customized plug-ins, which translate into reduced time spent on data analysis. Moreover, it also supports 2D and 3D rendering, including interactive features in real time. AstroCloud is under development; we are currently implementing different options for data reduction and physical analyses.
NASA Astrophysics Data System (ADS)
Anstey, Josephine; Pape, Dave
2013-03-01
In this paper we discuss Mrs. Squandertime, a real-time, persistent simulation of a virtual character, her living room, and the view from her window, designed to be a wall-size, projected art installation. Through her large picture window, the eponymous Mrs. Squandertime watches the sea: boats, clouds, gulls, the tide going in and out, people on the sea wall. The hundreds of images that compose the view are drawn from historical printed sources. The program that assembles and animates these images is driven by weather, time, and tide data constantly updated from a real physical location. The character herself is rendered photographically in a series of slowly dissolving stills which correspond to the character's current behavior.
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents the cloud computing environments, network principles, and methods for graphical development in realistic naval simulation, naval robotics, and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near-real-time frameworks with enhanced computing capabilities during distance interactions. E-navigation concepts, coupled with the latest achievements in virtual and augmented reality, will enhance the overall experience, leading to new developments and innovations. We address a multiprocessing situation using advanced technologies and distributed applications covering remote ship scenarios and the automation of ship operations.
Real-time photorealistic stereoscopic rendering of fire
NASA Astrophysics Data System (ADS)
Rose, Benjamin M.; McAllister, David F.
2007-02-01
We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static, fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that, in order to attain real-time frame rates, our method based on billboarding is effective. Slicing is used to simulate depth: 2D images are texture-mapped onto polygons, and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
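The billboarding at the core of the method keeps each fire sprite's quad facing the camera. A sketch of the axis computation (this assumes a world up of +Y and a view direction not parallel to it; the function name and conventions are illustrative, not the paper's formulation):

```python
import math

def billboard_axes(eye, center):
    """Return the 'right' and 'up' unit vectors spanning a camera-facing
    quad at 'center' as seen from 'eye'. Degenerate when the view direction
    is parallel to world-up (+Y)."""
    view = [c - e for c, e in zip(center, eye)]
    n = math.sqrt(sum(v * v for v in view)) or 1.0
    view = [v / n for v in view]
    up = (0.0, 1.0, 0.0)
    right = [view[1] * up[2] - view[2] * up[1],
             view[2] * up[0] - view[0] * up[2],
             view[0] * up[1] - view[1] * up[0]]      # view x up
    rn = math.sqrt(sum(v * v for v in right)) or 1.0
    right = [v / rn for v in right]
    up2 = [right[1] * view[2] - right[2] * view[1],
           right[2] * view[0] - right[0] * view[2],
           right[0] * view[1] - right[1] * view[0]]  # right x view
    return right, up2
```

A renderer would emit the quad corners as `center ± right*w ± up*h`, texture them with fire images, and alpha-blend the slices back to front to simulate depth.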
Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.
Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael
2014-09-01
In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, as reflected by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and both virtual and real objects. A few realistic rendering techniques have been designed to overcome these obstacles, most of which are limited to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. Second, a shadow generation algorithm is applied: Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering. PMID:25268480
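The first phase, sky colour as a function of sun position, can be caricatured with a simple elevation-driven ramp. This toy ramp and its RGB anchor values are assumptions made purely for illustration; the paper's sky model is far more principled:

```python
def sky_colour(sun_elevation_deg):
    """Toy sky-colour ramp driven by sun elevation (degrees above horizon):
    blends night, dusk-orange and daylight-blue RGB anchors linearly."""
    night = (0.02, 0.02, 0.08)
    dusk = (0.9, 0.5, 0.2)
    day = (0.4, 0.6, 0.95)
    e = max(-10.0, min(30.0, sun_elevation_deg))  # clamp the useful range
    if e <= 0:                   # below horizon: night -> dusk
        t = (e + 10.0) / 10.0
        a, b = night, dusk
    else:                        # above horizon: dusk -> day
        t = e / 30.0
        a, b = dusk, day
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))
```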
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
High-fidelity real-time maritime scene rendering
NASA Astrophysics Data System (ADS)
Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin
2011-06-01
The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
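The key decoding trick described above, consolidating the inverse transform with dequantization, can be shown in miniature: scale the basis vectors by the quantization step offline, so that decoding a block becomes a single weighted sum of precomputed vectors. A sketch with an illustrative 2-sample basis (the paper operates on 3D blocks on the GPU; names here are assumptions):

```python
def make_decoder(basis, qstep):
    """Fold uniform dequantization into the inverse-transform basis.
    'basis' is a list of basis vectors (rows); decoding quantized
    coefficients is then one weighted sum over the pre-scaled rows."""
    scaled = [[qstep * b for b in row] for row in basis]  # done offline
    def decode(qcoeffs):
        n = len(scaled[0])
        out = [0.0] * n
        for c, row in zip(qcoeffs, scaled):
            for i in range(n):
                out[i] += c * row[i]      # reprojection, no separate dequant
        return out
    return decode
```

The design point mirrors the paper's asymmetry: the encoder may do arbitrarily expensive optimization offline, while the decoder's work stays a fixed, branch-free accumulation well suited to GPU execution.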
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing rays of light through an image plane; it can simulate complicated optical phenomena like refraction, depth of field, and motion blur. Compared with rasterization, ray tracing can achieve more realistic rendering results, but at greater computational cost: rendering even a simple scene can consume a great deal of time. With the improvement of GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly in the fragment shader, covering surface intersection, importance sampling, and progressive rendering. With the help of the GPU's powerful throughput, it achieves real-time rendering of simple scenes.
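The surface-intersection stage listed above is the per-pixel workhorse of a fragment-shader ray tracer. A CPU sketch of the simplest case, ray-sphere intersection with a normalized ray direction (illustrative; the shader version would run once per fragment):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None on a
    miss. 'direction' is assumed to be unit length, so the quadratic's
    leading coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer root
    return t if t > 1e-6 else None       # ignore hits behind the origin
```

Progressive rendering then keeps a running average over frames, e.g. `accum += (sample - accum) / n`, so the image refines over time while the application stays interactive.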
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their time-consuming rendering or the pre-processing stage required whenever the volume data change. Even if 3D texture mapping is used, repeated volume loading remains time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If a brick passes the test, it is defined as a 3D texture via OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
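The brick-coherence test described above, reloading a brick as a 3D texture only when it actually changed, can be sketched with a 1D stand-in for the 3D bricks (the function name and the per-voxel tolerance test are illustrative; the paper defines its own similarity measure):

```python
def changed_bricks(prev, curr, brick=4, tol=0.0):
    """Return indices of bricks in 'curr' that differ from the cached volume
    'prev' by more than 'tol' in any voxel; only those bricks would be
    re-uploaded as textures. Volumes are flat sequences of equal length."""
    dirty = []
    for start in range(0, len(curr), brick):
        a = prev[start:start + brick]
        b = curr[start:start + brick]
        if any(abs(x - y) > tol for x, y in zip(a, b)):
            dirty.append(start // brick)
    return dirty
```

For slowly deforming anatomy, most bricks pass the test between consecutive time steps, which is exactly where the loading-time savings come from.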
NASA Astrophysics Data System (ADS)
Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos
2014-05-01
Mobile cloud computing is receiving worldwide momentum for ubiquitous, on-demand cloud services for mobile users, provided by Amazon, Google, etc. at low capital cost. However, Internet-centric clouds introduce wide area network (WAN) delays that are often intolerable for real-time applications such as video streaming. One promising approach to addressing this challenge is to deploy decentralized mini-cloud facilities known as cloudlets to enable localized cloud services. When supported by local wireless connectivity, a wireless cloudlet is expected to offer low-cost, high-performance cloud services for its users. In this work, we implement a realistic framework that comprises both a popular Internet cloud (Amazon Cloud) and a real-world cloudlet (based on Ubuntu Enterprise Cloud (UEC)) for mobile cloud users in a wireless mesh network. We focus on real-time video streaming over the HTTP standard and implement a typical application. We further perform a comprehensive comparative analysis and empirical evaluation of the application's performance when it is delivered over the Internet cloud and the cloudlet respectively. The study quantifies the influence of the two different cloud networking architectures on supporting real-time video streaming. We also enable movement of the users in the wireless mesh network and investigate the effect of user mobility on mobile cloud computing over the cloudlet and the Amazon cloud respectively. Our experimental results demonstrate the advantages of the cloudlet paradigm over its Internet cloud counterpart in supporting the quality of service of real-time applications.
An Approach of Web-based Point Cloud Visualization without Plug-in
NASA Astrophysics Data System (ADS)
Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei
2016-11-01
With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, however, point cloud visualization was limited to desktop-based solutions; several web renderers are now available. This paper addresses the current issues in web-based point cloud visualization and proposes a method that requires no browser plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments prove that the new model has great practical value, as it avoids the shortcomings of existing WebGIS solutions.
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence: mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
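Real-time haptic rendering of a point cloud can be approximated with a penalty force from the nearest point. A hedged sketch (the system's actual 3D haptic rendering is more elaborate; the parameters are illustrative, and a real implementation would use a spatial index rather than a linear scan over the cloud):

```python
import math

def haptic_force(probe, points, radius=0.05, k=200.0):
    """Penalty force pushing a haptic proxy away from a point cloud: find
    the nearest point, and if the probe is within 'radius' of it, apply a
    spring force (stiffness k) along the offset direction."""
    best, best_d = None, float("inf")
    for p in points:                      # linear scan; use a k-d tree in practice
        d = math.dist(probe, p)
        if d < best_d:
            best, best_d = p, d
    if best is None or best_d >= radius or best_d == 0.0:
        return (0.0, 0.0, 0.0)            # out of range or degenerate
    scale = k * (radius - best_d) / best_d
    return tuple((a - b) * scale for a, b in zip(probe, best))
```

The 1 kHz update rates haptics demand are why the nearest-neighbor query, not the force law, is the performance-critical part.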
Elasticity-based three dimensional ultrasound real-time volume rendering
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.
2009-02-01
Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use, however, has been hindered by the lack of real-time visualization methods capable of producing high-quality 3D renderings of the target or surface of interest. Volume rendering is a well-known visualization method that can display clear surfaces from acquired volumetric data and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target or surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of significant amounts of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on a GPU, which gives an update rate of 40 volumes/sec.
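The core idea, computing B-mode voxel opacity from strain and optionally mixing it with a conventional gradient-based opacity, can be sketched as a transfer function. The linear ramps, parameter names, and default values here are assumptions for illustration, not the paper's calibrated functions:

```python
def voxel_opacity(strain, grad_mag, alpha=0.7, strain_max=0.1, grad_max=1.0):
    """Blend a strain-driven opacity with a gradient-based one: stiff
    (low-strain) voxels are rendered more opaque, and 'alpha' weights the
    elasticity term against the conventional gradient term."""
    s = min(max(strain / strain_max, 0.0), 1.0)
    o_strain = 1.0 - s                    # low strain (stiff tissue) -> opaque
    o_grad = min(max(grad_mag / grad_max, 0.0), 1.0)
    return alpha * o_strain + (1.0 - alpha) * o_grad
```

Because strain is computed rapidly between consecutive scans, this gives a classification signal for soft tissue where raw B-mode intensity, dominated by speckle, does not.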
Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Graves, S. J.; Vernon, F.; Kerkez, B.; Chandra, C. V.; Keiser, K.; Martin, C.
2014-12-01
Access, utilization, and management of real-time data continue to be challenging for decision makers as well as researchers in several scientific fields. This presentation will highlight infrastructure aimed at addressing some of the gaps in handling real-time data, particularly in increasing the accessibility of these data to the scientific community through cloud services. The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) system addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are allowing many new transient phenomena in space-time to be observed; however, real-time decision-making is infeasible in many cases, because streaming scientific data are often locked down and sent only to proprietary in-house tools or displays. This lack of accessibility to the broader scientific community prevents algorithm development and workflows initiated by these data streams. As part of NSF's EarthCube initiative, CHORDS proposes to make real-time data available to the academic community via cloud services. The CHORDS infrastructure will enhance the role of real-time data within the geosciences, specifically expanding the potential of streaming data sources in enabling adaptive experimentation and real-time hypothesis testing. Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization, and modeling tools.
An improved method of continuous LOD based on fractal theory in terrain rendering
NASA Astrophysics Data System (ADS)
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, 3D terrain rendering algorithms have become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper presents an improved terrain rendering method that enhances the traditional continuous level-of-detail (LOD) technique using fractal theory. With this method, the program need not repeatedly traverse memory to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of different regions according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.
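Selecting terrain detail from viewpoint motion and per-region fractal parameters can be sketched as a threshold scheme: nearer and rougher regions get finer meshes. The distance-doubling thresholds and the roughness bonus below are illustrative assumptions, not the paper's exact criterion:

```python
def lod_level(dist, roughness, max_level=6, d0=100.0):
    """Pick a terrain refinement level (0 = coarsest) from viewpoint
    distance and a per-region fractal roughness parameter in [0, 1].
    Each doubling of distance beyond d0 drops one level."""
    level = max_level
    d = d0
    while level > 0 and dist > d:
        d *= 2.0
        level -= 1
    # fractally rough regions keep one extra refinement level
    if roughness > 0.5 and level < max_level:
        level += 1
    return level
```

The point of driving this from precomputed fractal parameters, rather than remeshing, is that the LOD decision becomes a cheap per-region lookup as the viewpoint moves.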
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
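The core operation described above, warping a reference CT by a time-variant displacement field and mapping the haptic device position into reference-image space, can be sketched as below. This is a simplified nearest-neighbour illustration under assumed array shapes, not the authors' ray-casting implementation.

```python
import numpy as np

def warp_volume(reference, displacement):
    """Warp a reference image/volume by a per-voxel displacement
    field (nearest-neighbour lookup): out[x] = ref[x + u(x)]."""
    shape = reference.shape
    idx = np.indices(shape)                      # voxel coordinates
    src = idx + np.rint(displacement).astype(int)
    for d in range(len(shape)):                  # clamp to bounds
        src[d] = np.clip(src[d], 0, shape[d] - 1)
    return reference[tuple(src)]

def map_haptic_point(p, displacement_at_p):
    """Translate a haptic device position into reference-image space
    by applying the local displacement."""
    return [pi + ui for pi, ui in zip(p, displacement_at_p)]
```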
Impact of different cloud deployments on real-time video applications for mobile video cloud users
NASA Astrophysics Data System (ADS)
Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos
2015-02-01
The latest trend of accessing mobile cloud services through wireless network connectivity has grown rapidly among both entrepreneurs and home end users. Although existing public cloud service vendors such as Google and Microsoft Azure provide on-demand cloud services at affordable cost for mobile users, a number of challenges remain in achieving high-quality mobile cloud-based video applications, especially due to the bandwidth-constrained and error-prone mobile network connectivity, which is the communication bottleneck for end-to-end video delivery. In addition, existing accessible cloud networking architectures differ in terms of their implementation, services, resources, storage, pricing, support and so on, and these differences have varied impacts on the performance of cloud-based real-time video applications. Nevertheless, these challenges and impacts have not been thoroughly investigated in the literature. In our previous work, we implemented a mobile cloud network model that integrates localized and decentralized cloudlets (mini-clouds) and wireless mesh networks. In this paper, we deploy a real-time framework consisting of various existing Internet cloud networking architectures (Google Cloud, Microsoft Azure and Eucalyptus Cloud) and a cloudlet based on Ubuntu Enterprise Cloud over wireless mesh networking technology for mobile cloud end users. It is noted that the increasing trend of accessing real-time video streaming over HTTP/HTTPS is gaining popularity among both research and industrial communities, as it leverages existing web services and the HTTP infrastructure of the Internet. To study the performance under different deployments using different public and private cloud service providers, we employ real-time video streaming over the HTTP/HTTPS standard, and conduct experimental evaluation and in-depth comparative analysis of the impact of different deployments on the quality of service for mobile video cloud users.
Empirical results are presented and discussed to quantify and explain the different impacts resulting from the various cloud deployments, video application and wireless/mobile network settings, and user mobility. Additionally, this paper analyses the advantages, disadvantages, limitations and optimization techniques of the various cloud networking deployments, in particular the cloudlet approach compared with the Internet cloud approach, with recommendations for optimized deployments highlighted. Finally, federated clouds and inter-cloud collaboration challenges and opportunities are discussed in the context of supporting real-time video applications for mobile users.
YaQ: an architecture for real-time navigation and rendering of varied crowds.
Maïm, Jonathan; Yersin, Barbara; Thalmann, Daniel
2009-01-01
The YaQ software platform is a complete system dedicated to real-time crowd simulation and rendering. Fitting multiple application domains, such as video games and VR, YaQ aims to provide efficient algorithms to generate crowds comprising up to thousands of varied virtual humans navigating in large-scale, global environments.
A Framework for Voxel-Based Global Scale Modeling of Urban Environments
NASA Astrophysics Data System (ADS)
Gehrung, Joachim; Hebel, Marcus; Arens, Michael; Stilla, Uwe
2016-10-01
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but has disadvantages that volumetric representations readily address, especially with regard to selective data acquisition, change detection and fast-changing environments. This paper therefore proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are shown on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage and real-time rendering of even large urban models are feasible, even with off-the-shelf hardware.
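The memory advantage of volumetric over point-based representations comes largely from collapsing redundant measurements into occupied cells. A minimal occupancy-grid sketch (not the paper's compression strategies; voxel size and names are assumptions):

```python
def voxelize(points, voxel_size=0.5):
    """Quantize a point cloud into a set of occupied voxel indices;
    duplicate hits on the same cell collapse into one entry, which is
    where the memory saving over raw points comes from."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied
```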
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
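The stochastic idea behind sort-free transparency can be illustrated very simply: if each point is kept with probability equal to its opacity, the expected screen coverage matches the target opacity without any depth ordering. This sketch is only an illustration of that principle, not the authors' rendering algorithm.

```python
import random

def stochastic_subsample(points, opacity, seed=0):
    """Keep each rendering primitive with probability `opacity`;
    expected coverage then equals the opacity, with no depth sorting
    along the line of sight."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < opacity]
```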
On the performance of metrics to predict quality in point cloud representations
NASA Astrophysics Data System (ADS)
Alexiou, Evangelos; Ebrahimi, Touradj
2017-09-01
Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
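A common family of objective metrics for point cloud geometry is the point-to-point (D1) error and its PSNR. The sketch below is a brute-force illustration of that style of metric; it is not necessarily one of the specific metrics evaluated in this paper, and the peak definition is an assumption.

```python
import math

def p2point_mse(cloud_a, cloud_b):
    """Symmetric point-to-point MSE: for each point, squared distance
    to its nearest neighbour in the other cloud (brute force)."""
    def one_way(src, dst):
        total = 0.0
        for p in src:
            total += min(sum((pi - qi) ** 2 for pi, qi in zip(p, q))
                         for q in dst)
        return total / len(src)
    return max(one_way(cloud_a, cloud_b), one_way(cloud_b, cloud_a))

def geometry_psnr(cloud_a, cloud_b, peak):
    """PSNR with the signal peak taken as e.g. the bounding-box diagonal."""
    mse = p2point_mse(cloud_a, cloud_b)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```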
ASSURED CLOUD COMPUTING UNIVERSITY CENTER OF EXCELLENCE (ACC UCOE)
2018-01-18
Research topics include infrastructure security, the design of algorithms and techniques for real-time assuredness in cloud computing, and map-reduce task assignment with data locality.
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains dedicated graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
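The depth-buffer-to-range formula alluded to above typically inverts the perspective depth mapping. A minimal sketch, assuming the standard OpenGL convention and a hypothetical 24-bit buffer; the paper's actual formula and parameters may differ.

```python
def depth_to_range(d, bits=24, near=1.0, far=10000.0):
    """Convert an integer depth-buffer value to eye-space range by
    inverting the standard OpenGL perspective depth mapping."""
    z_ndc = 2.0 * d / (2 ** bits - 1) - 1.0              # map to [-1, 1]
    return 2.0 * near * far / (far + near - z_ndc * (far - near))
```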
ClipCard: Sharable, Searchable Visual Metadata Summaries on the Cloud to Render Big Data Actionable
NASA Astrophysics Data System (ADS)
Saripalli, P.; Davis, D.; Cunningham, R.
2013-12-01
Research firm IDC estimates that approximately 90 percent of enterprise Big Data goes unanalyzed, as 'dark data': an enormous corpus of undiscovered, untagged information residing on data warehouses, servers and Storage Area Networks (SAN). In the geosciences, these data range from unpublished model runs to vast survey data assets to raw sensor data. Many of these are now being collected instantaneously, at a greater volume and in new data formats. Not all of these data can be analyzed or processed in real time, and their features may not be well described at the time of collection. These dark data are a serious data management problem for science organizations of all types, especially ones with mandated or required data reporting and compliance requirements. Additionally, data curators and scientists are encouraged to quantify the impact of their data holdings as a way to measure research success. Deriving actionable insights is the foremost goal of Big Data Analytics (BDA), which is especially true in the geosciences, given their direct impact on most of the pressing global issues. Clearly, there is a pressing need for innovative approaches to making dark data discoverable, measurable, and actionable. We report on ClipCard, a cloud-based SaaS analytic platform for instant summarization, quick search, visualization and easy sharing of metadata summaries from the dark data at hierarchical levels of detail, thus rendering it 'white', i.e., actionable. We present a use case of the ClipCard platform, a cloud-based application which helps generate (abstracted) visual metadata summaries and meta-analytics for environmental data at hierarchical scales within and across big data containers. These summaries and analyses provide important new tools for managing big data and simplifying collaboration through easy-to-deploy sharing APIs.
The ClipCard application solves a growing data management bottleneck by helping enterprises and large organizations to summarize, search, discover, and share the potential in their unused data and information assets. Using Cloud as the base platform enables wider reach, quick dissemination and easy sharing of the metadata summaries, without actually storing or sharing the original data assets per se.
Feasibility study: real-time 3-D ultrasound imaging of the brain.
Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D
2004-10-01
We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.
Real-time volume rendering of digital medical images on an iOS device
NASA Astrophysics Data System (ADS)
Noon, Christian; Holub, Joseph; Winer, Eliot
2013-03-01
Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still quite difficult. This is especially true for 3D volume rendering of digital medical images. Achieving it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
CEDIMS: cloud ethical DICOM image Mojette storage
NASA Astrophysics Data System (ADS)
Guédon, Jeanpierre; Evenou, Pierre; Tervé, Pierre; David, Sylvain; Béranger, Jérome
2012-02-01
DICOM images of patients will increasingly be stored in clouds. However, ethical constraints must apply. In this paper, a method that ensures the two following conditions is presented: 1) the medical information is not readable by the cloud owner, since it is distributed over several clouds; 2) the medical information can be retrieved from any sufficient subset of clouds. To obtain this result with real-time processing, the Mojette transform is used. This paper reviews the interesting features of the Mojette transform in terms of information theory. Since only portions of the original DICOM files are stored in each cloud, their contents are not reachable. For instance, we use 4 different public clouds to save 4 different projections of each file, with the additional condition that any 3 of the 4 projections are enough to reconstruct the original file. Thus, even if a cloud is unavailable when the user wants to load a DICOM file, the other 3 give enough information for real-time reconstruction. The paper presents an implementation on 3 actual clouds. For ethical reasons, we use a DICOM image spread over 3 public clouds to show the obtained confidentiality and possible real-time recovery.
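The "any 3 of 4 shares suffice" property can be illustrated with a much simpler single-erasure code than the Mojette transform: three data chunks plus one XOR parity chunk. This sketch only demonstrates the k-of-n recovery principle; it is not the Mojette transform and offers weaker confidentiality.

```python
def split_3_of_4(data: bytes):
    """Split data into 3 chunks plus one XOR parity chunk; any 3 of
    the 4 shares rebuild the original (a simple stand-in for the
    Mojette transform's k-of-n property)."""
    n = (len(data) + 2) // 3
    chunks = [data[i * n:(i + 1) * n].ljust(n, b'\0') for i in range(3)]
    parity = bytes(a ^ b ^ c for a, b, c in zip(*chunks))
    return chunks + [parity], len(data)

def recover(shares, size, missing):
    """Rebuild the original data when share `missing` (0-3) is lost."""
    known = [s for i, s in enumerate(shares) if i != missing and s is not None]
    rebuilt = bytes(x ^ y ^ z for x, y, z in zip(*known))
    chunks = list(shares)
    chunks[missing] = rebuilt
    return b''.join(chunks[:3])[:size]
```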
Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration
NASA Astrophysics Data System (ADS)
Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.
2013-07-01
This paper presents a 3D platform that enables both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scale 3D measurements based on photogrammetry, documentation produced by experts, and the knowledge of the domains involved, leaving the experts able to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world during the process of archaeological surveys is in fact the main goal of a survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not by themselves solve the problem of surveys. New opportunities for 3D representation are now available, and we must use them and find new ways to link geometry and knowledge. The new platform is able to efficiently manage and process large 3D data (point sets, meshes) thanks to the implementation of space-partitioning methods from the state of the art, such as octrees and kd-trees, and can thus interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds and geometric primitive optimization. The platform provides an interface that enables experts to describe geometric representations of interesting objects, like ashlar blocks, stratigraphic units or generic items (contour, lines, … ), directly on the 3D representation of the site and without explicit links to the underlying algorithms. The platform provides two ways of describing a geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection on the underlying mesh or point cloud. If photographs are not available, or if the expert wants to use only the 3D representation, then he can simply draw object shapes on it.
When 3D representations of objects of a surveyed site are extracted from the mesh, the link with domain-related documentation is made by means of a set of forms designed by experts. Information from these forms is linked with geometry so that documentation can be attached to the viewed objects. Additional semantisation methods related to specific domains have been added to the platform. Beyond realistic rendering of the surveyed site, the platform embeds non-photorealistic rendering (NPR) algorithms. These algorithms can dynamically illustrate objects of interest that are related to knowledge, using specific styles. The whole platform is implemented in a Java framework and relies on a modern, effective 3D engine that makes the latest rendering methods available. We illustrate this work on various photogrammetric surveys, in medieval archaeology with the Shawbak castle in Jordan and in underwater archaeology on different marine sites.
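The kd-tree space partitioning mentioned above is what makes real-time interaction with millions of points possible: nearest-neighbour queries prune most of the cloud. A compact sketch (illustrative only, not the platform's Java implementation):

```python
def build_kdtree(points, depth=0):
    """Recursively build a kd-tree over 3D points, cycling split axes."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(tree, q, depth=0, best=None):
    """Nearest-neighbour query, pruning subtrees that cannot improve
    on the best squared distance found so far."""
    if tree is None:
        return best
    point, left, right = tree
    dist2 = sum((a - b) ** 2 for a, b in zip(point, q))
    if best is None or dist2 < best[1]:
        best = (point, dist2)
    axis = depth % 3
    diff = q[axis] - point[axis]
    near_side, far_side = (left, right) if diff < 0 else (right, left)
    best = nearest(near_side, q, depth + 1, best)
    if diff ** 2 < best[1]:          # search ball crosses the split plane
        best = nearest(far_side, q, depth + 1, best)
    return best
```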
Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.; Martin, C. L.; Maskey, M.; Keiser, K.; Dye, M. J.
2015-12-01
The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project, funded as part of NSF's EarthCube initiative, addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are allowing many new transient phenomena in space-time to be observed; however, real-time decision-making is infeasible in many cases, as these streaming data are either completely inaccessible or only available to proprietary in-house tools or displays. This lack of accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Small research teams do not have the resources to develop tools for the broad dissemination of their valuable real-time data and could benefit from an easy-to-use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. This presentation will highlight recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. The CHORDS system will connect these real-time streams via standard services from the Open Geospatial Consortium (OGC) and does so in a way that is simple and transparent to the data provider. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing. 
Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
Existing real-time monitoring systems for multiple physiological parameters have problems such as insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, and poor real-time performance, among other issues caused by the growing scale of data. We therefore proposed a new solution for multiple physiological parameters, with clustered background data storage and processing based on cloud computing. Through our studies, a batch process for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer for the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of the data stream at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time physiological information transmission, storage and analysis for a large amount of data. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solves problems that exist in traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. Technical support is thereby provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home health monitoring of multiple physiological parameters.
Synthesis of Virtual Environments for Aircraft Community Noise Impact Studies
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Sullivan, Brenda M.
2005-01-01
A new capability has been developed for the creation of virtual environments for the study of aircraft community noise. It is applicable for use with both recorded and synthesized aircraft noise. When using synthesized noise, a three-stage process is adopted involving non-real-time prediction and synthesis stages followed by a real-time rendering stage. Included in the prediction-based source noise synthesis are temporal variations associated with changes in operational state, and low frequency fluctuations that are present under all operating conditions. Included in the rendering stage are the effects of spreading loss, absolute delay, atmospheric absorption, ground reflections, and binaural filtering. Results of prediction, synthesis and rendering stages are presented.
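Of the rendering-stage effects listed above, spreading loss and atmospheric absorption are the simplest to state quantitatively. A minimal sketch, assuming spherical spreading (6 dB per distance doubling) and a hypothetical, frequency-independent absorption coefficient; the actual rendering uses frequency-dependent models.

```python
import math

def propagation_loss_db(distance, ref_distance=1.0, absorption_db_per_m=0.003):
    """Total level drop in dB relative to the reference distance:
    spherical spreading loss plus linear atmospheric absorption."""
    spreading = 20.0 * math.log10(distance / ref_distance)
    absorption = absorption_db_per_m * (distance - ref_distance)
    return spreading + absorption
```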
Enhanced backgrounds in scene rendering with GTSIMS
NASA Astrophysics Data System (ADS)
Prussing, Keith F.; Pierson, Oliver; Cordell, Chris; Stewart, John; Nielson, Kevin
2018-05-01
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise or clutter-to-noise ratios to determine whether a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS that better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds and terrain, together with improved non-uniform sky rendering, improves our ability to represent clutter during scene generation.
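A common form of the ratio mentioned above is the signal-to-clutter ratio used to set detection thresholds. The sketch below is one conventional definition, included for illustration; GTSIMS may compute these statistics differently.

```python
def signal_to_clutter_ratio(target_pixels, background_pixels):
    """One conventional SCR: peak target intensity minus background
    mean, divided by the background standard deviation."""
    mean_b = sum(background_pixels) / len(background_pixels)
    var_b = sum((p - mean_b) ** 2 for p in background_pixels) / len(background_pixels)
    return (max(target_pixels) - mean_b) / var_b ** 0.5
```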
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Momose, T.; Oku, S.
It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make even one 3D image. Therefore it has not been practical to make brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT the process of volume rendering is done only once, in the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 sec with VSRT, while it is more than 15 sec with the conventional VRT. The difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical in functional and anatomical correlation studies.
Volumetric ambient occlusion for real-time rendering and games.
Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M
2010-01-01
This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.
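The tangent-sphere interpretation above can be illustrated with a Monte Carlo estimate: sample points uniformly inside the sphere tangent to the surface at p and measure the fraction free of occluders. This is a CPU sketch of the geometric idea only; the paper's GPU algorithm evaluates it analytically per sample, and the names and sample count here are assumptions.

```python
import random

def tangent_sphere_openness(p, normal, radius, is_occluded,
                            n_samples=20000, seed=0):
    """Estimate the unoccluded fraction of the tangent sphere's volume
    (sphere of given radius touching p on the normal side) by
    rejection-sampling points inside it."""
    rng = random.Random(seed)
    centre = [p[i] + normal[i] * radius for i in range(3)]
    free = 0
    for _ in range(n_samples):
        while True:  # uniform point inside the sphere
            v = [rng.uniform(-radius, radius) for _ in range(3)]
            if sum(c * c for c in v) <= radius * radius:
                break
        sample = [centre[i] + v[i] for i in range(3)]
        if not is_occluded(sample):
            free += 1
    return free / n_samples
```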
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been carried out with special reference to user-friendly features for foreseeable applications and to reconstruction speed.
A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model
NASA Astrophysics Data System (ADS)
Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge
2016-12-01
A real-time photo-realistic rendering algorithm of ocean color is introduced in this paper, which considers the impact of an ocean bio-optical model. The ocean bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particles, etc., which make different contributions to the absorption and scattering of light. We decompose the emergent light of the ocean surface into the light reflected from the sun and the sky, and the subsurface scattering light. We establish an ocean surface transmission model based on the ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law; this model's outputs are the incident light parameters of subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the scattered emergent radiation in different directions. Then, we blend the reflection of sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we use two kinds of radiance reflectance, calculated by the Hydrolight radiative transfer model and by our algorithm, to validate the physical realism of our method; the results show that our algorithm can achieve real-time, highly realistic ocean color scenes.
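The surface transmission model described above splits the emergent light into a Fresnel-weighted reflection term and a transmitted subsurface term. A minimal sketch of that blend, using Schlick's approximation in place of the full Fresnel equations; the paper's BRDF-based model is far more elaborate, and the function names and the seawater refractive index of 1.33 are assumptions:

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick approximation of Fresnel reflectance at an air-water
    interface (n2 ~ 1.33 for seawater is an assumption).
    cos_theta is the cosine of the viewing angle from the normal."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def emergent_radiance(cos_theta, sky, sun, subsurface):
    """Blend reflected sky/sun light with subsurface scattering light:
    L = F * (sky + sun) + (1 - F) * subsurface."""
    f = schlick_fresnel(cos_theta)
    return f * (sky + sun) + (1.0 - f) * subsurface
```

At grazing angles F approaches 1 and the reflection dominates; looking straight down, most of what is seen is the subsurface (bio-optical) contribution.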
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions.
Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
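The SR model described above approximates a target point cloud as a sparse linear combination of training clouds. A minimal sketch of that idea, using a few iterations of ISTA (iterative shrinkage-thresholding) to solve the L1-regularized least-squares problem; the paper's actual solver, the ICP correspondence step, and the MSR extension with the Laplacian prior are not reproduced here, and all names are illustrative:

```python
import numpy as np

def sparse_regression(Y, X, lam=0.1, iters=500):
    """ISTA for min_w 0.5*||Y - X w||^2 + lam*||w||_1: the flattened
    target cloud Y is approximated as a sparse combination of the
    training clouds stored as columns of X."""
    w = np.zeros(X.shape[1])
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1/L, L = Lipschitz constant
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - Y))        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # L1 prox
    return w
```

The reconstructed surface would then be obtained by applying the same sparse weights w on the surface (reconstruction) manifold.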
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual Reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, moving left/right and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.
Real-time video streaming in mobile cloud over heterogeneous wireless networks
NASA Astrophysics Data System (ADS)
Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos
2012-06-01
Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless networks.
Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the requirements of real-time performance, reliability and safety for aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments was tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to computing-intensive aerospace experiment workloads. For I/O-intensive workloads, the traditional physical machine is recommended.
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Kerkez, B.; Chandrasekar, V.; Graves, S. J.; Stamps, D. S.; Dye, M. J.; Keiser, K.; Martin, C. L.; Gooch, S. R.
2016-12-01
Cloud-Hosted Real-time Data Services for the Geosciences, or CHORDS, addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios, where informed decisions must be made rapidly. Part of the broader EarthCube initiative, CHORDS seeks to investigate the role of real-time data in the geosciences. Many of the phenomena occurring within the geosciences, ranging from hurricanes and severe weather to earthquakes, volcanoes and floods, can benefit from better handling of real-time data. The National Science Foundation funds many small teams of researchers residing at universities whose currently inaccessible measurements could contribute to a better understanding of these phenomena in order to ultimately improve forecasts and predictions. This lack of easy accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Often the development of tools for the broad dissemination of valuable real-time data is a large IT overhead from a purely scientific perspective and could benefit from an easy-to-use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. We highlight the recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. Examples shown include hydrology, atmosphere and solid earth sensors. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences, and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing.
CHORDS enables real-time data to be discovered and accessed using existing standards for straightforward integration into analysis, visualization and modeling tools.
NASA Astrophysics Data System (ADS)
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
2006-03-01
Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by the high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data running on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest from volume rendering of PET/CT. This works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resultant manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering which can potentially aid in interpretation and diagnosis.
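The "alpha-spike" transfer function is described only qualitatively above: voxels near a selected intensity receive high opacity while the rest are suppressed. One plausible reading, sketched below with an entirely hypothetical linear-falloff shape (the authors' exact curve is not given in the abstract):

```python
def alpha_spike(intensity, center, width, peak_alpha=1.0, base_alpha=0.0):
    """Hypothetical alpha-spike transfer function: full opacity at
    `center`, linear falloff to `base_alpha` at distance `width`,
    suppressing all voxels outside the selected intensity band."""
    d = abs(intensity - center)
    if d >= width:
        return base_alpha
    return base_alpha + (peak_alpha - base_alpha) * (1.0 - d / width)
```

Applied per-voxel before compositing, such a spike would reveal one tissue intensity band (e.g. the lung boundary in CT) while hiding everything else.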
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.; Martin, C. L.; Maskey, M.; Keiser, K.; Dye, M. J.
2015-12-01
The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project was funded under the National Science Foundation's EarthCube initiative. CHORDS addresses the ever-increasing importance of real-time scientific data in the geosciences, particularly in mission-critical scenarios, where informed decisions must be made rapidly. Access to constant streams of real-time data also allows many new transient phenomena in space-time to be observed; however, much of these streaming data are either completely inaccessible or only available to proprietary in-house tools or displays. Small research teams do not have the resources to develop tools for the broad dissemination of their unique real-time data and require an easy-to-use, scalable, cloud-based solution to facilitate this access. CHORDS will make these diverse streams of real-time data available to the broader geosciences community. This talk will highlight recently developed CHORDS portal tools and processing systems which address some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface that is deployed in the cloud, is scalable and can be customized by research teams. A running portal, with operational data feeds from across the nation, will be presented. The processing within the CHORDS system will expose these real-time streams via standard services from the Open Geospatial Consortium (OGC) in a way that is simple and transparent to the data provider, while maximizing the usage of these investments. The ingestion of high-velocity, high-volume and diverse data has allowed the project to explore a NoSQL database implementation. Broad use of the CHORDS framework by geoscientists will help to facilitate adaptive experimentation, model assimilation and real-time hypothesis testing.
Real-time WAMI streaming target tracking in fog
NASA Astrophysics Data System (ADS)
Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe
2016-05-01
Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or security applications. Cloud computing has been considered promising for achieving big data integration from multi-modal sources. In many mission-critical tasks, however, powerful cloud technology cannot satisfy the tight latency tolerance, as the servers are allocated far from the sensing platform; indeed, there is no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog computing, a recently proposed extension and complement to cloud computing, enables computing on-site without outsourcing jobs to a remote cloud. In this work, we have investigated the feasibility of processing streaming WAMI in the fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a fog computing prototype. The experimental results are very encouraging and validate the effectiveness of our fog approach in achieving real-time frame rates.
A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-05-01
We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
Real-time generation of infrared ocean scene based on GPU
NASA Astrophysics Data System (ADS)
Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu
2007-12-01
Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways water interacts with the environment, and the detailed calculation of ocean temperature is rarely considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time usage. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally some resulting infrared images are shown, which are in good accordance with real images.
UAS Photogrammetry for Rapid Response Characterization of Subaerial Coastal Change
NASA Astrophysics Data System (ADS)
Do, C.; Anarde, K.; Figlus, J.; Prouse, W.; Bedient, P. B.
2016-12-01
Unmanned aerial systems (UASs) provide an exciting new platform for rapid response measurement of subaerial coastal change. Here we validate the use of a coupled hobbyist UAS and optical photogrammetry framework for high-resolution mapping of portions of a low-lying barrier island along the Texas Gulf Coast. A DJI Phantom 3 Professional was used to capture 2D nadir images of the foreshore and back-beach environments containing both vegetated and non-vegetated features. The images were georeferenced using ground-truth markers surveyed via real-time kinematic (RTK) GPS and were then imported into Agisoft PhotoScan, a photo-processing software package, to generate 3D point clouds and digital elevation maps (DEMs). The georeferenced elevation models were then compared to RTK measurements to evaluate accuracy and precision. Thus far, DEMs derived from UAS photogrammetry show centimeter resolution for renderings of non-vegetated landforms. High-resolution renderings of vegetated and back-barrier regions have proven more difficult due to interstitial wetlands (surface reflectance) and uneven terrain for GPS backpack surveys. In addition to producing high-quality models, UAS photogrammetry has proven to be more time-efficient than traditional mapping methods, making it advantageous for rapid response deployments. This study is part of a larger effort to relate field measurements of storm hydrodynamics to subaerial evidence of geomorphic change to better understand barrier island response to extreme storms.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
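Multiple importance sampling combines several sampling strategies by weighting each sample according to the probability densities of all strategies. A minimal one-dimensional sketch using the standard balance heuristic; this illustrates MIS in general, not the paper's SSAO-specific combination of stratified and importance sampling, and all names are illustrative:

```python
import math
import random

def balance_heuristic(pdf_s, pdf_o):
    """MIS balance heuristic weight for a sample drawn from the
    strategy with density pdf_s, given the other strategy's pdf_o."""
    return pdf_s / (pdf_s + pdf_o)

def mis_estimate(f, sample_a, pdf_a, sample_b, pdf_b, n=4096):
    """Estimate the integral of f over [0, 1] by combining two
    sampling strategies (one sample from each per iteration) with
    balance-heuristic weights."""
    total = 0.0
    for _ in range(n):
        xa = sample_a()
        pa, pb = pdf_a(xa), pdf_b(xa)
        if pa > 0.0:
            total += balance_heuristic(pa, pb) * f(xa) / pa
        xb = sample_b()
        pa2, pb2 = pdf_a(xb), pdf_b(xb)
        if pb2 > 0.0:
            total += balance_heuristic(pb2, pa2) * f(xb) / pb2
    return total / n
```

The weights make the combined estimator robust: wherever one strategy's density is poor, the other's samples dominate, which is the variance-reduction effect the paper exploits to cut the SSAO sample count.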
a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine
NASA Astrophysics Data System (ADS)
Dai, X.; Xiong, H.; Zheng, X.
2012-07-01
A well-designed cache system has a positive impact on a 3D real-time rendering engine, and as the amount of visualization data grows the effect becomes more pronounced. The cache is what allows the engine to browse smoothly through data that is out of core memory or fetched from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered by the engine; the data that is dispatched according to the position of the view point in the horizontal and vertical directions is stored in the pre-rendering cache; and the data that is eliminated from the previous caches is stored in the elimination cache before being written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the size limit (128 MB in our experiment), no item is eliminated from the file; instead a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: they load data into the rendering cache as soon as possible for rendering, into the pre-rendering cache for the next few frames, and into the elimination cache when it is not needed for the moment. In our experiment, two threads are used.
The first thread organizes the memory cache according to the view point and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The second thread moves the data between the memory and disk caches according to the adding and deleting lists, creates download requests when data indexed in the adding list can be found in neither the memory cache nor the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described above proved reliable and efficient in our experiment, and the data loading time and file I/O time decreased sharply, especially as the rendering data gets larger.
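The disk-cache policy described above (append to a current large file, rotate when it reaches a size limit, delete the oldest file whole) can be modeled in a few lines. The sketch below keeps the "files" in memory purely for illustration; the class name, default sizes and dict-per-file representation are all hypothetical stand-ins for the real memory-mapped files:

```python
from collections import deque

class LargeFileDiskCache:
    """Toy model of the described disk cache: blobs are appended to the
    current "large file"; when it would exceed `file_limit` bytes a new
    file is started, and when more than `max_files` files exist the
    earliest file is deleted whole (never item by item)."""

    def __init__(self, file_limit=128 * 2**20, max_files=4):
        self.file_limit = file_limit
        self.max_files = max_files
        self.files = deque([{}])  # each "file" maps key -> blob
        self.sizes = deque([0])

    def put(self, key, blob):
        if self.sizes[-1] + len(blob) > self.file_limit:
            self.files.append({})     # rotate: open a fresh large file
            self.sizes.append(0)
            if len(self.files) > self.max_files:
                self.files.popleft()  # drop the earliest file whole
                self.sizes.popleft()
        self.files[-1][key] = blob
        self.sizes[-1] += len(blob)

    def get(self, key):
        for f in reversed(self.files):  # newest data wins
            if key in f:
                return f[key]
        return None
```

Because only the newest file ever accepts writes, all older files can be treated as read-only, which is what permits the highly asynchronous access the article describes.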
Andrievskaia, Olga; Tangorra, Erin
2014-12-01
Contamination of rendered animal by-products with central nervous system tissues (CNST) from animals with bovine spongiform encephalopathy is considered one of the vehicles of disease transmission. Removal from the animal feed chain of CNST originating from cattle of a specified age category, species-labeling of rendered meat products, and testing of rendered products for bovine CNST are tasks associated with the epidemiological control of bovine spongiform encephalopathy. A single-step TaqMan real-time reverse transcriptase (RRT) PCR assay was developed and evaluated for specific detection of bovine glial fibrillary acidic protein (GFAP) mRNA, a biomarker of bovine CNST, in rendered animal by-products. An internal amplification control, mammalian β-actin mRNA, was coamplified in the duplex RRT-PCR assay to monitor amplification efficiency, normalize amplification signals, and avoid false-negative results. The functionality of the GFAP mRNA RRT-PCR was assessed through analysis of laboratory-generated binary mixtures of bovine central nervous system (CNS) and muscle tissues treated under various thermal settings imitating industrial conditions. The assay was able to detect as little as 0.05% (wt/wt) bovine brain tissue in binary mixtures heat-treated at 110 to 130°C for 20 to 60 min. Further evaluation of the GFAP mRNA RRT-PCR assay involved samples of industrial rendered products of various species origin and composition obtained from commercial sources and rendering plants. Low amounts of bovine GFAP mRNA were detected in several bovine-rendered products, which was in agreement with the declared species composition. An accurate estimation of CNS tissue content in industrial rendered products was complicated due to the wide range of temperature and time settings in rendering protocols.
Nevertheless, the GFAP mRNA RRT-PCR assay may be considered for bovine CNS tissue detection in rendered products in combination with other available tools (for example, animal age verification) in inspection programs.
Towards a Three-Dimensional Near-Real Time Cloud Product for Aviation Safety and Weather Diagnoses
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Nguyen, Louis; Palikonda, Rabindra; Spangeberg, Douglas; Nordeen, Michele L.; Yi, Yu-Hong; Ayers, J. Kirk
2004-01-01
Satellite data have long been used for determining the extent of cloud cover and for estimating the properties at the cloud tops. The derived properties can also be used to estimate aircraft icing potential to improve the safety of air traffic in the region. Currently, cloud properties and icing potential are derived in near-real time over the United States of America (USA) from the Geostationary Operational Environmental Satellite (GOES) imagers at 75 W and 135 W. Traditionally, the results have been given in two dimensions because of the lack of knowledge about the vertical extent of clouds and the occurrence of overlapping clouds. Aircraft fly in a three-dimensional space and require vertical as well as horizontal information about clouds, their intensity, and their potential for icing. To improve the vertical component of the derived cloud and icing parameters, this paper explores various methods and datasets for filling in the three-dimensional space over the USA with cloud water.
DspaceOgreTerrain 3D Terrain Visualization Tool
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.
2012-01-01
DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is currently integrating the real-time EO/IR rendering engine of SE-Workbench called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. Recent evolutions mainly concern the realistic, physically based rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of plume static signatures and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data
NASA Astrophysics Data System (ADS)
Evans, J. D.; Valente, E. G.
2010-12-01
We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now the Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature and Vegetation Indices. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real-time data access regardless of increased data input rates or user demand, all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software.
Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sensors. A scalable architecture based on cloud computing ensures cost-effective, real-time processing and delivery of NPP and other data. Access via standard Web services maximizes its interoperability and usefulness.
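The twice-daily fluctuations described above invite a simple backlog-driven scaling rule. A hedged sketch of such a policy, with hypothetical names and thresholds (the paper does not publish its scaling logic):

```python
# Minimal autoscaling rule: grow the worker pool when the backlog of
# unprocessed satellite granules rises after an overpass, and shrink it
# back to an idle floor when the backlog drains. Names are hypothetical.
def desired_workers(backlog, granules_per_worker, min_workers=1, max_workers=20):
    """Workers needed to keep each worker's share under granules_per_worker."""
    need = -(-backlog // granules_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, need))

# An overpass delivers a burst of 120 granules; each worker clears 10 per cycle.
print(desired_workers(120, 10))  # 12
print(desired_workers(0, 10))    # 1  (idle floor between overpasses)
```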
Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to exchange location data between vehicles that are simultaneously driving autonomously. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
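The voxelization-plus-lowermost-heightmap step described above can be sketched roughly as follows. This is a simplified stand-in (a crude global height threshold replaces the paper's voxel-group analysis and neighbor-comparison minimization), with hypothetical parameters:

```python
# Sketch of the heightmap idea: quantize points into (x, y) voxel columns,
# keep only the lowermost z per column, then label columns near the minimum
# height as ground. The threshold rule is a simplification, not the paper's.
import numpy as np

def lowermost_heightmap(points, voxel=0.5):
    """Collapse an Nx3 point cloud to a dict {(ix, iy): min_z}."""
    hmap = {}
    for x, y, z in points:
        key = (int(np.floor(x / voxel)), int(np.floor(y / voxel)))
        if key not in hmap or z < hmap[key]:
            hmap[key] = z
    return hmap

def ground_cells(hmap, z_tolerance=0.3):
    """Cells whose lowermost height is near the global minimum count as ground."""
    z_min = min(hmap.values())
    return {c for c, z in hmap.items() if z - z_min <= z_tolerance}

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 1.5], [1.1, 0.2, 0.05]])
hm = lowermost_heightmap(pts)
print(sorted(ground_cells(hm)))  # [(0, 0), (2, 0)]
```

Eliminating overlap this way reduces N points to at most one height per column before any neighbor comparisons happen, which is where the real-time budget is won.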
Optical fibre multi-parameter sensing with secure cloud based signal capture and processing
NASA Astrophysics Data System (ADS)
Newe, Thomas; O'Connell, Eoin; Meere, Damien; Yuan, Hongwei; Leen, Gabriel; O'Keeffe, Sinead; Lewis, Elfed
2016-05-01
Recent advancements in cloud computing technologies in the context of optical and optical fibre-based systems are reported. The proliferation of real-time and multi-channel sensor systems represents significant growth in data volume. This, coupled with a growing need for security, presents many challenges and a huge opportunity for an evolutionary step in the widespread application of these sensing technologies. A tiered infrastructural system approach is adopted that is designed to facilitate the delivery of Optical Fibre-based "SENsing as a Service" (SENaaS). Within this infrastructure, novel optical sensing platforms, deployed within different environments, are interfaced with a Cloud-based backbone infrastructure which facilitates the secure collection, storage and analysis of real-time data. Feedback systems, which harness this data to effect a change within the monitored location/environment/condition, are also discussed. The cloud-based system presented here can also be used with chemical and physical sensors that require real-time data analysis, processing and feedback.
Virtual Acoustics: Evaluation of Psychoacoustic Parameters
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context encountered in day-to-day hearing. Several research efforts have produced results suggesting that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real time. However, the computational resources required for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head-movement cues; threshold data for early reflections and late reverberation; and the accuracy necessary for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, the analysis-synthesis of reverberation within "everyday spaces" (offices, conference rooms) will be contrasted with the commonly used paradigm of concert-hall spaces.
Near-Real-Time Cloud Auditing for Rapid Response
2013-10-01
cloud auditing, which provides timely evaluation results and rapid response, is the key to assuring the cloud. In this paper, we discuss security and... providers with possible automation of the audit, assertion, assessment, and assurance of their services. The Cloud Security Alliance (CSA [15]) was formed... monitoring tools, research literature, standards, and other resources related to IA (Information Assurance) metrics and IT auditing. In the following
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to a subsecond reconstruction time.
On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
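The SR model's core idea, approximating a target point cloud as a sparse linear combination of training clouds, can be illustrated with a generic lasso solver. This is not the authors' solver, just a sketch using iterative soft-thresholding (ISTA) on synthetic data:

```python
# Sparse regression sketch: find sparse weights w with T @ w ~= y by
# solving the lasso problem  min_w 0.5*||T w - y||^2 + lam*||w||_1
# via ISTA (gradient step followed by soft-thresholding).
import numpy as np

def ista(T, y, lam=0.1, n_iter=500):
    """Return sparse weights w such that T @ w approximates y."""
    step = 1.0 / np.linalg.norm(T, 2) ** 2          # 1/L, L = Lipschitz constant
    w = np.zeros(T.shape[1])
    for _ in range(n_iter):
        g = w - step * T.T @ (T @ w - y)             # gradient step on the L2 term
        w = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return w

rng = np.random.default_rng(0)
T = rng.standard_normal((60, 8))                     # 8 "training clouds", flattened
w_true = np.zeros(8); w_true[[1, 5]] = [1.0, -0.5]   # target uses only two of them
y = T @ w_true
w = ista(T, y, lam=0.01)
print(np.flatnonzero(np.abs(w) > 0.1))               # [1 5]
```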
High-power graphic computers for visual simulation: a real-time--rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) that provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Smith, William L., Jr.; Bedka, Kristopher M.; Nguyen, Louis; Palikonda, Rabindra; Hong, Gang; Trepte, Qing Z.; Chee, Thad; Scarino, Benjamin; Spangenberg, Douglas A.; Sun-Mack, S.; Fleeger, C.; Ayers, J. K.; Chang, F. L.; Heck, P. W.
2014-01-01
Cloud properties determined from satellite imager radiances provide a valuable source of information for nowcasting and weather forecasting. In recent years, it has been shown that assimilation of cloud top temperature, optical depth, and total water path can increase the accuracies of weather analyses and forecasts. Aircraft icing conditions can be accurately diagnosed from near-real-time (NRT) retrievals of cloud effective particle size, phase, and water path, providing valuable data for pilots. NRT retrievals of surface skin temperature can also be assimilated in numerical weather prediction models to provide more accurate representations of solar heating and longwave cooling at the surface, where convective initiation occurs. These and other applications are being exploited more frequently as the value of NRT cloud data becomes recognized. At NASA Langley, cloud properties and surface skin temperature are being retrieved in near-real time globally from both geostationary (GEO) and low-earth orbiting (LEO) satellite imagers for weather model assimilation and nowcasting for hazards such as aircraft icing. Cloud data from GEO satellites over North America are disseminated through NCEP, while those data and global LEO and GEO retrievals are disseminated from a Langley website. This paper presents an overview of the various available datasets, provides examples of their application, and discusses the use of the various datasets downstream. Future challenges and areas of improvement are also presented.
Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model
NASA Astrophysics Data System (ADS)
Wang, Yifan; Li, Weiran; Zhu, Qing
2018-04-01
This paper presents a real-time rendering method, based on the GPU programmable pipeline, for rendering a 3D scene in ink wash painting style. The method is divided into three main parts: first, render the ink properties of the 3D model by calculating its vertex curvature. Then, cache the ink properties to a paper structure and use an ink dispersion model, defined by reference to the theory of porous media, to simulate the dispersion of ink. Finally, convert the ink properties to pixel color information and render it to the screen. This method achieves better visual quality than previous methods.
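The porous-media-inspired dispersion step can be illustrated with a toy CPU version (the paper's method runs in the GPU pipeline; the grid, rate, and absorbency values here are hypothetical):

```python
# Toy ink-dispersion step on a 2D "paper" grid: each cell spreads ink to
# its 4 neighbours at a rate scaled by the paper's local absorbency.
# An explicit diffusion step with zero-flux boundaries conserves total ink.
import numpy as np

def disperse(ink, absorbency, rate=0.2):
    """One explicit diffusion step; absorbency in [0, 1] scales local spread."""
    padded = np.pad(ink, 1, mode="edge")   # edge padding = zero-flux boundary
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * ink)
    return ink + rate * absorbency * lap

ink = np.zeros((5, 5)); ink[2, 2] = 1.0    # one drop of ink
paper = np.full((5, 5), 0.8)               # uniform absorbency
out = disperse(ink, paper)
print(round(float(out.sum()), 6), bool(out[2, 2] < 1.0))  # 1.0 True
```

Iterating this step spreads the drop outward while conserving the total amount of ink, which is the qualitative behaviour an ink-wash dispersion model needs.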
Machine Learning for Flood Prediction in Google Earth Engine
NASA Astrophysics Data System (ADS)
Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.
2015-12-01
With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct, at significant financial cost. In addition, desktop modeling software and limited local server storage can limit the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable in this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machines (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated using MODIS-based flood maps created using the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
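A hedged sketch of the Random Forest side of the comparison, on synthetic stand-in features (the talk's actual inputs are Earth Engine imagery layers, not reproduced here; band names and the label rule are hypothetical):

```python
# Random Forest flood/dry pixel classifier on synthetic per-pixel features.
# The toy label rule (low elevation near a river floods) stands in for the
# MODIS-derived flood maps used as training labels in the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(-0.2, 0.9, n),     # NDVI (hypothetical feature)
    rng.uniform(0, 200, n),        # elevation, m
    rng.uniform(0, 5000, n),       # distance to river, m
])
y = ((X[:, 1] < 50) & (X[:, 2] < 1000)).astype(int)  # toy label rule

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:800], y[:800])
acc = clf.score(X[800:], y[800:])
print(round(acc, 2))   # high accuracy on this cleanly separable toy rule
```

The rarity of the positive class here (about 5% of pixels) mirrors the unbalanced-training-data difficulty the abstract flags for rare extreme events.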
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
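The registration named above is driven, in the classic Thirion formulation, by the demons force u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2) for fixed image f and moving image m. A minimal NumPy illustration of one update (not the authors' embedded implementation, which couples registration with rendering):

```python
# One classic demons update field on a 2D image pair: the force pushes the
# moving image's step edge toward the fixed image's edge.
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """Return one demons displacement update (uy, ux) per pixel."""
    gy, gx = np.gradient(fixed)                  # fixed-image gradient
    diff = moving - fixed                        # intensity mismatch
    denom = gx**2 + gy**2 + diff**2 + eps        # Thirion's stabilised denominator
    return diff * gy / denom, diff * gx / denom

fixed = np.zeros((5, 5)); fixed[:, 2:] = 1.0     # step edge at column 2
moving = np.zeros((5, 5)); moving[:, 3:] = 1.0   # same edge, shifted right
uy, ux = demons_step(fixed, moving)
print(bool(ux[2, 2] < 0))  # True: edge is pushed left, toward the fixed image
```

Iterating this update (with smoothing of the field between iterations) is what a full demons registration loop does; sign conventions for which image the field warps vary by implementation.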
A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.
Wang, Lujia; Liu, Ming; Meng, Max Q-H
2017-02-01
Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks, since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt a decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
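A much-simplified sketch of an auction round in which link quality scales bids (the paper's LQM mechanism is hierarchical and considerably richer; names and numbers here are hypothetical):

```python
# Toy auction round: each robot's bid for a task is scaled by its link
# quality, so robots on weak ad hoc links win fewer tasks, which is the
# role the link quality indicator plays in the LQM mechanism.
def lqm_auction(bids, link_quality):
    """bids[r][t]: robot r's bid for task t; link_quality[r] in [0, 1].
    Returns {task: robot}, giving each task to the best effective bidder."""
    n_tasks = len(next(iter(bids.values())))
    assignment = {}
    for t in range(n_tasks):
        assignment[t] = max(bids, key=lambda r: bids[r][t] * link_quality[r])
    return assignment

bids = {"r1": [5.0, 2.0], "r2": [4.0, 6.0]}
quality = {"r1": 0.9, "r2": 0.5}   # r2 has a weak ad hoc link
print(lqm_auction(bids, quality))  # {0: 'r1', 1: 'r2'}
```

Note that r2's higher raw bid on task 0 loses once its weak link discounts it; a full mechanism would also handle one-robot-per-task constraints and repeated rounds.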
Is There Computer Graphics after Multimedia?
ERIC Educational Resources Information Center
Booth, Kellogg S.
Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…
Stepping Into Science Data: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.
2017-12-01
Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM) mission. Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and interdisciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360° content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.
Knowledge Reasoning with Semantic Data for Real-Time Data Processing in Smart Factory
Wang, Shiyong; Li, Di; Liu, Chengliang
2018-01-01
The application of high-bandwidth networks and cloud computing in manufacturing systems will be followed by mass data. Industrial data analysis plays important roles in condition monitoring, performance optimization, flexibility, and transparency of the manufacturing system. However, the currently existing architectures are mainly for offline data analysis, not suitable for real-time data processing. In this paper, we first define the smart factory as a cloud-assisted and self-organized manufacturing system in which physical entities such as machines, conveyors, and products organize production through intelligent negotiation, and the cloud supervises this self-organized process for fault detection and troubleshooting based on data analysis. Then, we propose a scheme to integrate knowledge reasoning and semantic data, where the reasoning engine processes the ontology model with real-time semantic data coming from the production process. Based on these ideas, we build a benchmarking system for a smart candy packing application that supports direct consumer customization and flexible hybrid production, and the data are collected and processed in real time for fault diagnosis and statistical analysis. PMID:29415444
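The rule-based reasoning over semantic data described above can be caricatured as follows; the rules and field names are hypothetical, not taken from the paper:

```python
# Caricature of a reasoning engine for fault diagnosis: declarative rules
# applied to a stream of semantically tagged readings from the packing line.
RULES = [
    # (condition over one semantic reading, diagnosis)
    (lambda r: r["type"] == "conveyor" and r["speed"] == 0 and r["powered"],
     "conveyor stalled"),
    (lambda r: r["type"] == "gripper" and r["temperature"] > 80,
     "gripper overheating"),
]

def diagnose(readings):
    """Apply every rule to every reading; return the list of diagnoses."""
    return [msg for r in readings for cond, msg in RULES if cond(r)]

stream = [
    {"type": "conveyor", "speed": 0, "powered": True},
    {"type": "gripper", "temperature": 65},
]
print(diagnose(stream))  # ['conveyor stalled']
```

A production system would express the rules against an ontology (e.g. in OWL/SWRL) rather than Python lambdas, but the pattern, conditions over typed semantic readings producing diagnoses, is the same.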
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools were used to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
Computing and Visualizing Reachable Volumes for Maneuvering Satellites
NASA Astrophysics Data System (ADS)
Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.
2011-09-01
Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instant, the distribution of the "point-cloud" of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
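The Monte Carlo sampling idea described in this abstract can be sketched in a few lines. The sketch below is a deliberately simplified one-dimensional proxy (planar, tangential-only burns mapped through the vis-viva equation), not the paper's full 3D propagation; the function name and constants are illustrative.

```python
import math
import random

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def reachable_sma_samples(r0, dv_max, n=10000, seed=0):
    """Monte Carlo sketch of a Reachable Volume computation (1D proxy).

    Starting from a circular orbit of radius r0, sample random thrust
    magnitudes (0..dv_max) applied along or against the velocity vector
    (a planar simplification of the paper's randomized thrust vectors),
    and map each burn to the resulting semi-major axis via vis-viva.
    The spread of the samples stands in for the reachable "point-cloud".
    """
    rng = random.Random(seed)
    v_circ = math.sqrt(MU / r0)          # circular-orbit speed
    samples = []
    for _ in range(n):
        dv = rng.uniform(0.0, dv_max) * rng.choice((-1.0, 1.0))
        v = v_circ + dv                   # tangential burn
        # vis-viva: v^2 = MU * (2/r - 1/a)  =>  a = 1 / (2/r - v^2/MU)
        a = 1.0 / (2.0 / r0 - v * v / MU)
        samples.append(a)
    return samples

sma = reachable_sma_samples(r0=7.0e6, dv_max=100.0)
print(min(sma), max(sma))  # envelope of reachable semi-major axes
```

The envelope brackets the initial orbit, as expected: retrograde burns shrink the semi-major axis and prograde burns grow it.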
ProteinShader: illustrative rendering of macromolecules
Weber, Joseph R
2009-01-01
Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space-filling or ball-and-stick models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processing unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
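The spherical linear interpolation mentioned in the abstract is a standard formula worth spelling out. The sketch below is the generic slerp between two unit vectors, not code from ProteinShader itself; the function name is illustrative.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit vectors.

    ProteinShader combines Hermite and spherical linear interpolation
    to sweep smoothly rotating ribbons and tubes; slerp interpolates
    along the great-circle arc between the two directions at constant
    angular speed, avoiding the speed distortion of plain lerp.
    """
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    theta = math.acos(dot)
    if theta < 1e-9:                      # vectors nearly parallel
        return q0
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

print(slerp((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5))  # midpoint of the arc
```

At t = 0.5 between two orthogonal unit vectors the result is (√2/2, √2/2, 0), i.e. still a unit vector, which is what makes slerp suitable for interpolating frame orientations along a ribbon.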
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684
Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
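The voxel quantization and lowermost-heightmap steps described above can be sketched compactly. The code below is a minimal illustration, not the paper's implementation; the function name, voxel size, and height tolerance are assumptions.

```python
def segment_ground(points, voxel=0.5, height_tol=0.3):
    """Sketch of voxel-based ground segmentation from a sparse 3D point cloud.

    1. Quantize each point into a voxel column keyed by its (x, y) cell.
    2. Reduce each column to its lowermost height (the paper's
       "lowermost heightmap"), which removes overlapping data.
    3. Label a point as ground if it lies within height_tol of the
       lowermost height in its column; taller returns are obstacles.
    """
    lowest = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    labels = []
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        labels.append(z - lowest[key] <= height_tol)
    return labels

pts = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.05), (0.15, 0.2, 1.5)]  # flat ground + obstacle
print(segment_ground(pts))  # [True, True, False]
```

Real LiDAR frames would of course need the neighbor-voxel comparisons the paper minimizes; this sketch only shows the column-wise reduction that makes the per-frame cost low.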
High-quality slab-based intermixing method for fusion rendering of multiple medical objects.
Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil
2016-01-01
The visualization of multiple 3D objects is increasingly required in medical applications. Due to heterogeneity in data representation or configuration, it is difficult to render multiple medical objects efficiently and in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects that preserves real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce the virtual zSlab, which extends an infinitely thin boundary (such as a polygonal object) into a slab of finite thickness. Finally, based on the virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality than conventional approaches. In addition, the proposed intermixing scheme provides high-quality results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications; these case studies demonstrate that the method has the notable advantages of rendering independency and reusability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Near-Real-Time Satellite Cloud Products for Icing Detection and Aviation Weather over the USA
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Smith, William L., Jr.; Nguyen, Louis; Murray, J. J.; Heck, Patrick W.; Khaiyer, Mandana M.
2003-01-01
A set of physically based retrieval algorithms has been developed to derive from multispectral satellite imagery a variety of cloud properties that can be used to diagnose icing conditions when upper-level clouds are absent. The algorithms are being applied in near-real time to the Geostationary Operational Environmental Satellite (GOES) data over Florida, the Southern Great Plains, and the midwestern USA. The products are available in image and digital formats on the world-wide web. The analysis system is being upgraded to analyze GOES data over the CONUS. Validation, 24-hour processing, and operational issues are discussed.
NASA Astrophysics Data System (ADS)
Yorks, J. E.; McGill, M. J.; Nowottnick, E. P.
2015-12-01
Plumes from hazardous events, such as ash from volcanic eruptions and smoke from wildfires, can have a profound impact on the climate system, human health and the economy. Global aerosol transport models are very useful for tracking hazardous plumes and predicting the transport of these plumes. However aerosol vertical distributions and optical properties are a major weakness of global aerosol transport models, yet a key component of tracking and forecasting smoke and ash. The Cloud-Aerosol Transport System (CATS) is an elastic backscatter lidar designed to provide vertical profiles of clouds and aerosols while also demonstrating new in-space technologies for future Earth Science missions. CATS has been operating on the Japanese Experiment Module - Exposed Facility (JEM-EF) of the International Space Station (ISS) since early February 2015. The ISS orbit provides more comprehensive coverage of the tropics and mid-latitudes than sun-synchronous orbiting sensors, with nearly a three-day repeat cycle. The ISS orbit also provides CATS with excellent coverage over the primary aerosol transport tracks, mid-latitude storm tracks, and tropical convection. Data from CATS are used to derive properties of clouds and aerosols including: layer height, layer thickness, backscatter, optical depth, extinction, and depolarization-based discrimination of particle type. The measurements of atmospheric clouds and aerosols provided by the CATS payload have demonstrated several science benefits. CATS provides near-real-time observations of cloud and aerosol vertical distributions that can be used as inputs to global models. The infrastructure of the ISS allows CATS data to be captured, transmitted, and received at the CATS ground station within several minutes of data collection. The CATS backscatter and vertical feature mask are part of a customized near real time (NRT) product that the CATS processing team produces within 6 hours of collection. The continuous near real time CATS data availability is an extraordinary capability and permits vertical profiles of aerosols to flow directly into any aerosol transport model.
Progress in Near Real-Time Volcanic Cloud Observations Using Satellite UV Instruments
NASA Astrophysics Data System (ADS)
Krotkov, N. A.; Yang, K.; Vicente, G.; Hughes, E. J.; Carn, S. A.; Krueger, A. J.
2011-12-01
Volcanic clouds from explosive eruptions can wreak havoc in many parts of the world, as exemplified by the 2010 eruption at the Eyjafjöll volcano in Iceland, which caused widespread disruption to air traffic and resulted in economic impacts across the globe. A suite of satellite-based systems offer the most effective means to monitor active volcanoes and to track the movement of volcanic clouds globally, providing critical information for aviation hazard mitigation. Satellite UV sensors, as part of this suite, have a long history of making unique near-real-time (NRT) measurements of sulfur dioxide (SO2) and ash (aerosol index) in volcanic clouds to supplement operational volcanic ash monitoring. Recently, a NASA application project has shown that the use of near-real-time (NRT, i.e., not older than 3 h) Aura/OMI satellite data produces a marked improvement in volcanic cloud detection using SO2 combined with Aerosol Index (AI) as a marker for ash. An operational online NRT OMI AI and SO2 image and data product distribution system was developed in collaboration with the NOAA Office of Satellite Data Processing and Distribution. Automated volcanic eruption alarms, and the production of volcanic cloud subsets for multiple regions are provided through the NOAA website. The data provide valuable information in support of the U.S. Federal Aviation Administration goal of a safe and efficient National Air Space. In this presentation, we will highlight the advantages of UV techniques and describe the advances in volcanic SO2 plume height estimation and enhanced volcanic ash detection using hyper-spectral UV measurements, illustrated with Aura/OMI observations of recent eruptions. We will share our plan to provide near-real-time volcanic cloud monitoring service using the Ozone Mapping and Profiler Suite (OMPS) on the Joint Polar Satellite System (JPSS).
Cloud-ECG for real time ECG monitoring and analysis.
Xia, Henian; Asif, Irfan; Zhao, Xiaopeng
2013-06-01
Recent advances in mobile technology and cloud computing have inspired numerous designs of cloud-based health care services and devices. Within the cloud system, medical data can be collected and transmitted automatically to medical professionals from anywhere, and feedback can be returned to patients through the network. In this article, we developed a cloud-based system for clients with mobile devices or web browsers. Specifically, we aim to address the issues regarding the usefulness of the ECG data collected from patients themselves. Algorithms for ECG enhancement, ECG quality evaluation and ECG parameter extraction were implemented in the system. The system was demonstrated by a use case, in which ECG data was uploaded to the web server from a mobile phone at a certain frequency and analysis was performed in real time using the server. The system has been proven to be functional, accurate and efficient. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hughes, E. J.; Yorks, J.; Krotkov, N. A.; da Silva, A. M.; Mcgill, M.
2016-01-01
An eruption of Italian volcano Mount Etna on 3 December 2015 produced fast-moving sulfur dioxide (SO2) and sulfate aerosol clouds that traveled across Asia and the Pacific Ocean, reaching North America in just 5 days. The Ozone Profiler and Mapping Suite's Nadir Mapping UV spectrometer aboard the U.S. National Polar-orbiting Partnership satellite observed the horizontal transport of the SO2 cloud. Vertical profiles of the colocated volcanic sulfate aerosols were observed between 11.5 and 13.5 km by the new Cloud Aerosol Transport System (CATS) space-based lidar aboard the International Space Station. Backward trajectory analysis estimates the SO2 cloud altitude at 7-12 km. Eulerian model simulations of the SO2 cloud constrained by CATS measurements produced more accurate dispersion patterns compared to those initialized with the back trajectory height estimate. The near-real-time data processing capabilities of CATS are unique, and this work demonstrates the use of these observations to monitor and model volcanic clouds.
Analysis of the new health management based on health internet of things and cloud computing
NASA Astrophysics Data System (ADS)
Liu, Shaogang
2018-05-01
With the development and application of the Internet of Things and cloud technology in the medical field, a higher level of exploration space opens up for human health management. By analyzing Internet of Things and cloud technologies, this paper studies a new form of health management system that matches the current social and technical level, and explores its system architecture, characteristics, and applications. The new IoT- and cloud-based health management platform achieves real-time monitoring and prediction of human health: information gathered through a variety of sensors and wireless networks is transmitted to the monitoring system, analyzed by a software model, and used to derive targeted prevention and treatment measures, realizing real-time, intelligent health management.
Looking Down Through the Clouds – Optical Attenuation through Real-Time Clouds
NASA Astrophysics Data System (ADS)
Burley, J.; Lazarewicz, A.; Dean, D.; Heath, N.
Detecting and identifying nuclear explosions in the atmosphere and on the surface of the Earth is critical for the Air Force Technical Applications Center (AFTAC) treaty monitoring mission. Optical signals, from surface or atmospheric nuclear explosions detected by satellite sensors, are attenuated by the atmosphere and clouds. Clouds present a particularly complex challenge as they cover up to seventy percent of the earth's surface. Moreover, their highly variable and diverse nature requires physics-based modeling. Determining the attenuation for each optical ray-path is uniquely dependent on the source geolocation, the specific optical transmission characteristics along that ray path, and sensor detection capabilities. This research details a collaborative AFTAC and AFIT effort to fuse worldwide weather data, from a variety of sources, to provide near-real-time profiles of atmospheric and cloud conditions and the resulting radiative transfer analysis for virtually any wavelength(s) of interest from source to satellite. AFIT has developed a means to model global clouds using the U.S. Air Force’s World Wide Merged Cloud Analysis (WWMCA) cloud data in a new toolset that enables radiance calculations through clouds from UV to RF wavelengths.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor for achieving high-performance haptic rendering, which imposes a stringent real-time requirement upon the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computation cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system using the Delta Haptic Device.
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.
Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resources and time span size. The buffers are read by an R-based statistical system. Each buffer's data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.
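The normality-based model dispatch described above can be sketched as follows. This is an illustrative simplification: a Jarque-Bera-style statistic on skewness and excess kurtosis stands in for whatever normality test the paper uses, and the threshold and function names are assumptions.

```python
import math
import random

def choose_model(buffer):
    """Route a utilization buffer to ARIMA (Gaussian) or AR-NN (non-Gaussian).

    Computes sample skewness and excess kurtosis and combines them into a
    Jarque-Bera-style statistic; the 5.99 cutoff is the chi^2(2) 95%
    quantile. This heuristic only illustrates the dispatch step; model
    fitting (AIC/NIC selection) is omitted.
    """
    n = len(buffer)
    mean = sum(buffer) / n
    var = sum((v - mean) ** 2 for v in buffer) / n
    std = math.sqrt(var) or 1.0          # guard against constant buffers
    skew = sum(((v - mean) / std) ** 3 for v in buffer) / n
    kurt = sum(((v - mean) / std) ** 4 for v in buffer) / n - 3.0
    jb = n / 6.0 * (skew ** 2 + kurt ** 2 / 4.0)
    return "ARIMA" if jb < 5.99 else "AR-NN"

# A spiky, heavy-tailed trace clearly fails the normality check
print(choose_model([0.0] * 99 + [100.0]))  # "AR-NN"

rng = random.Random(1)
bell_shaped = [rng.gauss(50.0, 5.0) for _ in range(200)]
print(choose_model(bell_shaped))
```

In the paper's scheme this decision is made per buffer and per resource type, so the same server can be predicted with ARIMA for CPU and AR-NN for network I/O.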
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were used throughout. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
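The core DRR operation, integrating attenuation along rays through a CT volume, can be sketched on the CPU. The sketch below uses orthographic rays along one axis and a sub-sampling step, which only hints at the paper's perspective raycasting and GPU wobbled splatting; names and the toy volume are illustrative.

```python
def render_drr(volume, step=1):
    """CPU sketch of DRR generation by orthographic raycasting.

    volume: nested lists volume[z][y][x] of attenuation values.
    Each output pixel integrates attenuation along one z-axis ray;
    `step` sub-samples the DRR grid, mimicking (very loosely) the
    speed-oriented simplifications described in the abstract.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    drr = []
    for y in range(0, ny, step):
        row = []
        for x in range(0, nx, step):
            # line integral of attenuation along the ray through (x, y)
            row.append(sum(volume[z][y][x] for z in range(nz)))
        drr.append(row)
    return drr

vol = [[[1.0] * 4 for _ in range(4)] for _ in range(3)]  # uniform 4x4x3 "CT"
print(render_drr(vol))  # every pixel integrates 3 unit-attenuation voxels -> 3.0
```

A uniform volume yields a flat DRR, which is a convenient sanity check; real DRRs additionally apply perspective projection and an exponential attenuation model.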
Foliage penetration by using 4-D point cloud data
NASA Astrophysics Data System (ADS)
Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.
2012-06-01
Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time. We propose an algorithm capable of removing vegetation and other objects that may obscure concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We use simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.
NASA Astrophysics Data System (ADS)
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, even without complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100×100 rays with continuous depth change.
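The "traditional point cloud CGH" used as the baseline above sums spherical waves from every object point at every hologram pixel, which is why it scales so poorly. A minimal sketch of that baseline (all parameter values are illustrative, not taken from the paper):

```python
import cmath
import math

def point_cloud_hologram(points, width, height, pitch, wavelength):
    """Sketch of a traditional point-cloud CGH.

    Each hologram pixel accumulates the complex field of the spherical
    wave emitted by every object point (x, y, z, amplitude); the phase
    of the summed field is the hologram value. Cost is O(pixels * points),
    the bottleneck the ray-tracing method in the abstract avoids.
    """
    k = 2 * math.pi / wavelength
    holo = []
    for j in range(height):
        row = []
        for i in range(width):
            x = (i - width / 2) * pitch
            y = (j - height / 2) * pitch
            field = 0j
            for px, py, pz, amp in points:
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                field += amp * cmath.exp(1j * k * r) / r  # spherical wave
            row.append(cmath.phase(field))
        holo.append(row)
    return holo

pts = [(0.0, 0.0, 0.01, 1.0)]  # one point source 10 mm behind the hologram
h = point_cloud_hologram(pts, width=8, height=8, pitch=1e-5, wavelength=532e-9)
```

Even this toy 8×8 hologram makes the quadratic cost visible: doubling both the pixel count and the point count quadruples the work, whereas ray launching parallelizes per pixel.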
Space-time light field rendering.
Wang, Huamin; Sun, Mingxuan; Yang, Ruigang
2007-01-01
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
NASA-Langley Web-Based Operational Real-time Cloud Retrieval Products from Geostationary Satellites
NASA Technical Reports Server (NTRS)
Palikonda, Rabindra; Minnis, Patrick; Spangenberg, Douglas A.; Khaiyer, Mandana M.; Nordeen, Michele L.; Ayers, Jeffrey K.; Nguyen, Louis; Yi, Yuhong; Chan, P. K.; Trepte, Qing Z.;
2006-01-01
At NASA Langley Research Center (LaRC), radiances from multiple satellites are analyzed in near real-time to produce cloud products over many regions on the globe. These data are valuable for many applications such as diagnosing aircraft icing conditions and model validation and assimilation. This paper presents an overview of the multiple products available, summarizes the content of the online database, and details web-based satellite browsers and tools to access satellite imagery and products.
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results show that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
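The Sun-position step described above (from user location and time of day) can be sketched with a standard low-precision solar ephemeris. This is a generic textbook approximation, not the authors' implementation; the function name and the Cooper declination formula are illustrative choices.

```python
import math
from datetime import datetime, timezone

def sun_position(lat_deg, lon_deg, when_utc):
    """Approximate solar elevation and azimuth (degrees) from location and UTC time.

    Low-precision ephemeris (roughly degree-level accuracy); a sketch of the
    kind of calculation the paper describes, not the authors' implementation.
    """
    day = when_utc.timetuple().tm_yday
    frac_hour = when_utc.hour + when_utc.minute / 60.0
    # Solar declination (Cooper's approximation).
    decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day) / 365.0))
    # Hour angle: 15 degrees per hour from local solar noon (longitude as time offset).
    solar_time = frac_hour + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))
    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(decl) +
                     math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    # Azimuth measured clockwise from north.
    az = math.atan2(-math.sin(hour_angle),
                    math.cos(lat) * math.tan(decl) - math.sin(lat) * math.cos(hour_angle))
    return math.degrees(elev), (math.degrees(az) + 360.0) % 360.0
```

In an AR pipeline this direction would then be combined with the device's gyroscope/compass pose to place the virtual light source.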
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to synchronize cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks, and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for real-time 3-D fluorescence visualization of the oral cavity. We aim to achieve a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Multilayered nonuniform sampling for three-dimensional scene representation
NASA Astrophysics Data System (ADS)
Lin, Huei-Yung; Xiao, Yu-Hua; Chen, Bo-Ren
2015-09-01
The representation of a three-dimensional (3-D) scene is essential in multiview imaging technologies. We present a unified geometry and texture representation based on global resampling of the scene. A layered data map representation with a distance-dependent nonuniform sampling strategy is proposed. It is capable of increasing the details of the 3-D structure locally and is compact in size. The 3-D point cloud obtained from the multilayered data map is used for view rendering. For any given viewpoint, image synthesis with different levels of detail is carried out using the quadtree-based nonuniformly sampled 3-D data points. Experimental results are presented using the 3-D models of reconstructed real objects.
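The distance-dependent nonuniform sampling idea can be illustrated with a small quadtree-style subdivision over a distance map: far regions tolerate a larger absolute depth spread, so they end up in coarser tiles. This is a minimal sketch under assumed parameters (`rel_tol`, `min_size`), not the paper's actual layered data-map construction.

```python
def nonuniform_tiles(dist, x0, y0, size, rel_tol=0.05, min_size=2):
    """Recursively split a square tile of a distance map until the spread of
    distances inside it is small relative to its mean distance.

    Because the split tolerance scales with distance, nearby structure is
    sampled densely while far regions stay in large coarse tiles; `dist` is a
    row-major list of lists of per-pixel distances.
    """
    vals = [dist[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    mean = sum(vals) / len(vals)
    spread = max(vals) - min(vals)
    if size <= min_size or spread <= rel_tol * mean:
        return [(x0, y0, size)]          # one leaf tile = one sample cell
    half = size // 2
    tiles = []
    for dy in (0, half):
        for dx in (0, half):
            tiles += nonuniform_tiles(dist, x0 + dx, y0 + dy, half, rel_tol, min_size)
    return tiles
```

Each returned leaf would then contribute one 3-D sample point (or one quadtree node for level-of-detail view synthesis).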
Comparison of Cloud Properties from CALIPSO-CloudSat and Geostationary Satellite Data
NASA Technical Reports Server (NTRS)
Nguyen, L.; Minnis, P.; Chang, F.; Winker, D.; Sun-Mack, S.; Spangenberg, D.; Austin, R.
2007-01-01
Cloud properties are being derived in near-real time from geostationary satellite imager data for a variety of weather and climate applications and research. Assessment of the uncertainties in each of the derived cloud parameters is essential for confident use of the products. Determination of cloud amount, cloud top height, and cloud layering is especially important for using these real-time products for applications such as aircraft icing condition diagnosis and numerical weather prediction model assimilation. Furthermore, the distribution of clouds as a function of altitude has become a central component of efforts to evaluate climate model cloud simulations. Validation of those parameters has been difficult except over limited areas where ground-based active sensors, such as cloud radars or lidars, have been available on a regular basis. Retrievals of cloud properties are sensitive to the surface background, time of day, and the clouds themselves. Thus, it is essential to assess the geostationary satellite retrievals over a variety of locations. The availability of cloud radar data from CloudSat and lidar data from CALIPSO makes it possible to perform those assessments over each geostationary domain at 0130 and 1330 LT. In this paper, CloudSat and CALIPSO data are matched with contemporaneous Geostationary Operational Environmental Satellite (GOES), Multi-functional Transport Satellite (MTSAT), and Meteosat-8 data. Unlike comparisons with cloud products derived from A-Train imagers, this study considers comparisons of nadir active sensor data with off-nadir retrievals. These matched data are used to determine the uncertainties in cloud-top heights and cloud amounts derived from the geostationary satellite data using the Clouds and the Earth's Radiant Energy System (CERES) cloud retrieval algorithms. The CERES multi-layer cloud detection method is also evaluated to determine its accuracy and limitations in the off-nadir mode.
The results will be useful for constraining the use of the passive retrieval data in models and for improving the accuracy of the retrievals.
Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I
NASA Astrophysics Data System (ADS)
Gonthier, David L.; Veron, Harry
1998-04-01
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application can run in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, though not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare is paired with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in real-time collaborative simulation with other platforms is thus achieved.
The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration
NASA Astrophysics Data System (ADS)
Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.
2018-04-01
Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. To address the low monitoring frequency and poor data currency of current remote sensing monitoring, this paper presents a cloud-integrated real-time monitoring service platform for land supervision. The platform increases monitoring frequency by comprehensively acquiring domestic satellite image data, and accelerates remote sensing image processing with an intelligent dynamic processing technology for multi-source images. A pilot application in the Jinan Bureau of State Land Supervision demonstrated that this real-time monitoring approach to land supervision is feasible. In addition, real-time monitoring and early-warning functions were implemented for illegal land use, permanent basic farmland protection, and boundary breaches in urban development. The application has achieved remarkable results.
Road Risk Modeling and Cloud-Aided Safety-Based Route Planning.
Li, Zhaojian; Kolmanovsky, Ilya; Atkins, Ella; Lu, Jianbo; Filev, Dimitar P; Michelini, John
2016-11-01
This paper presents a safety-based route planner that exploits vehicle-to-cloud-to-vehicle (V2C2V) connectivity. Time and road risk index (RRI) are considered as metrics to be balanced based on user preference. To evaluate road segment risk, a road and accident database from the highway safety information system is mined with a hybrid neural network model to predict RRI. Real-time factors such as time of day, day of the week, and weather are included as correction factors to the static RRI prediction. With real-time RRI and expected travel time, route planning is formulated as a multiobjective network flow problem and further reduced to a mixed-integer programming problem. A V2C2V implementation of our safety-based route planning approach is proposed to facilitate access to real-time information and computing resources. A real-world case study, route planning through the city of Columbus, Ohio, is presented. Several scenarios illustrate how the "best" route can be adjusted to favor time versus safety metrics.
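The time/risk trade-off described above can be scalarised into a single edge weight and solved with Dijkstra's algorithm. This is a minimal sketch: the paper's actual formulation is a multiobjective network-flow / mixed-integer program, and the `graph` structure here is invented for illustration.

```python
import heapq

def safest_fastest_route(graph, start, goal, alpha=0.5):
    """Dijkstra on a combined edge cost alpha*time + (1-alpha)*risk.

    `graph` maps node -> list of (neighbour, travel_time, risk_index);
    alpha=1 favours travel time only, alpha=0 favours safety only.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, t, r in graph.get(u, []):
            nd = d + alpha * t + (1.0 - alpha) * r
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))
```

Sweeping `alpha` from 0 to 1 traces out the time-versus-safety trade-off the paper exposes to the user.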
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are better represented by tactile properties, such as stiffness, roughness, and friction, than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. The conversion from a single 2D image to a 3D surface takes into account human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces using only a haptic device (Falcon); no visual cue was provided. The results indicate that our system provides sufficient performance to render discernibly different tactile sensations for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purpose of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetics. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.
2015-12-01
According to changes in aircraft certification rules, instrumentation has to be developed to alert flight crews of potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: it is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be sped up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The algorithm proposed is based on the detection of each interferogram using the gradient pair vector method. This method is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed-phase clouds, and finally tested and validated in laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.
A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data
NASA Astrophysics Data System (ADS)
Evans, J. D.; Valente, E. G.; Chettri, S. S.
2011-12-01
We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. 
This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialised area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed with wxWidgets; at the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world are created faster than ever through point clouds and their derivatives, their effective use by professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venner, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; O'Brien, Raymond
2015-01-01
Cloud computing capabilities have rapidly expanded within the private sector, offering new opportunities for meteorological applications. Collaborations between NASA Marshall, NASA Ames, and contractor partners led to evaluations of private (NASA) and public (Amazon) cloud resources for executing short-term NWP systems. These activities helped the Marshall team further understand cloud capabilities and benchmark the use of cloud resources for NWP and other applications.
Person detection and tracking with a 360° lidar system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2017-10-01
Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches to automated person detection in point clouds. While most techniques show acceptable detection results, the computation time is often a limiting factor, especially given the amount of data in panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. This paper presents a fast, real-time-capable algorithm for person detection, classification and tracking in panoramic point clouds.
NASA Astrophysics Data System (ADS)
Jumelet, Julien; Bekki, Slimane; Keckhut, Philippe
2017-04-01
We present a high-resolution isentropic microphysical transport model dedicated to stratospheric aerosols and clouds. The model is based on the MIMOSA model (Modélisation Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection) and adds several modules: a fully explicit size-resolving microphysical scheme to transport the aerosol granulometry as passive tracers, and an optical module able to calculate the scattering and extinction properties of particles at given wavelengths. Originally designed for polar stratospheric clouds (composed of sulfuric acid, nitric acid and water vapor), the model is fully capable of rendering the structure and properties of volcanic plumes at the finer scales, assuming complete SO2 oxidation. This link between microphysics and optics also enables the model to take advantage of spaceborne lidar data (i.e. CALIOP) by calculating the 532 nm aerosol backscatter coefficient and taking it as the control variable to provide microphysical constraints during the transport. This methodology has been applied to simulate volcanic plumes during relatively recent volcanic eruptions, from the 2010 Merapi to the 2015 Calbuco eruption. Optical calculations are also used for direct comparisons between the model and ground-based lidar stations for validation as well as characterization purposes. We present the model and the simulation results, along with a focus on the sensitivity to initialisation parameters, considering the need for quasi-real-time modelling and forecasts in the case of future eruptions.
Applications for Near-Real Time Satellite Cloud and Radiation Products
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Palikonda, Rabindra; Chee, Thad L.; Bedka, Kristopher M.; Smith, W.; Ayers, Jeffrey K.; Benjamin, Stanley; Chang, F.-L.; Nguyen, Louis; Norris, Peter;
2012-01-01
At NASA Langley Research Center, a variety of cloud, clear-sky, and radiation products are being derived at scales from regional to global using geostationary satellite (GEOSat) and low-Earth-orbiting (LEOSat) imager data. With growing availability, these products are becoming increasingly valuable for weather forecasting and nowcasting. These products include, but are not limited to, cloud-top and base heights, cloud water path and particle size, cloud temperature and phase, surface skin temperature and albedo, and top-of-atmosphere radiation budget. Some of these data products are currently assimilated operationally in a numerical weather prediction model. Others are used unofficially for nowcasting, while testing is underway for other applications. These applications include the use of cloud water path in an NWP model, cloud optical depth for detecting convective initiation in cirrus-filled skies, and aircraft icing condition diagnosis, among others. This paper briefly describes a currently operating system that analyzes data from GEOSats around the globe (GOES, Meteosat, MTSAT, FY-2) and LEOSats (AVHRR and MODIS) and makes the products available in near-real time through a variety of media. Current and potential future uses of these products are discussed.
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
Efficient scene management of virtual environments is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods. Using the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-oriented quadtree structure is created for traversal and rendering. In addition, a collaborative update relationship between these two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
EarthScape, a Multi-Purpose Interactive 3D Globe Viewer for Hybrid Data Visualization and Analysis
NASA Astrophysics Data System (ADS)
Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.
2015-08-01
The hybrid visualization and interaction tool EarthScape is presented here. The software can simultaneously display LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data, and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open-source libraries such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multi-source geo-referenced video streams. Once all these components are included, EarthScape will be a multi-purpose platform providing data analysis, hybrid visualization and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.
Realtime Compositing of Procedural Facade Textures on the GPU
NASA Astrophysics Data System (ADS)
Krecklau, L.; Kobbelt, L.
2011-09-01
The real-time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set fully resides in the memory of the graphics card. This typically results in a low-quality virtual city model. Alternatively, a streaming algorithm can load the high-quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization scheme that produces a large variety of facades while the grammar description itself remains small. By using a ray-casting approach, we are able to trace through transparent windows, revealing procedurally generated rooms, which further contributes to the realism of the rendering. The presented method enables real-time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.
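The per-cell randomisation idea above (many distinct facades from a small grammar, with no stored per-building textures) can be sketched by hashing each facade cell's position to pick an atlas tile deterministically. The function and its arguments are hypothetical illustrations, not the paper's GPU shader.

```python
import hashlib

def pick_tile(building_id, floor, column, tile_choices):
    """Deterministically pick an atlas tile (e.g. a window variant) for one
    facade cell by hashing its position.

    The same cell always yields the same tile, so facades look varied without
    any per-building texture storage; only the small atlas and the grammar
    need to be kept in memory.
    """
    h = hashlib.md5(f"{building_id}:{floor}:{column}".encode()).digest()
    return tile_choices[h[0] % len(tile_choices)]
```

On the GPU the same effect is achieved with an in-shader hash of the fragment's facade coordinates, so compositing happens at render time.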
A new task scheduling algorithm based on value and time for cloud platform
NASA Astrophysics Data System (ADS)
Kuang, Ling; Zhang, Lichen
2017-08-01
Task scheduling, a key part of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Building on the value-density algorithm for real-time task scheduling and the characteristics of distributed systems, this paper presents a new task scheduling algorithm, Least Level Value Density First (LLVDF), developed through further study of cloud technology and real-time systems. The algorithm not only introduces time and value attributes for tasks, it also describes the weighting relationships between these attributes mathematically. This feature allows it to distinguish between different tasks more dynamically and more reasonably. When the scheme is used for priority calculation in dynamic task scheduling on a cloud platform, this advantage lets it schedule and distinguish large numbers of tasks of many kinds more efficiently. The paper designs experiments, based on distributed-server simulation models using the M/M/C queueing model with negative arrivals, to compare the algorithm against traditional algorithms and to demonstrate its characteristics and advantages.
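One plausible reading of the value-density idea behind LLVDF is a priority of value per unit execution time, with deadline-infeasible tasks dropped. The sketch below is under that assumption; the paper's actual weighting of time and value attributes is more elaborate.

```python
import heapq

def value_density_schedule(tasks, now=0.0):
    """Order tasks by descending value density (value per unit execution time),
    skipping any task that would miss its deadline by the time it could run.

    Each task is (name, value, exec_time, deadline); returns the execution
    order of the tasks that were actually scheduled.
    """
    # Negate the density so the min-heap pops the highest density first.
    pq = [(-value / exec_time, name, exec_time, deadline)
          for name, value, exec_time, deadline in tasks]
    heapq.heapify(pq)
    t, order = now, []
    while pq:
        neg_vd, name, exec_time, deadline = heapq.heappop(pq)
        if t + exec_time > deadline:      # would finish after its deadline: drop
            continue
        t += exec_time
        order.append(name)
    return order
```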
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with extremely high polygon counts and vast numbers of textures, at interactive frame rates is still very challenging, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used to process large amounts of geospatial data and to provide 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
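Tile-pyramid addressing is the usual basis for caching and serving map data to many web clients. The standard Web-Mercator z/x/y scheme can be sketched as follows; this is generic background on such caches, not OpenWebGlobe's actual internals.

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Web-Mercator tile address (z, x, y) for a coordinate at a given zoom.

    Each zoom level doubles the tile grid in both axes, so a cache keyed on
    (z, x, y) can serve any view from precomputed tiles.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return zoom, x, y
```

A processing pipeline fills this pyramid once in the cloud; clients then fetch only the (z, x, y) keys their current view requires.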
Efficient visibility encoding for dynamic illumination in direct volume rendering.
Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas
2012-03-01
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
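The key identity such SH methods rely on is that the integral of a product of two band-limited spherical functions (lighting × visibility) reduces to a dot product of their SH coefficient vectors. Below is a minimal sketch with only the first two SH bands and Monte Carlo projection; it illustrates the identity, not the paper's multiresolution-grid implementation.

```python
import math, random

def sh_basis(x, y, z):
    """First two bands (4 coefficients) of the real spherical harmonics."""
    return [0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x]

def project(fn, n=20000, seed=1):
    """Monte Carlo SH projection of a scalar function over the unit sphere."""
    rng = random.Random(seed)
    coeffs = [0.0] * 4
    for _ in range(n):
        # Uniform direction sampling: z uniform in [-1, 1], phi uniform.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        x, y = s * math.cos(phi), s * math.sin(phi)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += fn(x, y, z) * b
    return [c * 4.0 * math.pi / n for c in coeffs]

def shade(light_sh, vis_sh):
    """Integral of light * visibility over all directions, as an SH dot product."""
    return sum(l * v for l, v in zip(light_sh, vis_sh))
```

With the light sources pre-projected to SH, shading each sample point costs only this short dot product, which is what makes the dynamic-lighting evaluation real-time.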
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
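The voxel-based flag map for removing redundant points can be sketched as follows: incoming points are quantized to voxel indices, and only the first point to claim a voxel is registered. The voxel size and the use of a plain dictionary are illustrative choices, not the paper's data structure.

```python
# Quantize incoming 3D points into a voxel "flag map": the first point to
# claim a voxel is kept, later points falling in the same voxel are dropped
# as redundant.
def register_points(points, voxel_size=0.1):
    flag_map = {}          # voxel index -> representative point
    kept = []
    for p in points:
        key = (int(p[0] // voxel_size),
               int(p[1] // voxel_size),
               int(p[2] // voxel_size))
        if key not in flag_map:
            flag_map[key] = p
            kept.append(p)
    return kept, flag_map

scan = [(0.01, 0.02, 0.0), (0.03, 0.04, 0.05),  # same voxel -> one kept
        (0.51, 0.02, 0.0)]                       # different voxel
kept, flags = register_points(scan)
```

Because the flag map acts as a comparative table, incremental registration of a new scan only touches the voxels the scan intersects, which is what makes the real-time accumulation feasible.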
NASA Astrophysics Data System (ADS)
Sudhakar, P.; Sheela, K. Anitha; Ramakrishna Rao, D.; Malladi, Satyanarayana
2016-05-01
In recent years, weather modification activities have been pursued in many countries through cloud seeding techniques to facilitate increased and timely precipitation from clouds. To induce and accelerate the precipitation process, clouds are artificially seeded with suitable materials such as silver iodide, sodium chloride or other hygroscopic materials. The success of cloud seeding can be predicted with confidence if the precipitation process involving aerosol, the ice-water balance, water vapor content and the size of the seeding material in relation to aerosol in the cloud is monitored in real time and optimized. A project on the enhancement of rainfall through cloud seeding is being implemented jointly with Kerala State Electricity Board Ltd., Trivandrum, Kerala, India in the catchment areas of the reservoir of one of its hydroelectric projects. A dual polarization lidar is used to monitor and measure the microphysical properties, extinction coefficient, size distribution and related parameters of the clouds; the lidar makes use of Mie, Rayleigh and Raman scattering techniques for the various measurements proposed. These measurements are carried out in real time to obtain the various parameters during cloud seeding operations. In this paper we present the details of the multi-wavelength dual polarization lidar and the methodology for monitoring the various cloud parameters involved in the precipitation process. The necessary retrieval algorithms for deriving the microphysical properties of clouds, aerosol characteristics and water vapor profiles are incorporated as a software package running under LabVIEW for online and offline analysis. Details of the simulation studies and the theoretical model developed for the optimization of various parameters are also discussed.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.
2016-12-01
The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are used in NRT to improve forecast models, issue aircraft icing warnings, and support aircraft field campaigns. Next-generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution: the volume of data is expected to increase roughly tenfold. This increase in data volume will require additional IT resources to keep up with the processing demands of NRT requirements, and such resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Cloud and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.
A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight
NASA Astrophysics Data System (ADS)
Yao, C.; Peng, G.; Song, Y.; Duan, M.
2017-09-01
The increase in urban 3D model precision and data volume places higher demands on real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve both rendering quality and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduling rendering method for urban 3D models: models are assigned different rendering weights according to certain rules, and quadtree construction and scheduled rendering are then performed according to those weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. Using the proposed algorithms, a 3D urban planning and management software package was developed; practice has shown the algorithms to be efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
NASA Astrophysics Data System (ADS)
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, particularly in relation to activity on cloud computing platforms. The major challenge today is how to monitor and control these massive amounts of data and perform analysis in real time at scale; traditional methods and model systems are unable to cope with such quantities of data in real time. Here we present a new methodology for constructing a model that optimizes the performance of real-time monitoring of big datasets, combining machine learning algorithms with Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair. As a case study, we use the failure of Virtual Machines (VMs) to start up. The methodology ensures that the most sensible action is carried out during fine-grained monitoring and yields the most effective and cost-saving fault repair through three control steps: (I) data collection; (II) an analysis engine; and (III) a decision engine. We found that this methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
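The three control steps can be sketched as a plain-Python pipeline standing in for the Spark Streaming job: collection normalizes a telemetry sample, the analysis engine classifies the VM state, and the decision engine maps the diagnosis to a repair action. The metric names, thresholds, and actions are illustrative assumptions, not the paper's actual trained model.

```python
def collect(sample):
    """(I) Normalize a raw VM telemetry sample into features."""
    return {"cpu": sample.get("cpu", 0.0),
            "mem": sample.get("mem", 0.0),
            "boot_ok": sample.get("boot_ok", True)}

def analyze(features):
    """(II) Classify the VM state from the collected features."""
    if not features["boot_ok"]:
        return "startup_failure"
    if features["cpu"] > 0.95 or features["mem"] > 0.95:
        return "resource_exhaustion"
    return "healthy"

def decide(diagnosis):
    """(III) Map a diagnosis to the cheapest sensible repair action."""
    return {"startup_failure": "reprovision_vm",
            "resource_exhaustion": "migrate_vm",
            "healthy": "no_action"}[diagnosis]

def monitor(stream):
    """Run the full (I) -> (II) -> (III) chain over a micro-batch."""
    return [decide(analyze(collect(s))) for s in stream]

actions = monitor([{"cpu": 0.2, "mem": 0.3},
                   {"cpu": 0.99, "mem": 0.4},
                   {"boot_ok": False}])
```

In the real system each stage would be a transformation over a Spark Streaming micro-batch rather than a list comprehension, but the control flow is the same.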
CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds
NASA Astrophysics Data System (ADS)
Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol
The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focused on performance, and a geometric steering model for the best realism. Mixing these approaches makes it possible to simulate thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.
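The LOD switch at the heart of such a scheme can be sketched as a distance test that picks the cheap fuzzy model for far-away agents and the exact geometric model near the camera. The threshold and both steering stubs are illustrative assumptions, not the paper's actual models.

```python
import math

def fuzzy_steer(agent, neighbor):
    """Cheap CA-LOD: coarse left/right decision from the neighbor's side."""
    return "veer_left" if neighbor[0] >= agent[0] else "veer_right"

def geometric_steer(agent, neighbor):
    """Expensive CA-LOD: exact avoidance heading from relative position."""
    dx, dy = neighbor[0] - agent[0], neighbor[1] - agent[1]
    return math.atan2(dy, dx) + math.pi / 2.0   # steer perpendicular to threat

def select_ca_lod(agent, camera, near_radius=20.0):
    """Pick the steering model by the agent's distance to the camera."""
    dist = math.hypot(agent[0] - camera[0], agent[1] - camera[1])
    return geometric_steer if dist < near_radius else fuzzy_steer

camera = (0.0, 0.0)
near_model = select_ca_lod((5.0, 5.0), camera)     # realism up close
far_model = select_ca_lod((100.0, 100.0), camera)  # speed in the distance
```

The scalability claim follows from most agents in a large crowd falling outside the near radius, so the expensive model runs only on the visible foreground.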
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses experimental results for our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce a real-time rendering visualization of a large amount of data. We developed a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room, so we use a particle paradigm to interpolate the sensor data: particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, as presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function; to solve this function efficiently, we use a client-server paradigm in which the server computes the data and clients display it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated in order to determine the best solution for the proposed task. The benchmark uses the computational cost of our algorithm, which is based on locating particles relative to sensors and on updating particle values, and was run on a personal computer using single-core CPU, multi-core, GPU and hybrid GPU/CPU programming. GPU programming is a growing method in the research field that allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) platform to refine the multi-core method; HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
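The Voronoi-cell partition described above amounts to each particle taking the value of its nearest sensor, since the set of points closest to a given sensor is exactly that sensor's Voronoi cell. A brute-force nearest search stands in for a real spatial structure here, and the sensor layout and temperatures are illustrative.

```python
# Assign each particle the temperature of its nearest sensor, i.e. partition
# the particle set by the sensors' Voronoi cells.
def nearest_sensor(particle, sensors):
    """sensors: list of ((x, y, z), temperature)."""
    return min(sensors, key=lambda s: sum((p - c) ** 2
                                          for p, c in zip(particle, s[0])))

def update_particles(particles, sensors):
    """Return the interpolated temperature carried by each particle."""
    return [nearest_sensor(p, sensors)[1] for p in particles]

sensors = [((0.0, 0.0, 0.0), 21.0), ((10.0, 0.0, 0.0), 27.5)]
temps = update_particles([(1.0, 1.0, 0.0), (9.0, 0.5, 0.0)], sensors)
```

The cost of this update over all particles is what the paper's benchmark measures across CPU, multi-core, GPU and hybrid implementations.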
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object: even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
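The point-cloud method the abstract contrasts with the mesh and depth-layer approaches can be sketched directly: every object point contributes a spherical wavefront exp(ikr)/r to each hologram pixel. The grid size, pixel pitch, and wavelength below are illustrative assumptions, not the study's configuration.

```python
import cmath
import math

WAVELENGTH = 633e-9                    # HeNe red, in metres (assumed)
K = 2.0 * math.pi / WAVELENGTH

def point_cloud_hologram(points, n=16, pitch=10e-6):
    """points: (x, y, z, amplitude) tuples. Returns an n*n complex field."""
    field = [[0j] * n for _ in range(n)]
    for px, py, pz, amp in points:
        for j in range(n):
            for i in range(n):
                x = (i - n / 2) * pitch
                y = (j - n / 2) * pitch
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                field[j][i] += amp * cmath.exp(1j * K * r) / r
    return field

field = point_cloud_hologram([(0.0, 0.0, 0.05, 1.0),
                              (2e-5, 0.0, 0.06, 0.5)])
```

The cost is O(points x pixels), which is why the depth-layer variant, needing only one FFT-based propagation per layer, wins for real-time use while this direct sum wins on resolution.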
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Hlavka, Dennis; Hart, Bill; Welton, E. Judd; Spinhirne, James
2000-01-01
The Geoscience Laser Altimeter System (GLAS) will be placed into orbit in 2001 aboard the Ice, Cloud and Land Elevation Satellite (ICESat). From its nearly polar orbit (94 degree inclination), GLAS will provide continuous global measurements of the vertical distribution of clouds and aerosols while simultaneously providing high accuracy topographic profiling of surface features. During the mission, which is slated to last 3 to 5 years, the data collected by GLAS will be processed in near-real time to produce level 1 and 2 data products at the NASA GLAS Science Computing Facility (SCF) at Goddard Space Flight Center in Greenbelt, Maryland. The atmospheric products include cloud and aerosol layer heights, planetary boundary layer depth, polar stratospheric clouds, and thin cloud and aerosol optical depth. These products will be made available to the science community within days of their creation. The processing algorithms must be robust, adaptive, efficient, and clever enough to run autonomously for the widely varying atmospheric conditions that will be encountered. This paper presents an overview of the GLAS atmospheric data products and briefly discusses the design of the processing algorithms.
Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
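The rich-client synchronization idea reduces to serializing a small renderer-state object as JSON and having every peer apply incoming snapshots. The sketch below, in Python rather than the paper's JavaScript, uses a hypothetical `Renderer` stub with made-up state fields; it is not the XTK API.

```python
import json

class Renderer:
    """Stand-in for an in-browser volume renderer holding shared state."""
    def __init__(self):
        self.state = {"camera": [0.0, 0.0, 5.0], "slice": 0, "opacity": 1.0}

    def snapshot(self):
        """Serialize the state -- this small JSON payload is all that is shared."""
        return json.dumps(self.state)

    def apply(self, payload):
        """What each remote peer does when the collaborative model changes."""
        self.state.update(json.loads(payload))

local, remote = Renderer(), Renderer()
local.state["slice"] = 42          # one physician scrolls to a new slice
remote.apply(local.snapshot())     # every peer converges on the same view
```

Because only this compact state travels, not rendered pixels, each client renders locally from its own full copy of the dataset, which is the key difference from the server-side push model.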
Augmented Reality Comes to Physics
ERIC Educational Resources Information Center
Buesing, Mark; Cook, Michael
2013-01-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…
3D Model Visualization Enhancements in Real-Time Game Engines
NASA Astrophysics Data System (ADS)
Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.
2013-02-01
This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have recently been implemented in many entertainment applications and are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content.
With the release of Unity 4.0, new rendering features have been added, including DirectX 11 support. Real-time tessellation is a technique that can be applied by using such technology. Since the displacement and the resulting geometry are calculated by the GPU, the time-based execution cost of this technique is very low.
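The core displacement operation, applied per tessellated vertex on the GPU in practice, is simply moving each vertex along its normal by the sampled map value. The sketch below uses illustrative data layout and scale, not Unity's shader interface.

```python
# Displace each vertex along its unit normal by scale * sampled height,
# the operation the displacement map performs on the low-poly mesh.
def displace(vertices, normals, heights, scale=0.1):
    return [tuple(v[i] + scale * h * n[i] for i in range(3))
            for v, n, h in zip(vertices, normals, heights)]

flat_quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
up = [(0.0, 0.0, 1.0)] * 4                 # all normals point up
bumps = [0.0, 1.0, 0.5, 0.0]               # values sampled from the map
relief = displace(flat_quad, up, bumps)
```

Varying the tessellation density (and hence how many vertices this runs over) with viewing distance is what produces the continuous level-of-detail effect described above.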
Jayapandian, Catherine P; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D; Zhang, Guo-Qiang; Sahoo, Satya S
2013-01-01
Epilepsy is the most common serious neurological disorder affecting 50-60 million persons worldwide. Multi-modal electrophysiological data, such as electroencephalography (EEG) and electrocardiography (EKG), are central to effective patient care and clinical research in epilepsy. Electrophysiological data is an example of clinical "big data" consisting of more than 100 multi-channel signals, with recordings from each patient generating 5-10GB of data. Current approaches to store and analyze signal data using standalone tools, such as Nihon Kohden neurology software, are inadequate to meet the growing volume of data and the need for supporting multi-center collaborative studies with real time and interactive access. We introduce the Cloudwave platform in this paper that features a Web-based intuitive signal analysis interface integrated with a Hadoop-based data processing module implemented on clinical data stored in a "private cloud". Cloudwave has been developed as part of the National Institute of Neurological Disorders and Stroke (NINDS) funded multi-center Prevention and Risk Identification of SUDEP Mortality (PRISM) project. The Cloudwave visualization interface provides real-time rendering of multi-modal signals with "montages" for EEG feature characterization over 2TB of patient data generated at the Case University Hospital Epilepsy Monitoring Unit. Results from performance evaluation of the Cloudwave Hadoop data processing module demonstrate one order of magnitude improvement in performance over 77GB of patient data. (Cloudwave project: http://prism.case.edu/prism/index.php/Cloudwave).
Temporally rendered automatic cloud extraction (TRACE) system
NASA Astrophysics Data System (ADS)
Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.
1999-10-01
Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and the 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.
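Dynamic background subtraction, TRACE's primary discrimination step, can be sketched as a background estimate that tracks the scene with an exponential moving average, with pixels far from the estimate flagged as cloud. The learning rate and threshold are illustrative parameters, not TRACE's tuned values.

```python
def update_background(background, frame, alpha=0.05):
    """Exponentially blend the new frame into the background estimate."""
    return [[(1.0 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def cloud_mask(background, frame, threshold=20.0):
    """Flag pixels that deviate strongly from the background as cloud."""
    return [[abs(f - b) > threshold for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

background = [[100.0, 100.0], [100.0, 100.0]]   # learned clear scene
frame = [[100.0, 160.0], [100.0, 150.0]]        # smoke enters two pixels
mask = cloud_mask(background, frame)
background = update_background(background, frame)
```

Because the background keeps adapting, slow illumination drift is absorbed while a fast-moving obscurant cloud stays above threshold; the 2D extent then comes from the connected region of flagged pixels.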
Prototype methodology for obtaining cloud seeding guidance from HRRR model data
NASA Astrophysics Data System (ADS)
Dawson, N.; Blestrud, D.; Kunkel, M. L.; Waller, B.; Ceratto, J.
2017-12-01
Weather model data, along with real time observations, are critical to determine whether atmospheric conditions are prime for super-cooled liquid water during cloud seeding operations. Cloud seeding groups can either use operational forecast models, or run their own model on a computer cluster. A custom weather model provides the most flexibility, but is also expensive. For programs with smaller budgets, openly-available operational forecasting models are the de facto method for obtaining forecast data. The new High-Resolution Rapid Refresh (HRRR) model (3 x 3 km grid size), developed by the Earth System Research Laboratory (ESRL), provides hourly model runs with 18 forecast hours per run. While the model cannot be fine-tuned for a specific area or edited to provide cloud-seeding-specific output, model output is openly available on a near-real-time basis. This presentation focuses on a prototype methodology for using HRRR model data to create maps which aid in near-real-time cloud seeding decision making. The R programming language is utilized to run a script on a Windows® desktop/laptop computer either on a schedule (such as every half hour) or manually. The latest HRRR model run is downloaded from NOAA's Operational Model Archive and Distribution System (NOMADS). A GRIB-filter service, provided by NOMADS, is used to obtain surface and mandatory pressure level data for a subset domain which greatly cuts down on the amount of data transfer. Then, a set of criteria, identified by the Idaho Power Atmospheric Science Group, is used to create guidance maps. These criteria include atmospheric stability (lapse rates), dew point depression, air temperature, and wet bulb temperature. The maps highlight potential areas where super-cooled liquid water may exist, reasons as to why cloud seeding should not be attempted, and wind speed at flight level.
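Turning such criteria into a guidance map amounts to evaluating a per-grid-cell predicate over the HRRR-derived fields. The threshold values below are illustrative assumptions; the actual criteria tuned by the Idaho Power Atmospheric Science Group are not given in the abstract.

```python
def seedable(cell):
    """cell: dict of HRRR-derived fields at one grid point (units in names)."""
    return (-25.0 <= cell["temp_c"] <= -5.0                    # SLW range
            and cell["dewpoint_depression_c"] <= 2.0           # near saturation
            and cell["lapse_rate_c_per_km"] >= 6.5             # unstable enough
            and cell["flight_level_wind_ms"] <= 30.0)          # safe to fly

grid = [{"temp_c": -12.0, "dewpoint_depression_c": 1.0,
         "lapse_rate_c_per_km": 7.0, "flight_level_wind_ms": 15.0},
        {"temp_c": 3.0, "dewpoint_depression_c": 0.5,
         "lapse_rate_c_per_km": 7.0, "flight_level_wind_ms": 15.0}]
mask = [seedable(c) for c in grid]
```

Plotting `mask` over the subset domain for each of the 18 forecast hours yields exactly the kind of near-real-time decision map the prototype produces.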
Cloud-based NEXRAD Data Processing and Analysis for Hydrologic Applications
NASA Astrophysics Data System (ADS)
Seo, B. C.; Demir, I.; Keem, M.; Goska, R.; Weber, J.; Krajewski, W. F.
2016-12-01
The real-time and full historical archive of NEXRAD Level II data, covering the entire United States from 1991 to present, recently became available on Amazon cloud S3. This provides a new opportunity to rebuild the Hydro-NEXRAD software system that enabled users to access vast amounts of NEXRAD radar data in support of a wide range of research. The system processes basic radar data (Level II) and delivers radar-rainfall products based on the user's custom selection of features such as space and time domain, river basin, rainfall product space and time resolution, and rainfall estimation algorithms. The cloud-based new system can eliminate prior challenges faced by Hydro-NEXRAD data acquisition and processing: (1) temporal and spatial limitation arising from the limited data storage; (2) archive (past) data ingestion and format conversion; and (3) separate data processing flow for the past and real-time Level II data. To enhance massive data processing and computational efficiency, the new system is implemented and tested for the Iowa domain. This pilot study begins by ingesting rainfall metadata and implementing Hydro-NEXRAD capabilities on the cloud using the new polarimetric features, as well as the existing algorithm modules and scripts. The authors address the reliability and feasibility of cloud computation and processing, followed by an assessment of response times from an interactive web-based system.
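Accessing the archive described above starts with constructing the object key for a radar volume. The sketch below follows the commonly documented layout of the public `noaa-nexrad-level2` bucket (year/month/day/site/site+timestamp); the exact file suffix varies by era, so treat the `_V06` default as an assumption to verify before use.

```python
from datetime import datetime

BUCKET = "noaa-nexrad-level2"   # public Amazon S3 bucket for Level II data

def level2_key(site, scan_time, suffix="_V06"):
    """Build the S3 object key for one archived Level II volume scan."""
    return (f"{scan_time:%Y/%m/%d}/{site}/"
            f"{site}{scan_time:%Y%m%d_%H%M%S}{suffix}")

# Example: the Davenport, Iowa radar (KDVN) at a chosen scan time.
key = level2_key("KDVN", datetime(2016, 6, 1, 12, 30, 15))
```

With keys built this way, the rebuilt system can fetch past and real-time volumes through one code path, which removes the separate archive-ingestion flow the old Hydro-NEXRAD needed.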
NASA Astrophysics Data System (ADS)
Seftor, C. J.; Krotkov, N. A.; McPeters, R. D.; Li, J. Y.; Durbin, P. B.
2015-12-01
Near real time (NRT) SO2 and aerosol index (AI) imagery from Aura's Ozone Monitoring Instrument (OMI) has proven invaluable in mitigating the risk posed to air traffic by SO2 and ash clouds from volcanic eruptions. The OMI products, generated as part of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) NRT system and available through LANCE and both NOAA's NESDIS and ESA's Support to Aviation Control Service (SACS) portals, are used to monitor the current location of volcanic clouds and to provide input into Volcanic Ash (VA) advisory forecasts. NRT products have recently been developed using data from the Ozone Mapping and Profiler Suite onboard the Suomi NPP platform; they are currently being made available through the SACS portal and will shortly be incorporated into the LANCE NRT system. We will show examples of the use of OMPS NRT SO2 and AI imagery to monitor recent volcanic eruption events. We will also demonstrate the usefulness of OMPS AI imagery to detect and track dust storms and smoke from fires, and how this information can be used to forecast their impact on air quality in areas far removed from their source. Finally, we will show SO2 and AI imagery generated from our OMPS Direct Broadcast data to highlight the capability of our real time system.
Design and implementation of a 3D ocean virtual reality and visualization engine
NASA Astrophysics Data System (ADS)
Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing
2012-12-01
In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module, and an interaction management module. VV-Ocean has three core functions: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform that can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field are considered in this simulation. On this platform the oil spilling process can be abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time, interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, and serving marine tourism. Finally, further technological improvements of VV-Ocean are discussed.
A numerical cloud model for the support of laboratory experimentation
NASA Technical Reports Server (NTRS)
Hagen, D. E.
1979-01-01
A numerical cloud model is presented which can describe the evolution of a cloud starting from moist aerosol-laden air through the diffusional growth regime. The model is designed for the direct support of cloud chamber laboratory experimentation, i.e., experiment preparation, real-time control and data analysis. In the model the thermodynamics is uncoupled from the droplet growth processes. Analytic solutions for the cloud droplet growth equations are developed which can be applied in most laboratory situations. The model is applied to a variety of representative experiments.
Learning and Design with Online Real-Time Collaboration
ERIC Educational Resources Information Center
Stevenson, Michael; Hedberg, John G.
2013-01-01
This paper explores the use of emerging Cloud technologies that support real-time online collaboration. It considers the extent to which these technologies can be leveraged to develop complex skillsets supporting interaction between multiple learners in online spaces. In a pilot study that closely examines how groups of learners translate two…
Can Real-Time Data Also Be Climate Quality?
NASA Astrophysics Data System (ADS)
Brewer, M.; Wentz, F. J.
2015-12-01
GMI, AMSR-2 and WindSat herald a new era of highly accurate and timely microwave data products. Traditionally, there has been a large divide between real-time and re-analysis data products. What if these completely separate processing systems could be merged? Through advanced modeling and physically based algorithms, Remote Sensing Systems (RSS) has narrowed the gap between real-time and research quality. Satellite microwave ocean products have proven useful for a wide array of timely Earth science applications. Through-cloud SST capabilities have enormously benefited tropical cyclone forecasting and day-to-day fisheries management, to name a few. Oceanic wind vectors enhance the operational safety of shipping and recreational boating. Atmospheric rivers are of import to many human endeavors, as are cloud cover and knowledge of precipitation events. Some activities benefit from both climate and real-time operational data used in conjunction. RSS has been consistently improving microwave Earth Science Data Records (ESDRs) for several decades, while making near real-time data publicly available for semi-operational use. These data streams have often been produced in two stages: near real-time files, followed by research-quality final files. Over the years, we have seen this time delay shrink from months or weeks to mere hours, and we have seen the quality of near real-time data improve to the point where the distinction starts to blur. We continue to work towards better and faster RFI filtering, adaptive algorithms, and improved real-time validation statistics for earlier detection of problems. Can it be possible to produce climate-quality data in real time, and what would the advantages be? We will try to answer these questions…
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
NASA Astrophysics Data System (ADS)
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. The absolute directional information provided by the Earth's magnetic field is therefore of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust, and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load balancing, real-time monitoring, and instance cloning.
We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API interface to our Enhanced Magnetic Model (EMM).
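To illustrate why the core field is suitable for offline computation with few parameters, here is a first-order centered-dipole approximation of the surface field intensity. The equatorial field value of roughly 31 microtesla is an illustrative round number, not a coefficient from NOAA's models.

```python
import math

# Centered-dipole approximation of the core field's total intensity at the
# Earth's surface.  B0 is the equatorial dipole field; ~31,200 nT is an
# illustrative approximation, not an actual model coefficient.
B0_NT = 31_200.0

def dipole_intensity_nt(geomagnetic_lat_deg: float) -> float:
    """Total field intensity of a centered dipole at a geomagnetic latitude."""
    s = math.sin(math.radians(geomagnetic_lat_deg))
    return B0_NT * math.sqrt(1.0 + 3.0 * s * s)

print(round(dipole_intensity_nt(0.0)))   # 31200 at the equator
print(round(dipole_intensity_nt(90.0)))  # 62400 at the pole (twice equatorial)
```

The crustal, ionospheric, and magnetospheric contributions discussed above have no such compact closed form, which is what motivates serving them from cloud APIs instead.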
DC-8 Scanning Lidar Characterization of Aircraft Contrails and Cirrus Clouds
NASA Technical Reports Server (NTRS)
Uthe, Edward E.; Nielsen, Norman B.; Oseberg, Terje E.
1998-01-01
An angular-scanning large-aperture (36 cm) backscatter lidar was developed and deployed on the NASA DC-8 research aircraft as part of the SUCCESS (Subsonic Aircraft: Contrail and Cloud Effects Special Study) program. The lidar viewing direction could be scanned continuously during aircraft flight from vertically upward to forward to vertically downward, or the viewing could be at fixed angles. Real-time pictorial displays generated from the lidar signatures were broadcast on the DC-8 video network and used to locate clouds and contrails above, ahead of, and below the DC-8 to depict their spatial structure and to help select DC-8 altitudes for achieving optimum sampling by onboard in situ sensors. Several lidar receiver systems and real-time data displays were evaluated to help extend in situ data into vertical dimensions and to help establish possible lidar configurations and applications on future missions. Digital lidar signatures were recorded on 8 mm Exabyte tape and generated real-time displays were recorded on 8mm video tape. The digital records were transcribed in a common format to compact disks to facilitate data analysis and delivery to SUCCESS participants. Data selected from the real-time display video recordings were processed for publication-quality displays incorporating several standard lidar data corrections. Data examples are presented that illustrate: (1) correlation with particulate, gas, and radiometric measurements made by onboard sensors, (2) discrimination and identification between contrails observed by onboard sensors, (3) high-altitude (13 km) scattering layer that exhibits greatly enhanced vertical backscatter relative to off-vertical backscatter, and (4) mapping of vertical distributions of individual precipitating ice crystals and their capture by cloud layers. An angular scan plotting program was developed that accounts for DC-8 pitch and velocity.
Development of lidar sensor for cloud-based measurements during convective conditions
NASA Astrophysics Data System (ADS)
Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.
2016-05-01
Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and vigorous in the summer months, when severe ground heating accompanied by strong winds is experienced. The tropics are considered source regions for strong convection, and thunderstorm clouds commonly form during this period. The location of the cloud base and its associated dynamics are important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are better suited to locating clouds in the atmosphere than instruments utilizing the radio-frequency spectrum. Thunderstorm clouds are composed of hydrometeors and strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The technique employs slant-path operation and provides high-resolution measurements of the cloud-base location in real time. This laser-based remote sensing technique samples the atmosphere every second at 7.5 m range resolution, and the high-resolution data permit assessment of updrafts at the cloud base. The lidar also provides the convective boundary layer height in real time, using aerosols as tracers of atmospheric dynamics. The sensor is planned to be upgraded with a scanning facility to capture cloud dynamics in the spatial dimension. In this presentation, we describe the lidar sensor technology and its use for high-resolution cloud-base measurements during convective conditions over the lidar site at Gadanki.
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon the viewer's position with respect to the screen. The left eye receives a signal different from the right eye; provided the signals have been properly processed, this gives the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, e.g., its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations, which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative positions of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
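The relation between perceived depth and screen disparity can be sketched with a similar-triangles argument: two eyes a fixed distance apart view a screen, and a point perceived at a given depth projects to slightly different screen positions for each eye. The function name, sign convention, and numbers below are ours, not taken from the paper's framework.

```python
def screen_disparity(eye_sep_m: float, view_dist_m: float, depth_m: float) -> float:
    """Screen disparity for a point perceived at depth_m from the viewer.

    Similar triangles for two eyes eye_sep_m apart viewing a screen
    view_dist_m away.  Zero disparity places the point in the screen plane;
    positive disparity pushes it behind the screen.  (Illustrative model,
    not the paper's exact parallactic transformation.)
    """
    return eye_sep_m * (depth_m - view_dist_m) / depth_m

print(screen_disparity(0.065, 2.0, 2.0))            # 0.0: in the screen plane
print(round(screen_disparity(0.065, 2.0, 4.0), 4))  # 0.0325: behind the screen
```

Note the asymmetry this produces: disparity approaches the full eye separation as depth goes to infinity, but grows without bound (negatively) as a point approaches the viewer, which is one reason rendered depth ranges must be limited on real displays.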
Use of cloud computing technology in natural hazard assessment and emergency management
NASA Astrophysics Data System (ADS)
Webley, P. W.; Dehn, J.
2015-12-01
During a natural hazard event, the most up-to-date data need to be in the hands of those on the front line. Decision support tools can provide access to pre-made outputs for quickly assessing the hazard and potential risk. However, with the ever-growing availability of new satellite data, as well as ground and airborne data generated in real time, there is a need to analyze these large volumes of data in an easy-to-access and effective environment. Cloud computing, where the analysis and visualization system can grow with the needs of the user, provides this real-time analysis capability. Think of a central command center uploading data to the cloud computing system while researchers in the field connect through a web-based tool to view the newly acquired data. New data can be added by any user and viewed instantly by anyone else in the organization through the cloud computing interface. This makes an ideal tool for collaborative data analysis, hazard assessment, and decision making. We present the rationale for developing a cloud computing system and illustrate how such a tool can be developed for use in real-time environments. Users would have access to an interactive online image-analysis tool without needing specific remote sensing software on their local systems, thereby increasing their understanding of the ongoing hazard and helping mitigate its impact on the surrounding region.
A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.
2012-04-01
Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source, state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We replaced demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done quasi in real time, controlled ubiquitously via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time, such as manual picking in prestack volumes or velocity spectra.
Due to its high level of automation, CRS stacking can benefit greatly from the hardware parallelism provided by the cloud deployment. The resulting outputs (post-stack section, coherence, and NMO-velocity panels) are used to generate a smooth migration-velocity model. Residual static corrections are calculated as a by-product of the stack and can be applied iteratively. As a final step, a time-migrated subsurface image is obtained by a parallelized Kirchhoff time migration scheme. Processing can be done step by step or using a graphical workflow editor that can launch a series of pipelined tasks. The status of the submitted jobs is monitored by a dedicated service. All results are stored in project directories, where they can be downloaded or viewed directly in the browser. Currently, the portal has access to three research clusters with a total of 70 nodes of 4 cores each. They are shared with four other cloud-computing applications bundled within the GRIDA3 project. To demonstrate the functionality of our "seismic cloud lab", we will present results obtained for three different types of data, all taken from hydrogeophysical studies: (1) a seismic reflection data set, made of compressional waves from explosive sources, recorded in Muravera, Sardinia; (2) a shear-wave data set from Sardinia; (3) a multi-offset Ground-Penetrating-Radar data set from Larreule, France. The presented work was funded by the government of the Autonomous Region of Sardinia and by the Italian Ministry of Research and Education.
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2016-01-01
In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
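The core of the data-driven method above is an interpolation model mapping interaction inputs to recorded forces. A minimal Gaussian radial-basis interpolation can be sketched as follows; the two-dimensional input (a proxy coordinate plus a displacement) and the kernel width are illustrative assumptions, since the paper's actual input variable set is richer.

```python
import numpy as np

def fit_rbf(X: np.ndarray, y: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Solve for Gaussian-RBF weights that reproduce the training forces."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-eps * d2)                                # Gaussian kernel matrix
    return np.linalg.solve(K, y)

def predict_rbf(X_train: np.ndarray, w: np.ndarray, X_query: np.ndarray,
                eps: float = 1.0) -> np.ndarray:
    """Evaluate the interpolant at new interaction inputs."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w

# Toy data: force magnitude recorded at four (proxy position, displacement) inputs.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 2.0, 0.5, 3.0])
w = fit_rbf(X, y)
print(np.allclose(predict_rbf(X, w, X), y))  # True: exact at training points
```

At rendering time, each haptic frame would evaluate `predict_rbf` on the current input (proxy position from the sliding-yield-surface simulation plus the measured displacement) to obtain the output force.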
Lightning Tracking Tool for Assessment of Total Cloud Lightning within AWIPS II
NASA Technical Reports Server (NTRS)
Burks, Jason E.; Stano, Geoffrey T.; Sperow, Ken
2014-01-01
Total lightning (intra-cloud and cloud-to-ground) has been widely researched and shown to be a valuable tool for aiding real-time warning forecasters in assessing the severe weather potential of convective storms. The trend in total lightning has been related to the strength of a storm's updraft, so a rapid increase in total lightning signifies the strengthening of the parent thunderstorm. Severe weather potential must be assessed in a time-limited environment, which constrains the use of total lightning. A tool has been developed at NASA's Short-term Prediction Research and Transition (SPoRT) Center to assist in quickly analyzing the total lightning signature of multiple storms. The development of this tool comes as a direct result of forecaster feedback from numerous assessments requesting a real-time display of the time series of total lightning. The tool also takes advantage of the new architecture available within the AWIPS II environment. SPoRT's lightning tracking tool has been tested in the Hazardous Weather Testbed (HWT) Spring Program, and significant changes have been made based on the feedback. In addition to the updates in response to the HWT assessment, the lightning tracking tool may be extended to incorporate other requested displays, such as the intra-cloud to cloud-to-ground ratio, as well as the lightning jump algorithm.
Optical holography applications for the zero-g Atmospheric Cloud Physics Laboratory
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1974-01-01
A complete description of holography is provided, both for the time-dependent case of moving-scene holography and for the time-independent case of stationary holography. Further, a specific holographic arrangement is proposed for detecting particle size distribution in an atmospheric simulation cloud chamber. Particle growth rates are investigated in this chamber; the holographic system must therefore capture continuous particle motion in real time. Such a system is described.
A Review on Real-Time 3D Ultrasound Imaging Technology
Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted much more attention in medical researches because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area and hence is necessary in intraoperative ultrasound examinations. Plenty of publications have been declared to complete the real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms remains missing, resulting in the lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
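The idea of continuous (rather than discrete) collision detection against a signed distance field can be sketched with conservative stepping plus bisection along the point's trajectory: march along the segment, and when the distance value changes sign, bisect to the earliest crossing of the zero level set. This is a simplified stand-in for the paper's method, with all names and parameters our own.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def first_hit(sdf, a, b, steps=256, tol=1e-6):
    """Earliest t in [0, 1] where segment a->b crosses sdf == 0, else None."""
    def at(t):
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    lo = 0.0
    if sdf(at(0.0)) <= 0.0:
        return 0.0                       # already in contact at the start
    for k in range(1, steps + 1):
        t = k / steps
        if sdf(at(t)) <= 0.0:            # sign change: bisect [lo, t]
            hi = t
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if sdf(at(mid)) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return hi
        lo = t
    return None

t = first_hit(sphere_sdf, (-2.0, 0.0, 0.0), (2.0, 0.0, 0.0))
print(round(t, 3))  # ~0.25: the point enters the unit sphere at x = -1
```

A fixed step count can still miss thin features at high velocity; the paper's contribution is precisely a robust continuous formulation, and an octree over the field's cells replaces this uniform march for efficiency.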
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC, again because of the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
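The instant-radiosity gather at the heart of such a pipeline can be sketched as a sum over virtual point lights (VPLs), each contributing Lambertian cosine terms with inverse-square falloff. The function, the tuple layout, and the distance clamp are illustrative assumptions; the paper's multi-resolution VPL sampling and GPU splatting are not reproduced here.

```python
import math

def vpl_indirect(point, normal, vpls, min_dist2=0.01):
    """Indirect irradiance at a surface point from a list of VPLs.

    Each VPL is (position, normal, flux).  The squared distance is clamped
    (a standard instant-radiosity trick) to avoid singular bright spikes
    when a VPL sits very close to the receiving point.
    """
    total = 0.0
    for pos, vnormal, flux in vpls:
        d = [pos[i] - point[i] for i in range(3)]
        r2 = max(sum(x * x for x in d), min_dist2)
        r = math.sqrt(r2)
        w = [x / r for x in d]                                   # direction to the VPL
        cos_recv = max(0.0, sum(normal[i] * w[i] for i in range(3)))
        cos_emit = max(0.0, -sum(vnormal[i] * w[i] for i in range(3)))
        total += flux * cos_recv * cos_emit / r2
    return total

# One VPL one unit above the receiving point, the two surfaces facing each other:
vpls = [((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), 1.0)]
print(vpl_indirect((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), vpls))  # 1.0
```

Because the cost is linear in the number of VPLs, the paper's strategy of shrinking that number via multi-resolution sampling is what makes the gather feasible on a mobile SoC.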
Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
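The rich-client model above synchronizes only a compact JSON representation of the renderers' state rather than pushing rendered pixels. A minimal sketch of that idea follows; the field names are invented stand-ins for the XTK renderer state, and the actual transport (the Google Drive Realtime API) is replaced here by plain string payloads.

```python
import json

def encode_state(camera, slice_index, window, level) -> str:
    """Serialize a compact renderer state as the shared JSON model.
    (Hypothetical fields standing in for the XTK renderer state.)"""
    return json.dumps({"camera": camera, "slice": slice_index,
                       "window": window, "level": level})

def apply_state(renderer: dict, payload: str) -> dict:
    """Merge a remote state payload into a local renderer-state dict."""
    renderer.update(json.loads(payload))
    return renderer

# A remote collaborator moves the camera and scrolls to slice 42;
# the local client applies the synchronized state.
local = {"camera": [0, 0, 5], "slice": 0, "window": 400, "level": 40}
remote = encode_state([1, 2, 3], 42, 350, 50)
print(apply_state(local, remote)["slice"])  # 42
```

Keeping the shared model this small is what makes real-time synchronization cheap: each client already holds the full image data, so only view parameters ever cross the wire.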
Real-time synthetic vision cockpit display for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.
1999-07-01
Low-cost, high-performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated to the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m, 95% positioning; sub-degree pointing), high-integrity (99.99999% position-error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout the en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions that integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low-cost, high-performance guidance and situational awareness in all phases of flight.
Near-Real Time Cloud Retrievals from Operational and Research Meteorological Satellites
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Nguyen, Louis; Palilonda, Rabindra; Heck, Patrick W.; Spangenberg, Douglas A.; Doelling, David R.; Ayers, J. Kirk; Smith, William L., Jr.; Khaiyer, Mandana M.; Trepte, Qing Z.;
2008-01-01
A set of cloud retrieval algorithms developed for CERES and applied to MODIS data have been adapted to analyze other satellite imager data in near-real time. The cloud products, including single-layer cloud amount, top and base height, optical depth, phase, effective particle size, and liquid and ice water paths, are being retrieved from GOES- 10/11/12, MTSAT-1R, FY-2C, and Meteosat imager data as well as from MODIS. A comprehensive system to normalize the calibrations to MODIS has been implemented to maximize consistency in the products across platforms. Estimates of surface and top-of-atmosphere broadband radiative fluxes are also provided. Multilayered cloud properties are retrieved from GOES-12, Meteosat, and MODIS data. Native pixel resolution analyses are performed over selected domains, while reduced sampling is used for full-disk retrievals. Tools have been developed for matching the pixel-level results with instrumented surface sites and active sensor satellites. The calibrations, methods, examples of the products, and comparisons with the ICESat GLAS lidar are discussed. These products are currently being used for aircraft icing diagnoses, numerical weather modeling assimilation, and atmospheric radiation research and have potential for use in many other applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minnis, Patrick
2013-06-28
During the period March 1997 – February 2006, the Principal Investigator and his research team co-authored 47 peer-reviewed papers and presented at least 138 papers at conferences, meetings, and workshops that were supported either in whole or in part by this agreement. We developed a state-of-the-art satellite cloud processing system that generates cloud properties over the Atmospheric Radiation Measurement (ARM) surface sites and surrounding domains in near-real time and outputs the results on the world wide web in image and digital formats. When the products are quality controlled, they are sent to the ARM archive for further dissemination. These products and raw satellite images can be accessed at http://cloudsgate2.larc.nasa.gov/cgi-bin/site/showdoc?docid=4&cmd=field-experiment-homepage&exp=ARM and are used by many in the ARM science community. The algorithms used in this system to generate cloud properties were validated and improved by the research conducted under this agreement. The team supported at least 11 ARM-related or ARM-supported field experiments by providing near-real-time satellite imagery, cloud products, model results, and interactive analyses for mission planning, execution, and post-experiment scientific analyses. Comparisons of cloud properties derived from satellite, aircraft, and surface measurements were used to evaluate uncertainties in the cloud properties. Multiple-angle satellite retrievals were used to determine the influence of cloud structural and microphysical properties on the exiting radiation field.
Abercrombie, Robert K; Sheldon, Frederick T; Ferragut, Erik M
2014-06-24
A system evaluates reliability, performance and/or safety by automatically assessing the targeted system's requirements. A cost metric quantifies the impact of failures as a function of failure cost per unit of time. The metrics or measurements may render real-time (or near real-time) outcomes by initiating active response against one or more high ranked threats. The system may support or may be executed in many domains including physical domains, cyber security domains, cyber-physical domains, infrastructure domains, etc. or any other domains that are subject to a threat or a loss.
NASA Technical Reports Server (NTRS)
Goodman, S. J.; Christian, H. J.; Boccippio, D. J.; Koshak, W. J.; Cecil, D. J.; Arnold, James E. (Technical Monitor)
2002-01-01
The ThOR mission uses a lightning mapping sensor in geostationary Earth orbit to provide continuous observations of thunderstorm activity over the Americas and nearby oceans. The link between lightning activity and cloud updrafts is the basis for total lightning observations indicating the evolving convective intensification and decay of storms. ThOR offers a national operational demonstration of the utility of real-time total lightning mapping for earlier and more reliable identification of potentially severe and hazardous storms. Regional pilot projects have already demonstrated that dominant in-cloud lightning and increasing in-cloud flash rates precede severe weather at the surface by tens of minutes. ThOR is currently planned for launch in 2005 on a commercial or research satellite. Real-time data will be provided to selected NWS Weather Forecast Offices and National Centers (EMC/AWC/SPC) for evaluation.
NASA Astrophysics Data System (ADS)
Feltz, Wayne; Griffin, Sarah; Velden, Christopher; Zipser, Ed; Cecil, Daniel; Braun, Scott
2017-04-01
The purpose of this presentation is to identify in-flight hazards to high-altitude aircraft, namely the Global Hawk. The Global Hawk was used during each September of 2012-2016 as part of two NASA-funded Hurricane Sentinel-3 field campaigns to over-fly hurricanes in the Atlantic Ocean. This talk identifies the cause of severe turbulence experienced over Hurricane Emily (2005) and how a combination of NOAA-funded GOES-R algorithm-derived cloud-top heights and tropical overshooting tops (using GOES-13/SEVIRI imager radiances) and lightning information is used to identify areas of potential turbulence for near-real-time navigation decision support. Several examples will demonstrate how the Global Hawk pilots remotely received and used real-time satellite-derived cloud and lightning detection information to keep the aircraft safely above clouds and avoid regions of potential turbulence.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code yields the same simulation results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform offers high efficiency, low cost and good simulation results.
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon. This technique is a useful step toward computer-aided surgery, a field that is likely to progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
Matching rendered and real world images by digital image processing
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality of the two sources. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras that introduce image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
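A minimal sketch of the matching step described above, assuming the slanted-edge measurement has already yielded a Gaussian PSF width in pixels; the `sigma_px` value and the synthetic edge image below are illustrative, not the paper's data:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel truncated at ~3 sigma, normalised to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def match_rendered_to_real(rendered, sigma_px):
    """Blur a rendered image with a separable Gaussian approximating the
    real camera system's PSF, so its sharpness matches the captured photo."""
    k = gaussian_kernel(sigma_px)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, rendered)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

# Illustrative input: a perfectly sharp synthetic edge, blurred with sigma = 1.5 px
rendered = np.zeros((64, 64))
rendered[:, 32:] = 255.0
matched = match_rendered_to_real(rendered, sigma_px=1.5)
```

Contrast near the edge of `matched` can then be compared against the same region in the real photograph, as the paper does for its test regions.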
NASA Astrophysics Data System (ADS)
Vicente, Gilberto A.
An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near-real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height and directional information to find the best match between the simulated and the observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Centers (VAACs) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis and SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from the AVHRR instruments on NOAA POES-16, 17, 18 and 19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial condition inputs such as location and time of the volcanic eruption, followed by the automatic real-time tracking of all the satellite data available, subsequent activation of the iterative approach and the data/product delivery process in numerical and graphical format for operational applications.
NASA Astrophysics Data System (ADS)
Tramutoli, V.; Filizzola, C.; Marchese, F.; Paciello, R.; Pergola, N.; Sannazzaro, F.
2010-12-01
Volcanic ash clouds, besides being an environmental issue, represent a serious problem for air traffic and an important economic threat for aviation companies. During the recent volcanic crisis caused by the April-May 2010 eruption of Eyjafjöll (Iceland), ash clouds became a real problem for ordinary citizens as well: during the first days of the eruption thousands of flights were cancelled, disrupting hundreds of thousands of passengers. Satellite remote sensing proved to be a crucial tool for monitoring this kind of event, which spreads over thousands of kilometres with very rapid space-time dynamics. Weather satellites especially, thanks to their high temporal resolution, may make a fundamental contribution by providing frequently updated information. However, in this particular case the ash cloud was accompanied by a sudden and significant emission of water vapour, due to the ice melting of the Eyjafjallajökull glacier, making satellite ash detection and discrimination very hard, especially in the first few days of the eruption, exactly when accurate information was most needed to support emergency management. Among the satellite-based techniques for near-real-time detection and tracking of ash clouds, the RST (Robust Satellite Technique) approach, formerly named RAT (Robust AVHRR Technique), has long been proposed, demonstrating high performance both in terms of reliability and sensitivity. In this paper, results achieved by using RST-based detection schemes applied during the Eyjafjöll eruption are presented. MSG-SEVIRI (Meteosat Second Generation - Spinning Enhanced Visible and Infrared Imager) records, with a temporal sampling of 15 minutes, were used, applying a standard as well as an advanced RST configuration that includes the use of the SO2 absorption band together with TIR and MIR channels. Main outcomes, limits and possible future improvements are also discussed.
Improving Patient Safety in Hospitals through Usage of Cloud Supported Video Surveillance.
Dašić, Predrag; Dašić, Jovan; Crvenković, Bojan
2017-04-15
Patient safety in hospitals is of equal importance as providing treatments and urgent healthcare. With the development of Cloud technologies and Big Data analytics, it is possible to employ VSaaS technology virtually anywhere, for any given security purpose. Given these benefits, in this paper we give an overview of the existing cloud surveillance technologies that can be implemented to improve patient safety. Modern VSaaS systems provide higher elasticity and project scalability in dealing with real-time information processing. Modern surveillance technologies can prove to be an effective tool for prevention of patient falls, undesired movement and tampering with attached life-supporting devices. Given the large number of patients who require constant supervision, a cloud-based monitoring system can dramatically reduce the associated costs. It provides continuous real-time monitoring, increased overall security and safety, improved staff productivity, prevention of dishonest claims and long-term digital archiving. Patient safety is a growing issue that can be improved with the usage of high-end centralised surveillance systems, allowing the staff to focus more on treating health issues rather than keeping a watchful eye on potential incidents.
Real-Time High-Dynamic Range Texture Mapping
2001-01-01
the renderings produced by radiosity and global illumination algorithms. As a particular example, Greg Ward’s RADIANCE synthetic imaging system [32...in software only. [26] presented a technique for performing Ward’s tone reproduction algorithm interactively to visualize radiosity solutions
Progress towards MODIS and VIIRS Cloud Fraction Data Record Continuity
NASA Astrophysics Data System (ADS)
Ackerman, S. A.; Frey, R.; Holz, R.; Platnick, S. E.; Heidinger, A. K.
2016-12-01
Satellite-derived clear-sky vs. cloudy-sky discrimination at the pixel scale is an important input parameter used in many real-time applications. Cloud fractions, resulting from integrating over time and space, are also critical to the study of recent decadal climate changes. The NASA NPOESS Preparatory Project (NPP) has funded a science team to develop and study the ability to make continuous climate records from MODIS (2000-2020) and VIIRS (2012-2030). The MODAWG project, led by Dr. Steve Platnick of NASA/GSFC, combines elements of the MODIS processing system and the NOAA Algorithm Working Group (AWG) to achieve this goal. This presentation will focus on the cloud masking aspects of MODAWG, derived primarily from the MODIS cloud mask (MOD35). Challenges to continuity of cloud detection due to differences in instrument characteristics will be discussed. Cloud mask results from use of the same (continuity) algorithm will be shown for both MODIS and VIIRS, including comparisons to collocated CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) cloud data.
Reprint of "How do components of real cloud water affect aqueous pyruvate oxidation?"
NASA Astrophysics Data System (ADS)
Boris, Alexandra J.; Desyaterik, Yury; Collett, Jeffrey L.
2015-01-01
Chemical oxidation of dissolved volatile or semi-volatile organic compounds within fog and cloud droplets in the atmosphere could be a major pathway for secondary organic aerosol (SOA) formation. This proposed pathway consists of: (1) dissolution of organic chemicals from the gas phase into a droplet; (2) reaction with an aqueous phase oxidant to yield low volatility products; and (3) formation of particle phase organic matter as the droplet evaporates. The common approach to simulating aqueous SOA (aqSOA) reactions is photo-oxidation of laboratory standards in pure water. Reactions leading to aqSOA formation should be studied within real cloud and fog water to determine whether additional competing processes might alter apparent rates of reaction as indicated by rates of reactant loss or product formation. To evaluate and identify the origin of any cloud water matrix effects on one example of observed aqSOA production, pyruvate oxidation experiments simulating aqSOA formation were monitored within pure water, real cloud water samples, and an aqueous solution of inorganic salts. Two analysis methods were used: online electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR-ToF-MS), and offline anion exchange chromatography (IC) with quantitative conductivity and qualitative ESI-HR-ToF-MS detection. The apparent rate of oxidation of pyruvate was slowed in cloud water matrices: overall measured degradation rates of pyruvate were lower than in pure water. This can be at least partially accounted for by the observed formation of pyruvate from reactions of other cloud water components. Organic constituents of cloud water also compete for oxidants and/or UV light, contributing to the observed slowed degradation rates of pyruvate. The oxidation of pyruvate was not significantly affected by the presence of inorganic anions (nitrate and sulfate) at cloud-relevant concentrations. 
Future bulk studies of aqSOA formation reactions using simplified simulated cloud solutions and model estimates of generated aqSOA mass should take into account possible generation of, or competition for, oxidant molecules by organic components found in the complex matrices typically associated with real atmospheric water droplets. Additionally, it is likely that some components of real atmospheric waters have not yet been identified as aqSOA precursors, but could be distinguished through further simplified bulk oxidations of known atmospheric water components.
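The apparent pseudo-first-order degradation rates compared above are typically extracted from the slope of ln(C) versus time. A minimal sketch of that fit, with purely illustrative rate constants rather than the paper's measurements:

```python
import numpy as np

def apparent_rate(t_min, conc):
    """Fit ln(C) vs. t by least squares; the negative slope is the
    apparent pseudo-first-order loss rate constant (min^-1)."""
    slope, _intercept = np.polyfit(t_min, np.log(conc), 1)
    return -slope

t = np.linspace(0.0, 40.0, 5)                 # sampling times, minutes
pure_water  = 100.0 * np.exp(-0.050 * t)      # synthetic decay in pure water
cloud_water = 100.0 * np.exp(-0.030 * t)      # slower apparent decay in a matrix

k_pure = apparent_rate(t, pure_water)
k_cloud = apparent_rate(t, cloud_water)       # k_cloud < k_pure, as observed
```

A slower apparent rate in the matrix, as here, is consistent with either competition for oxidant or in-situ formation of the reactant, which is why the paper distinguishes those mechanisms.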
Ash Emissions and Risk Management in the Pacific Ocean
NASA Astrophysics Data System (ADS)
Steensen, T. S.; Webley, P. W.; Stuefer, M.
2012-12-01
Located in the 'Ring of Fire', regions and communities around the Pacific Ocean often face volcanic eruptions and subsequent ash emissions. Volcanic ash clouds pose a significant risk to aviation, especially in the highly-frequented flight corridors around active volcano zones like Indonesia or Eastern Russia and the Alaskan Aleutian Islands. To mitigate and manage such events, a detailed quantitative analysis using a range of scientific measurements, including satellite data and Volcanic Ash Transport and Dispersion (VATD) model results, needs to be conducted in real-time. For the case study of the Sarychev Peak eruption in Russia's Kurile Islands during 2009, we compare ash loading and dispersion from Weather Research and Forecast model with online Chemistry (WRF-Chem) results with satellite data of the eruption. These parameters are needed for the real-time management of volcanic crises to outline no-fly zones and to predict the areas that the ash is most likely to reach in the near future. In the early stages after the eruption, an international group with representatives from the Kamchatkan and Sachalin Volcanic Eruption Response Teams (KVERT, SVERT), the National Aeronautics and Space Administration (NASA), and the Alaska Volcano Observatory (AVO) published early research on the geological and geophysical characteristics of the eruption and the behavior of the resulting ash clouds. The study presented here is a follow-up project aimed to implement VATD model results and satellite data retrospectively to demonstrate the possibilities to develop this approach in real-time for future eruptions. Our research finds that, although meteorological cloud coverage is high in these geographical regions and such clouds can obscure most of the ash and prevent satellites from detecting it, the two approaches compare well and supplement each other in reducing the risk posed by volcanic eruptions.
We carry out spatial-extent and absolute quantitative comparisons and analyze the sensitivity of model inputs, such as eruption rate and vertical particle size distributions. Our analysis shows that comparisons between real-time satellite observations and VATD model simulations are a complex and difficult process, and we present several methods that could be used to reduce the hazards and be useful in risk assessments.
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
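A minimal sketch of Radius Outlier Removal, one of the filters named above, using a brute-force neighbour search; the point data and thresholds are invented for illustration, and a KD-tree would replace the O(n²) distance matrix at Kinect-scale cloud sizes:

```python
import numpy as np

def radius_outlier_removal(points, radius, min_neighbors):
    """Keep only points that have at least `min_neighbors` other points
    within `radius` of them; isolated sensor-noise points are dropped."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # pairwise distances
    neighbors = (dist < radius).sum(axis=1) - 1   # exclude the point itself
    return points[neighbors >= min_neighbors]

rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.01, size=(100, 3))    # dense surface patch
outlier = np.array([[1.0, 1.0, 1.0]])             # isolated noise point
cleaned = radius_outlier_removal(np.vstack([cluster, outlier]),
                                 radius=0.05, min_neighbors=5)
```

In a pipeline like the one described, this filter would run per frame before the median and inter-frame averaging stages.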
Integration of drug dosing data with physiological data streams using a cloud computing paradigm.
Bressan, Nadja; James, Andrew; McGregor, Carolyn
2013-01-01
Many drugs are used during the provision of intensive care for the preterm newborn infant. Recommendations for drug dosing in newborns depend upon data from population based pharmacokinetic research. There is a need to be able to modify drug dosing in response to the preterm infant's response to the standard dosing recommendations. The real-time integration of physiological data with drug dosing data would facilitate individualised drug dosing for these immature infants. This paper proposes the use of a novel computational framework that employs real-time, temporal data analysis for this task. Deployment of the framework within the cloud computing paradigm will enable widespread distribution of individualized drug dosing for newborn infants.
Laboratory simulations of cumulus cloud flows explain the entrainment anomaly
NASA Astrophysics Data System (ADS)
Narasimha, Roddam; Diwan, Sourabh S.; Subrahmanyam, Duvvuri; Sreenivas, K. R.; Bhat, G. S.
2010-11-01
In the present laboratory experiments, cumulus cloud flows are simulated by starting plumes and jets subjected to off-source heat addition in amounts that are dynamically similar to latent heat release due to condensation in real clouds. The setup permits incorporation of features like atmospheric inversion layers and the active control of off-source heat addition. Herein we report, for the first time, simulation of five different cumulus cloud types (and many shapes), including three genera and three species (WMO Atlas 1987), which show striking resemblance to real clouds. It is known that the rate of entrainment in cumulus cloud flows is much less than that in classical plumes - the main reason for the failure of early entrainment models. Some of the previous studies on steady-state jets and plumes (done in a similar setup) have attributed this anomaly to the disruption of the large-scale turbulent structures upon the addition of off-source heat. We present estimates of entrainment coefficients from these measurements which show a qualitatively consistent variation with height. We propose that this explains the observed entrainment anomaly in cumulus clouds; further experiments are planned to address this question in the context of starting jets and plumes.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, by mounting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
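The synthetic-image generation described above amounts to a forward projection of the point cloud through an ideal pinhole camera. The sketch below is an illustration with invented pose and intrinsics, not the authors' implementation, and uses nearest-pixel splatting without a depth buffer:

```python
import numpy as np

def render_synthetic(points, colors, R, t, f, w, h):
    """Project 3D point-cloud data into an ideal, distortion-free pinhole
    camera (focal length f in pixels, principal point at the image centre)
    whose pose (R, t) matches the real image's exterior orientation."""
    cam = points @ R.T + t                      # world -> camera coordinates
    front = cam[:, 2] > 0                       # keep points in front of camera
    cam, col = cam[front], colors[front]
    u = np.round(f * cam[:, 0] / cam[:, 2] + w / 2).astype(int)
    v = np.round(f * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
    img = np.zeros((h, w, 3))
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[ok], u[ok]] = col[ok]                 # nearest-point splat, no z-buffer
    return img

# Hypothetical scene: one red point 5 m in front of an axis-aligned camera
pts = np.array([[0.0, 0.0, 5.0]])
cols = np.array([[1.0, 0.0, 0.0]])
synthetic = render_synthetic(pts, cols, np.eye(3), np.zeros(3), f=100, w=64, h=64)
```

The resulting image lands at the principal point, as expected for a point on the optical axis; matching then proceeds between this synthetic view and the smartphone photograph.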
A data-management system using sensor technology and wireless devices for port security
NASA Astrophysics Data System (ADS)
Saldaña, Manuel; Rivera, Javier; Oyola, Jose; Manian, Vidya
2014-05-01
Sensor technologies such as infrared sensors, hyperspectral imaging and video camera surveillance have proven viable in port security. Drawing from sources such as infrared sensor data, digital camera images and processed hyperspectral images, this article explores the implementation of a real-time data delivery system. In an effort to improve the manner in which anomaly detection data is delivered to interested parties in port security, this system explores how a client-server architecture can provide protected access to data, reports, and device status. Sensor data and hyperspectral image data will be kept in a monitored directory, where the system will link it to existing users in the database. Since this system will render processed hyperspectral images that are dynamically added to the server - which often occupy a large amount of space - the resolution of these images is trimmed down to around 1024×768 pixels. Any image change or data modification originating from a sensor will trigger a message to all users associated with that sensor. These messages will be sent to the corresponding users through automatic email generation and through a push notification using Google Cloud Messaging for Android. Moreover, this paper presents the complete architecture for data reception from the sensors, processing and storage, and discusses how users of this system, such as port security personnel, can benefit from this service to receive secure real-time notifications when their designated sensors have detected anomalies and/or to gain remote access to results from processed hyperspectral imagery relevant to their assigned posts.
Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon
2017-09-07
Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible.
Retrieval of Ice Cloud Properties Using Variable Phase Functions
NASA Astrophysics Data System (ADS)
Heck, Patrick W.; Minnis, Patrick; Yang, Ping; Chang, Fu-Lung; Palikonda, Rabindra; Arduini, Robert F.; Sun-Mack, Sunny
2009-03-01
An enhancement to NASA Langley's Visible Infrared Solar-infrared Split-window Technique (VISST) is developed to identify and account for situations in which errors are induced by assuming smooth ice crystals. The retrieval scheme incorporates new ice cloud phase functions that utilize hexagonal crystals with roughened surfaces. In some situations, cloud optical depths are reduced and, hence, cloud heights are increased. Cloud effective particle size also changes with the roughened ice crystal models, which has varied effects on the calculation of ice water path. Once validated and expanded, the new approach will be integrated into the CERES MODIS algorithm and real-time retrievals at Langley.
The character of drift spreading of artificial plasma clouds in the middle-latitude ionosphere
NASA Astrophysics Data System (ADS)
Blaunstein, N.
1996-02-01
Nonlinear equations describing the evolution of plasma clouds with real initial sizes, along and across the geomagnetic field B, which drift in the ionosphere in the presence of an ambient electric field and a neutral wind have been solved and analysed. An ionospheric model close to the real conditions of the middle-latitude ionosphere is introduced, taking into account the altitude dependence of the transport coefficients and background ionospheric plasma. The striation of the initial plasma cloud into a cluster of plasmoids, stretched along the field B, is obtained. The process of dispersive splitting of the initial plasma cloud can be understood in terms of gradient drift instability (GDI) as a most probable striation mechanism. The dependence of the characteristic time of dispersive splitting on the value of the ambient electric field, the initial plasma disturbance in the cloud and its initial sizes was investigated. The stretching criterion, necessary for the plasma cloud's striation is obtained. The possibility of the drift stabilization effect arising from azimuthal drift velocity shear, obtained by Drake et al. [1988], is examined for various parameters of the barium cloud and the background ionospheric conditions. A comparison with experimental data on the evolution of barium clouds in rocket experiments at the height of the lower ionosphere is made.
Measurement of Thunderstorm Cloud-Top Parameters Using High-Frequency Satellite Imagery
1978-01-01
A short wave was present well to the south of this system, approximately 2000 km west of Baja California. Two distinct flow patterns were present, one...view can be observed in near real time whereas radar observations, although excellent for local purposes, involve substantial errors when composited...on a large scale. The time delay in such large-scale compositing is critical when attempting to monitor convective cloud systems for a potential
Real-time stereographic display of volumetric datasets in radiology
NASA Astrophysics Data System (ADS)
Wang, Xiao Hui; Maitz, Glenn S.; Leader, J. K.; Good, Walter F.
2006-02-01
A workstation for testing the efficacy of stereographic displays for applications in radiology has been developed, and is currently being tested on lung CT exams acquired for lung cancer screening. The system exploits pre-staged rendering to achieve real-time dynamic display of slabs, where slab thickness, axial position, rendering method, brightness and contrast are interactively controlled by viewers. Stereo presentation is achieved by use of either frame-swapping images or cross-polarizing images. The system enables viewers to toggle between alternative renderings such as one using distance-weighted ray casting by maximum-intensity-projection, which is optimal for detection of small features in many cases, and ray casting by distance-weighted averaging, for characterizing features once detected. A reporting mechanism is provided which allows viewers to use a stereo cursor to measure and mark the 3D locations of specific features of interest, after which a pop-up dialog box appears for entering findings. The system's impact on performance is being tested on chest CT exams for lung cancer screening. Radiologists' subjective assessments have been solicited for other kinds of 3D exams (e.g., breast MRI) and their responses have been positive. Objective estimates of changes in performance and efficiency, however, must await the conclusion of our study.
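The two rendering modes described above can be sketched, in a much-simplified form that omits the distance weighting and the stereo pair generation, as reductions over an axial slab (the volume and voxel values below are invented):

```python
import numpy as np

def render_slab(volume, z0, thickness, mode="mip"):
    """Render an axial slab from a CT volume along the viewing axis:
    'mip' takes the maximum intensity along each ray (suited to detecting
    small bright features), 'avg' averages along the ray (suited to
    characterising a feature once detected)."""
    slab = volume[z0:z0 + thickness]
    return slab.max(axis=0) if mode == "mip" else slab.mean(axis=0)

# Hypothetical 8-slice volume with one bright voxel standing in for a nodule
vol = np.zeros((8, 4, 4))
vol[3, 1, 1] = 10.0
mip = render_slab(vol, 0, 8, "mip")   # nodule retained at full intensity
avg = render_slab(vol, 0, 8, "avg")   # nodule diluted by averaging
```

Toggling between the two reductions, as the workstation allows, trades detectability for a less noisy depiction of the surrounding tissue; the pre-staged renderings make this toggle interactive.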
A cloud-resolving model study of aerosol-cloud correlation in a pristine maritime environment
NASA Astrophysics Data System (ADS)
Nishant, Nidhi; Sherwood, Steven C.
2017-06-01
In convective clouds, satellite-observed deepening or increased amount of clouds with increasing aerosol concentration has been reported and is sometimes interpreted as aerosol-induced invigoration of the clouds. However, such correlations can be affected by meteorological factors that affect both aerosol and clouds, as well as observational issues. In this study, we examine the behavior in a 660 × 660 km2 region of the South Pacific during June 2007, previously found by Koren et al. (2014) to show strong correlation between cloud fraction, cloud top pressure, and aerosols, using a cloud-resolving model with meteorological boundary conditions specified from a reanalysis. The model assumes constant aerosol loading, yet reproduces vigorous clouds at times of high real-world aerosol concentrations. Days with high- and low-aerosol loading exhibit deep-convective and shallow clouds, respectively, in both observations and the simulation. Synoptic analysis shows that vigorous clouds occur at times of strong surface troughs, which are associated with high winds and advection of boundary layer air from the Southern Ocean where sea-salt aerosol is abundant, thus accounting for the high correlation. Our model results show that aerosol-cloud relationships can be explained by coexisting but independent wind-aerosol and wind-cloud relationships and that no cloud condensation nuclei effect is required.
Particle nonuniformity effects on particle cloud flames in low gravity
NASA Technical Reports Server (NTRS)
Berlad, A. L.; Tangirala, V.; Seshadri, K.; Facca, L. T.; Ogrin, J.; Ross, H.
1991-01-01
Experimental and analytical studies of particle cloud combustion at reduced gravity reveal the substantial roles that particle cloud nonuniformities may play in particle cloud combustion. Macroscopically uniform, quiescent particle cloud systems (at very low gravitational levels and above) sustain processes which can render them nonuniform on both macroscopic and microscopic scales. It is found that a given macroscopically uniform, quiescent particle cloud flame system can display a range of microscopically nonuniform features which lead to a range of combustion features. Microscopically nonuniform particle cloud distributions are experimentally difficult to detect and characterize. A uniformly distributed lycopodium cloud of particle-enriched microscopic nonuniformities in reduced gravity displays a range of burning velocities for any given overall stoichiometry. The range of observed and calculated burning velocities corresponds to the range of particle-enriched concentrations within a characteristic microscopic nonuniformity. Sedimentation effects (even in reduced gravity) are also examined.
A spatially augmented reality sketching interface for architectural daylighting design.
Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara
2011-01-01
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE Published by the IEEE Computer Society
Adapting CALIPSO Climate Measurements for Near Real Time Analyses and Forecasting
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Trepte, Charles R.; Winker, David M.; Avery, Melody A.; Campbell, James; Hoff, Ray; Young, Stuart; Getzewich, Brian J.; Tackett, Jason L.; Kar, Jayanta
2011-01-01
The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission was originally conceived and designed as a climate measurements mission, with considerable latency between data acquisition and the release of the level 1 and level 2 data products. However, the unique nature of the CALIPSO lidar backscatter profiles quickly led to the qualitative use of CALIPSO's near real time (i.e., "expedited") lidar data imagery in several different forecasting applications. To enable quantitative use of their near real time analyses, the CALIPSO project recently expanded their expedited data catalog to include all of the standard level 1 and level 2 lidar data products. Also included is a new cloud cleared level 1.5 profile product developed for use by operational forecast centers for verification of aerosol predictions. This paper describes the architecture and content of the CALIPSO expedited data products. The fidelity and accuracy of the expedited products are assessed via comparisons to the standard CALIPSO data products.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal/external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
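The out-of-core, view-dependent scheme described above hinges on selecting an LOD level per scene node from its projected screen-space error. A minimal sketch of that selection logic, under a hypothetical error model in which each coarser level doubles the object-space error (function and parameter names are illustrative, not from the paper):

```python
import math

def select_lod(distance, base_error, screen_height_px, fov_y_rad, max_level,
               pixel_tolerance=1.0):
    """Pick the coarsest LOD level whose projected geometric error stays
    under `pixel_tolerance` pixels. Hypothetical model: level 0 has
    object-space error `base_error`; each coarser level doubles it."""
    if distance <= 0:
        return 0  # camera inside the node: use the finest level
    # object-space size of one pixel at this distance (pinhole camera)
    world_per_pixel = 2.0 * distance * math.tan(fov_y_rad / 2.0) / screen_height_px
    allowed_error = pixel_tolerance * world_per_pixel
    level, error = 0, base_error
    # coarsen while the next level's error would still be acceptable
    while level < max_level and error * 2.0 <= allowed_error:
        error *= 2.0
        level += 1
    return level
```

Nearby nodes resolve to fine levels, distant ones to coarse levels, which is what keeps the working set small enough for out-of-core streaming.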
An Overview of Cloud Implementation in the Manufacturing Process Life Cycle
NASA Astrophysics Data System (ADS)
Kassim, Noordiana; Yusof, Yusri; Hakim Mohamad, Mahmod Abd; Omar, Abdul Halim; Roslan, Rosfuzah; Aryanie Bahrudin, Ida; Ali, Mohd Hatta Mohamed
2017-08-01
The advancement of information and communication technology (ICT) has changed the structure and functions of various sectors, and it has also begun to play a significant role in modern manufacturing through computerized machining and cloud manufacturing. It is important for industries to keep up with current ICT trends in order to survive and remain competitive. Cloud manufacturing is an approach that aims to realize real-world manufacturing processes by applying the basic concepts of cloud computing to the manufacturing domain, yielding what is called cloud-based manufacturing (CBM) or cloud manufacturing (CM). Cloud manufacturing has been recognized as a new paradigm for manufacturing businesses. In cloud manufacturing, companies need to support flexible and scalable business processes on the shop floor as well as in the software itself. This paper provides an overview of the implementation of cloud manufacturing in modern manufacturing processes, analyzes the process-enactment requirements of cloud manufacturing, and proposes a STEP-NC concept that can serve as a tool to support the cloud manufacturing concept.
A Nationwide Experimental Multi-Gigabit Network
2003-03-01
television and cinema, and to real-time interactive teleconferencing. There is another variable which affects this happy growth in network bandwidth and...render large scientific data sets with interactive frame rates on the desktop or in an immersive virtual reality (VR) environment. In our design, we
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions, the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented, along with an efficient implementation that renders the model real-time capable.
NASA Astrophysics Data System (ADS)
Alby, E.; Elter, R.; Ripoche, C.; Quere, N.; de Strasbourg, INSA
2013-07-01
In a geopolitically complex context such as the Gaza Strip, the enhancement of an archaeological site must still be addressed. The site in question is the monastery of St. Hilarion. Enabling the cultural appropriation of a place with several identified phases of occupation requires extensive archaeological excavation. Excavating in this geographic area means carrying out emergency excavations, so the aim of such a project can be questioned for each mission. Real estate pressure also motivates the documentation, because the high population density does not allow systematic study of the subsurface before construction projects; indeed, the site was discovered during the construction of a road. The site measures 150 m by 80 m and is located on a sand dune, 300 m from the sea. To carry out the survey, four different levels of detail were defined for terrestrial photogrammetry. The first level covers small elements such as capitals, column fragments, and tiles. Modeling small objects requires the acquisition of very dense point clouds (on average 1 point per 1 mm). The object must then fill as much of the camera sensor as possible, while keeping a reference pattern in the field of view for scaling the generated point cloud. The pictures are taken at a short distance from the object, using the images at full resolution. The main obstacle to modeling the objects is noise, partly due to the studied materials (sand, smooth rock), which do not favor the detection of good-quality interest points. Pre-processing of the cloud must be done meticulously, since removing points from the surface of a small object creates a hole and a loss of information needed for the resulting mesh. Level 2 focuses on stratigraphic units such as mosaics. The monastery of St. Hilarion contains thirteen floors, one of which was documented years ago with silver-halide photographs that were later scanned.
The pavements are modeled to obtain a three-dimensional model of each mosaic, in particular to analyze any subsidence to which it may be subjected. The dense point cloud can go further by capturing the geometric shapes of the pavement. Meshing the high-density point cloud with colorization yields a cloud sufficient for the final rendering. Levels 3 and 4 cover the survey and representation of loci and sectors. Their modeling can be done with colored meshes, meshes textured by a generic pattern, or geometric primitives. This method requires segmenting simple geometric elements and creating a surface geometry by analysis of the sampled points. Statistical tools allow plane extraction to meet the operator's requirements, so that the quality of the final rendering can be monitored quantitatively. Each level imposes constraints on survey accuracy and on the types of representation derived from the point clouds, which are detailed in the complete article.
Preparation of Ultracold Atom Clouds at the Shot Noise Level.
Gajdacz, M; Hilliard, A J; Kristensen, M A; Pedersen, P L; Klempt, C; Arlt, J J; Sherson, J F
2016-08-12
We prepare number stabilized ultracold atom clouds through the real-time analysis of nondestructive images and the application of feedback. In our experiments, the atom number N∼10^{6} is determined by high precision Faraday imaging with uncertainty ΔN below the shot noise level, i.e., ΔN
Computation offloading for real-time health-monitoring devices.
Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid
2016-08-01
Among the major challenges in the development of real-time wearable health monitoring systems is to optimize battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.
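The partitioning decision described above reduces, in its simplest form, to an energy comparison between computing a stage locally on the wearable and shipping its input to the phone or cloud. A minimal sketch under a hypothetical energy model (the function name and device constants are illustrative, not from the paper):

```python
def should_offload(cycles, data_bytes, energy_per_cycle_nj, tx_energy_per_byte_nj):
    """Return True when transmitting the stage's input data costs less
    energy than computing the stage locally. All parameters are
    hypothetical device constants:
      cycles              -- CPU cycles the stage needs on the wearable
      data_bytes          -- bytes that must be sent if offloaded
      energy_per_cycle_nj -- nanojoules per local CPU cycle
      tx_energy_per_byte_nj -- nanojoules per transmitted byte
    """
    local_nj = cycles * energy_per_cycle_nj
    radio_nj = data_bytes * tx_energy_per_byte_nj
    return radio_nj < local_nj
```

A dynamic scheme like the paper's would re-evaluate such a test at runtime as data volume, radio conditions, and accuracy requirements change.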
Volcanic Ash and SO2 Monitoring Using Suomi NPP Direct Broadcast OMPS Data
NASA Astrophysics Data System (ADS)
Seftor, C. J.; Krotkov, N. A.; McPeters, R. D.; Li, J. Y.; Brentzel, K. W.; Habib, S.; Hassinen, S.; Heinrichs, T. A.; Schneider, D. J.
2014-12-01
NASA's Suomi NPP Ozone Science Team, in conjunction with Goddard Space Flight Center's (GSFC's) Direct Readout Laboratory, developed the capability of processing, in real-time, direct readout (DR) data from the Ozone Mapping and Profiler Suite (OMPS) to perform SO2 and Aerosol Index (AI) retrievals. The ability to retrieve this information from real-time processing of DR data was originally developed for the Ozone Monitoring Instrument (OMI) onboard the Aura spacecraft and is used by Volcano Observatories and Volcanic Ash Advisory Centers (VAACs) charged with mapping ash clouds from volcanic eruptions and providing predictions/forecasts about where the ash will go. The resulting real-time SO2 and AI products help to mitigate the effects of eruptions such as the ones from Eyjafjallajokull in Iceland and Puyehue-Cordón Caulle in Chile, which cause massive disruptions to airline flight routes for weeks as airlines struggle to avoid ash clouds that could cause engine failure, deeply pitted windshields impossible to see through, and other catastrophic events. We will discuss the implementation of real-time processing of OMPS DR data by both the Geographic Information Network of Alaska (GINA) and the Finnish Meteorological Institute (FMI), which provide real-time coverage over some of the most congested airspace and over many of the most active volcanoes in the world, and show examples of OMPS DR processing results from recent volcanic eruptions.
Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region
NASA Technical Reports Server (NTRS)
Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.
2004-01-01
Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and inputs.
In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors
NASA Technical Reports Server (NTRS)
Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)
2015-01-01
A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120° intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet, then applies beam theory to calculate the curvature, bending angle, and torsion of the fiber bundle, and from there determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve-fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.
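The reconstruction step named in this abstract, integrating ordinary differential equations derived from the Frenet-Serret formulas, can be sketched as follows. This is a simplified forward-Euler stand-in for the patented solver, with hypothetical function names; curvature and torsion are supplied as functions of arc length, as beam theory would provide them from the strain triplets:

```python
import math

def integrate_frenet_serret(kappa, tau, length, steps=1000):
    """Reconstruct a 3D curve from curvature kappa(s) and torsion tau(s)
    by forward-Euler integration of the Frenet-Serret equations:
        r' = T,  T' = kappa*N,  N' = -kappa*T + tau*B,  B' = -tau*N
    Returns the list of (x, y, z) points along the curve."""
    ds = length / steps
    r = [0.0, 0.0, 0.0]              # start at the origin
    T = [1.0, 0.0, 0.0]              # initial tangent
    N = [0.0, 1.0, 0.0]              # initial normal
    B = [0.0, 0.0, 1.0]              # initial binormal
    pts = [tuple(r)]
    s = 0.0
    for _ in range(steps):
        k, t = kappa(s), tau(s)
        dT = [k * n for n in N]
        dN = [-k * T[i] + t * B[i] for i in range(3)]
        dB = [-t * n for n in N]
        r = [r[i] + ds * T[i] for i in range(3)]
        T = [T[i] + ds * dT[i] for i in range(3)]
        N = [N[i] + ds * dN[i] for i in range(3)]
        B = [B[i] + ds * dB[i] for i in range(3)]
        s += ds
        pts.append(tuple(r))
    return pts
```

With constant curvature and zero torsion the result is a circle; a production solver would use a higher-order integrator (or re-orthonormalize the frame) to control drift.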
Real Time Volcanic Cloud Products and Predictions for Aviation Alerts
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay A.; Habib, Shahid; da Silva, Arlindo; Hughes, Eric; Yang, Kai; Brentzel, Kelvin; Seftor, Colin; Li, Jason Y.; Schneider, David; Guffanti, Marianne;
2014-01-01
Volcanic eruptions can inject significant amounts of sulfur dioxide (SO2) and volcanic ash into the atmosphere, posing a substantial risk to aviation safety. Ingesting near-real time and Direct Readout satellite volcanic cloud data is vital for improving reliability of volcanic ash forecasts and mitigating the effects of volcanic eruptions on aviation and the economy. NASA volcanic products from the Ozone Monitoring Instrument (OMI) aboard the Aura satellite have been incorporated into Decision Support Systems of many operational agencies. With the Aura mission approaching its 10th anniversary, there is an urgent need to replace OMI data with those from the next generation operational NASA/NOAA Suomi National Polar-orbiting Partnership (SNPP) satellite. The data provided from these instruments are being incorporated into forecasting models to provide quantitative ash forecasts for air traffic management. This study demonstrates the feasibility of the volcanic near-real time and Direct Readout data products from the new Ozone Monitoring and Profiling Suite (OMPS) ultraviolet sensor onboard SNPP for monitoring and forecasting volcanic clouds. The transition of NASA data production to our operational partners is outlined. Satellite observations are used to constrain volcanic cloud simulations and improve estimates of eruption parameters, resulting in more accurate forecasts. This is demonstrated for the 2012 eruption of Copahue. Volcanic eruptions are modeled using the Goddard Earth Observing System, Version 5 (GEOS-5) and the Goddard Chemistry Aerosol and Radiation Transport (GOCART) model. A hindcast of the disruptive eruption from Iceland's Eyjafjallajokull is used to estimate aviation re-routing costs using Metron Aviation's ATM Tools.
NASA Astrophysics Data System (ADS)
Webley, P.; Dehn, J.; Dean, K. G.; Macfarlane, S.
2010-12-01
Volcanic eruptions are a global hazard, affecting local infrastructure, impacting airports and hindering the aviation community, as seen in Europe during Spring 2010 from the Eyjafjallajokull eruption in Iceland. Here, we show how remote sensing data is used through web-based interfaces for monitoring volcanic activity, both ground based thermal signals and airborne ash clouds. These ‘web tools’, http://avo.images.alaska.edu/, provide timely availability of polar orbiting and geostationary data from US National Aeronautics and Space Administration, National Oceanic and Atmosphere Administration and Japanese Meteorological Agency satellites for the North Pacific (NOPAC) region. This data is used operationally by the Alaska Volcano Observatory (AVO) for monitoring volcanic activity, especially at remote volcanoes and generates ‘alarms’ of any detected volcanic activity and ash clouds. The webtools allow the remote sensing team of AVO to easily perform their twice daily monitoring shifts. The web tools also assist the National Weather Service, Alaska and Kamchatkan Volcanic Emergency Response Team, Russia in their operational duties. Users are able to detect ash clouds, measure the distance from the source, area and signal strength. Within the web tools, there are 40 x 40 km datasets centered on each volcano and a searchable database of all acquired data from 1993 until present with the ability to produce time series data per volcano. Additionally, a data center illustrates the acquired data across the NOPAC within the last 48 hours, http://avo.images.alaska.edu/tools/datacenter/. We will illustrate new visualization tools allowing users to display the satellite imagery within Google Earth/Maps, and ArcGIS Explorer both as static maps and time-animated imagery. We will show these tools in real-time as well as examples of past large volcanic eruptions. 
In the future, we will develop the tools to produce real-time ash retrievals, run volcanic ash dispersion models from detected ash clouds and develop the browser interfaces to display other remote sensing datasets, such as volcanic sulfur dioxide detection.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.
2017-12-01
Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.
Pacanowski, Romain; Salazar Celis, Oliver; Schlick, Christophe; Granier, Xavier; Poulin, Pierre; Cuyt, Annie
2012-11-01
Over the last two decades, much effort has been devoted to accurately measuring Bidirectional Reflectance Distribution Functions (BRDFs) of real-world materials and to use efficiently the resulting data for rendering. Because of their large size, it is difficult to use directly measured BRDFs for real-time applications, and fitting the most sophisticated analytical BRDF models is still a complex task. In this paper, we introduce Rational BRDF, a general-purpose and efficient representation for arbitrary BRDFs, based on Rational Functions (RFs). Using an adapted parametrization, we demonstrate how Rational BRDFs offer 1) a more compact and efficient representation using low-degree RFs, 2) an accurate fitting of measured materials with guaranteed control of the residual error, and 3) efficient importance sampling by applying the same fitting process to determine the inverse of the Cumulative Distribution Function (CDF) generated from the BRDF for use in Monte-Carlo rendering.
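The core idea of fitting a rational function to reflectance samples can be illustrated with the classic linearized least-squares formulation for a degree-(1,1) rational in one variable. Note that this is only a sketch of the rational-fitting idea; the paper's actual method (interval-based fitting of multi-dimensional BRDFs with guaranteed residual control) is considerably more sophisticated:

```python
def fit_rational_11(xs, ys):
    """Fit y ~ (a0 + a1*x) / (1 + b1*x) by the classic linearization
    a0 + a1*x - y*x*b1 = y, solved via normal equations with Gaussian
    elimination. Returns (a0, a1, b1)."""
    # design matrix rows: [1, x, -y*x], target y
    A = [[1.0, x, -y * x] for x, y in zip(xs, ys)]
    b = list(ys)
    n = len(A)
    # normal equations: (A^T A) c = A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    Atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x4 augmented matrix
    M = [AtA[i][:] + [Atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        c[i] = (M[i][3] - sum(M[i][j] * c[j] for j in range(i + 1, 3))) / M[i][i]
    return tuple(c)
```

When the samples come exactly from a degree-(1,1) rational, the linearized system recovers the coefficients; with noisy data the linearization biases the fit, which is one motivation for the more careful schemes in the literature.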
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, which includes embedded PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
CATS Near Real Time Data Products: Applications for Assimilation into the NASA GEOS-5 AGCM
NASA Astrophysics Data System (ADS)
Nowottnick, E. P.; Hlavka, D. L.; Yorks, J. E.; da Silva, A. M., Jr.; McGill, M. J.; Palm, S. P.; Selmer, P. A.; Pauly, R.; Ozog, S.
2017-12-01
Since February 2015, the NASA Cloud-Aerosol Transport System (CATS) backscatter lidar has been operating on the International Space Station (ISS) as a technology demonstration for future Earth Science missions, providing vertical measurements of cloud and aerosol properties. Owing to its location on the ISS, a cornerstone technology demonstration of CATS is the capability to acquire, process, and disseminate near-real time (NRT) data within 6 hours of observation time. Here, we present CATS NRT data products and outline improved CATS algorithms used to discriminate clouds from aerosols and subsequently identify cloud and aerosol type. CATS NRT data has several applications, including providing notification of hazardous events for air traffic control and air quality advisories, field campaign flight planning, as well as constraining cloud and aerosol distributions via data assimilation in aerosol transport models. Recent developments in aerosol data assimilation techniques have permitted the assimilation of aerosol optical thickness (AOT), a 2-dimensional column-integrated quantity that is reflective of the simulated aerosol loading in aerosol transport models. While this capability has greatly improved simulated AOT forecasts, the vertical position, a key control on aerosol transport, is often not impacted when 2-D AOT is assimilated. Here, we also present preliminary efforts to assimilate CATS observations into the NASA Goddard Earth Observing System version 5 (GEOS-5) atmospheric general circulation model and assimilation system using a 1-D Variational (1-D VAR) approach, demonstrating the utility of CATS for future Earth Science missions.
Technique for analyzing human respiratory process
NASA Technical Reports Server (NTRS)
Liu, F. F.
1970-01-01
Electronic system /MIRACLE 2/ places the frequency and gas flow rate of the respiratory process within a common frame of reference to render them comparable and compatible with "real clock time." Numerous measurements are accomplished accurately on a strict one-minute, half-minute, breath-by-breath, or other period basis.
Cybertherapy 2005: A Decade of VR
2005-07-01
headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. In the third condition, they were asked to...navigate in a soundscape in the absence of vision (A). The sounds were produced through tracked binaural rendering (HRTF) and were dependent upon the
Data-Driven Geospatial Visual Analytics for Real-Time Urban Flooding Decision Support
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D.; Rodriguez, A.; Marini, L.; Kooper, R.; Myers, J.; Wu, X.; Minsker, B. S.
2009-12-01
Urban flooding is responsible for the loss of life and property as well as the release of pathogens and other pollutants into the environment. Previous studies have shown that the spatial distribution of intense rainfall significantly impacts the triggering and behavior of urban flooding. However, no general-purpose tools yet exist for deriving rainfall data and rendering them in real time at the resolution of the hydrologic units used for analyzing urban flooding. This paper presents a new visual analytics system that derives and renders rainfall data from the NEXRAD weather radar system at the sewershed (i.e., urban hydrologic unit) scale in real time for a Chicago stormwater management project. We introduce a lightweight Web 2.0 approach which takes advantage of scientific workflow management and publishing capabilities developed at NCSA (National Center for Supercomputing Applications), a streaming-data-aware semantic content management repository, web-based Google Earth/Maps, and time-aware KML (Keyhole Markup Language). A collection of polygon-based virtual sensors is created from the NEXRAD Level II data using spatial, temporal, and thematic transformations at the sewershed level in order to produce persistent virtual rainfall data sources for the animation. An animated, color-coded rainfall map of the sewershed can be played in real time as a movie using time-aware KML inside browser-based Google Earth for visually analyzing the spatiotemporal patterns of rainfall intensity. Such a system provides valuable information for situational awareness and improved decision support during extreme storm events in an urban area. Our future work includes incorporating additional data (such as basement flooding event data) and physics-based predictive models for more integrated data-driven decision support.
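The time-aware KML described above pairs each color-coded sewershed polygon with a TimeSpan element so Google Earth can animate the sequence. A minimal sketch of emitting one such placemark (the intensity thresholds, colors, and function name are illustrative, not the project's actual scheme; KML colors are in aabbggrr order):

```python
from xml.sax.saxutils import escape

def rainfall_kml_placemark(sewershed_id, coords, begin_iso, end_iso, intensity_mm_hr):
    """Emit one time-aware KML Placemark for a sewershed polygon,
    color-coded by rainfall intensity. `coords` is a closed ring of
    (lon, lat) pairs; times are ISO 8601 strings."""
    if intensity_mm_hr < 2.5:
        color = "7dff0000"   # translucent blue: light rain
    elif intensity_mm_hr < 10.0:
        color = "7d00ffff"   # translucent yellow: moderate rain
    else:
        color = "7d0000ff"   # translucent red: intense rain
    ring = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        f"<Placemark><name>{escape(sewershed_id)}</name>"
        f"<TimeSpan><begin>{begin_iso}</begin><end>{end_iso}</end></TimeSpan>"
        f"<Style><PolyStyle><color>{color}</color></PolyStyle></Style>"
        "<Polygon><outerBoundaryIs><LinearRing><coordinates>"
        f"{ring}"
        "</coordinates></LinearRing></outerBoundaryIs></Polygon></Placemark>"
    )
```

Concatenating one placemark per sewershed per radar scan inside a KML Document yields a file whose time slider plays the rainfall animation.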
Improving Patient Safety in Hospitals through Usage of Cloud Supported Video Surveillance
Dašić, Predrag; Dašić, Jovan; Crvenković, Bojan
2017-01-01
BACKGROUND: Patient safety in hospitals is of equal importance as providing treatments and urgent healthcare. With the development of Cloud technologies and Big Data analytics, it is possible to employ VSaaS technology virtually anywhere, for any given security purpose. AIM: Given these benefits, in this paper we give an overview of the existing cloud surveillance technologies which can be implemented to improve patient safety. MATERIAL AND METHODS: Modern VSaaS systems provide higher elasticity and project scalability in dealing with real-time information processing. Modern surveillance technologies can prove to be an effective tool for the prevention of patient falls, undesired movement, and tampering with attached life-supporting devices. Given the large number of patients who require constant supervision, a cloud-based monitoring system can dramatically reduce the associated costs. It provides continuous real-time monitoring, increased overall security and safety, improved staff productivity, prevention of dishonest claims, and long-term digital archiving. CONCLUSION: Patient safety is a growing issue which can be improved with the usage of high-end centralised surveillance systems, allowing the staff to focus more on treating health issues rather than keeping a watchful eye on potential incidents. PMID:28507610
NASA Astrophysics Data System (ADS)
Zhang, Kang
2011-12-01
In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically for microsurgical intervention applications, was developed and studied. As part of this work, several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering was developed. Several GPU-based algorithms, such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation, were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled imaging range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that are widespread in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor-based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feedback control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to the >100 micron error of free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom.
Multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation, and the user to clearly monitor tool-to-target spatial relation in real-time. The system was also validated by imaging multiple biological samples, such as human fingerprint, human cadaver head and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide the surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
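The core of the GPU reconstruction chain described above is transforming each background-subtracted, windowed spectral interferogram into a depth profile (A-scan) with a Fourier transform. A minimal CPU sketch of just that step with NumPy (the dissertation's GPU acceleration, NUFFT resampling, and dispersion-compensation stages are omitted); the simulated single-reflector spectrum is illustrative:

```python
import numpy as np

def reconstruct_ascan(spectrum, dc_background):
    """Reconstruct a depth profile (A-scan) from one spectral interferogram:
    remove the DC term, window to suppress sidelobes, then FFT."""
    fringe = spectrum - dc_background
    fringe = fringe * np.hanning(len(fringe))
    depth = np.fft.fft(fringe)
    # Keep positive depths only; standard FD-OCT has a conjugate mirror image.
    return np.abs(depth[: len(depth) // 2])

# Simulate a single reflector: cosine fringes over 1024 k-space samples,
# which should reconstruct to a peak at depth bin 100.
k = np.arange(1024)
background = np.full(1024, 10.0)
spectrum = background + np.cos(2 * np.pi * 100 * k / 1024)
ascan = reconstruct_ascan(spectrum, background)
peak = int(np.argmax(ascan))
```

On the GPU this same FFT-per-spectrum step is applied to thousands of A-scans in parallel, which is where the real-time 4D throughput comes from.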
NASA Astrophysics Data System (ADS)
Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.
2018-02-01
This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical property inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR at high resolution (1 nm, 0.05°, 15 min), which can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system is intended for studies and real-time applications related to solar energy production planning and use.
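The MRF technique amounts to fitting a fast regression surface to RTM output so that SSR can then be evaluated instantaneously from the satellite-derived inputs. A minimal sketch, assuming a plain linear model in cos(SZA), COT and AOD trained on a synthetic stand-in for the libRadtran dataset; the toy coefficients are not from the paper:

```python
import numpy as np

def fit_mrf(features, ghi):
    """Fit a multi-regression function GHI ~ intercept + linear terms.
    features: (n, k) array of predictors, e.g. [cos(SZA), COT, AOD]."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, ghi, rcond=None)
    return coef

def eval_mrf(coef, features):
    """Instantaneous evaluation: one matrix-vector product per sample."""
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef

# Synthetic training set standing in for the RTM lookup described above.
rng = np.random.default_rng(0)
cos_sza = rng.uniform(0.2, 1.0, 500)
cot = rng.uniform(0.0, 20.0, 500)
aod = rng.uniform(0.0, 1.0, 500)
ghi = 1000 * cos_sza - 25 * cot - 120 * aod   # toy clear/cloudy relation
feats = np.column_stack([cos_sza, cot, aod])
coef = fit_mrf(feats, ghi)
pred = eval_mrf(coef, feats)
```

Once the coefficients are fitted offline, each 15-minute SEVIRI/CAMS grid can be mapped to GHI with a single dot product per pixel, which is what makes sub-minute operational output feasible.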
NASA Astrophysics Data System (ADS)
Al-Mashat, H.; Kristensen, L.; Sultana, C. M.; Prather, K. A.
2016-12-01
The ability to distinguish types of particles present within a cloud is important for determining accurate inputs to climate models. The chemical composition of particles within cloud liquid droplets and ice crystals can have a significant impact on the timing, location, and amount of precipitation that falls. Precipitation efficiency is increased by the presence of ice crystals in clouds, and both mineral dust and biological aerosols have been shown to be effective ice nucleating particles (INPs) in the atmosphere. A current challenge in aerosol science is distinguishing mineral dust and biological material in the analysis of real-time, ambient, single-particle mass spectral data. Single-particle mass spectrometers are capable of measuring the size-resolved chemical composition of individual atmospheric particles. However, there is no consistent analytical method for distinguishing dust and biological aerosols. Sampling and characterization of control samples (i.e. of known identity) of mineral dust and bacteria were performed by the Aerosol Time-of-Flight Mass Spectrometer (ATOFMS) as part of the Fifth Ice Nucleation (FIN01) Workshop at the Aerosol Interaction and Dynamics in the Atmosphere (AIDA) facility in Karlsruhe, Germany. Using data collected by the ATOFMS of control samples, a new metric has been developed to classify single particles as dust or biological independently of spectral cluster analysis. This method, involving the use of a ratio of mass spectral peak areas for organic nitrogen and silicates, is easily reproducible and does not rely on extensive knowledge of particle chemistry or the ionization characteristics of mass spectrometers. This represents a step toward rapidly distinguishing particle types responsible for ice nucleation activity during real-time sampling in clouds.
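The classification metric described above reduces to thresholding a ratio of mass spectral peak areas. A minimal sketch; the marker m/z values (CN- at 26 and CNO- at 42 for organic nitrogen; SiO2- at 60 and SiO3- at 76 for silicates) and the unit threshold are plausible placeholders, not the values actually derived from the FIN01 control samples:

```python
def classify_particle(peak_areas, threshold=1.0):
    """Classify one single-particle mass spectrum as 'biological' or 'dust'
    from the ratio of organic-nitrogen to silicate peak areas.
    peak_areas: dict mapping m/z to integrated peak area.
    Marker ions and threshold are illustrative assumptions."""
    org_n = peak_areas.get(26, 0.0) + peak_areas.get(42, 0.0)    # CN-, CNO-
    silicate = peak_areas.get(60, 0.0) + peak_areas.get(76, 0.0) # SiO2-, SiO3-
    if silicate == 0:
        return "biological" if org_n > 0 else "unclassified"
    return "biological" if org_n / silicate > threshold else "dust"
```

Because the rule needs only two peak-area sums per spectrum, it can run on each particle as it is detected, which is the point of avoiding offline cluster analysis.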
NASA Astrophysics Data System (ADS)
Le Marshall, J.; Jung, J.; Lord, S. J.; Derber, J. C.; Treadon, R.; Joiner, J.; Goldberg, M.; Wolf, W.; Liu, H. C.
2005-08-01
The National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), and Department of Defense (DoD) Joint Center for Satellite Data Assimilation (JCSDA) was established in 2000/2001. The goal of the JCSDA is to accelerate the use of observations from earth-orbiting satellites in operational numerical environmental analysis and prediction systems for the purpose of improving weather and oceanic forecasts, seasonal climate forecasts and the accuracy of climate data sets. As a result, a series of data assimilation experiments was undertaken at the JCSDA as part of the preparations for the operational assimilation of AIRS data by its partner organizations [1,2]. Here, for the first time, full-spatial-resolution radiance data, available in real-time from the AIRS instrument, were used at the JCSDA in data assimilation studies over the globe utilizing the operational NCEP Global Forecast System (GFS). The radiance data from each channel of the instrument were carefully screened for cloud effects, and those radiances deemed to be clear of cloud effects were used by the GFS forecast system. The result of these assimilation trials has been a first demonstration of significant improvements in forecast skill over both the Northern and Southern Hemispheres compared to the operational system without AIRS data. The experimental system was designed to be feasible for operational application; this constraint meant using the subset of AIRS channels chosen for operational distribution and an analysis methodology close to current analysis practice, with particular consideration given to time limitations. As a result, operational application of these AIRS data was enabled by the recent NCEP operational upgrade.
In addition, because of the improved impact of this enhanced data set compared to that used operationally to date, provision of a real-time "warmest field of view" data set has been established for use by international NWP Centers.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric warps two virtual middle views from the left and right views using a depth-image-based rendering (DIBR) algorithm, and compares the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
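Since SVC scores a synthesized view by comparing the two DIBR-warped middle views with SSIM, the scoring step can be sketched as below. This uses a simplified single-window (global) SSIM rather than the usual sliding-window version, and assumes the warping has already been done upstream:

```python
import numpy as np

def ssim(a, b, c1=6.5025, c2=58.5225):
    """Global (single-window) SSIM between two grayscale images.
    c1, c2 are the standard stabilizers (0.01*255)^2 and (0.03*255)^2."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def svc_score(virtual_from_left, virtual_from_right):
    """No-reference quality proxy: agreement between the middle views warped
    from the left and right cameras; higher suggests a better synthesis."""
    return ssim(virtual_from_left.astype(float),
                virtual_from_right.astype(float))

# Toy check: identical warped views score ~1; a biased view scores lower.
view = np.tile(np.arange(8.0), (8, 1))
same = svc_score(view, view)
biased = svc_score(view, view + 5.0)
```

In a receiver this score would be computed per frame (or per block) without access to any pristine reference view, which is what makes the metric no-reference.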
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for half of the time and about 78% for 19/20 of the time when tested on approximately 7,500,000 video frames. By the end of 2016, the dataset is expected to contain over 1 billion annotated video frames.
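The Haar-like features feeding AdaBoost are cheap because they are computed from an integral image, where any rectangle sum costs four array lookups. A minimal sketch of that representation and one two-rectangle feature; the 4x4 test image is illustrative, and the real detector evaluates many such features over real frames:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that any
    rectangle sum costs four lookups -- the trick behind fast Haar features."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from the integral image in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: top half minus bottom half.
    AdaBoost would threshold many such responses as weak classifiers."""
    half = h // 2
    return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)

# Toy image: bright top half, dark bottom half -> strong positive response.
img = np.zeros((4, 4))
img[:2, :] = 1.0
ii = integral_image(img)
val = haar_two_rect_vertical(ii, 0, 0, 4, 4)
```

Because each feature is O(1) after one pass to build the table, evaluating the boosted cascade on millions of frames becomes an embarrassingly parallel job that maps naturally onto the cloud framework described above.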
4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas
2016-04-01
The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal, near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR has large potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection for landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or scanner).
Secondly, we will show empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates both the potential and the limitations of fully automated, near real-time 4D LiDAR monitoring in the geosciences.
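One building block of such near real-time change detection is differencing co-registered point cloud epochs per grid cell. A minimal sketch, assuming the epochs are already co-registered and georeferenced; the cell size, threshold and toy points are placeholders:

```python
import math
from collections import defaultdict

def grid_heights(points, cell=1.0):
    """Mean height per (x, y) grid cell for one LiDAR epoch.
    points: iterable of (x, y, z) tuples in a common coordinate frame."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        sums[key][0] += z
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

def detect_change(epoch_a, epoch_b, cell=1.0, threshold=0.1):
    """Cells whose mean height moved more than `threshold` between epochs --
    a minimal stand-in for the projects' point cloud change detection."""
    a, b = grid_heights(epoch_a, cell), grid_heights(epoch_b, cell)
    return {k: b[k] - a[k] for k in a.keys() & b.keys()
            if abs(b[k] - a[k]) > threshold}

# Toy epochs: the cell around the origin rises by ~0.6 m, the other is stable.
before = [(0.2, 0.3, 1.0), (0.7, 0.6, 1.2), (1.5, 0.5, 2.00)]
after  = [(0.25, 0.35, 1.6), (0.6, 0.7, 1.8), (1.4, 0.4, 2.02)]
changed = detect_change(before, after)
```

Run per scan, such cell-wise differences would also feed the trigger logic, e.g. a sudden coherent drop over the reflector cells flags a moved or removed reflector.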
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world with which researchers can interact. There are several limitations to purely VR or AR applications in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images, using image processing techniques to generate the 3D mesh data that populates the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e. depth information. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real-time.
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
Two cloud-based cues for estimating scene structure and camera calibration.
Jacobs, Nathan; Abrams, Austin; Pless, Robert
2013-10-01
We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship between the time series of intensity values at pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
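The first cue can be sketched as a pairwise statistic: pixels shaded by the same clouds at the same times have highly correlated intensity time series, so correlation serves as a proxy for small 3D distance. A minimal illustration with hand-made series (the paper builds focal length and structure estimation on top of this statistic):

```python
import math

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def affinity(series_a, series_b):
    """Cue 1 in spirit: high correlation between two pixels' intensity
    series suggests they fall under the same cloud shadows, i.e. they
    are likely close together in the world."""
    return correlation(series_a, series_b)

# Toy series: bright frames ~10, shadowed frames ~3.
near_a = [10, 3, 3, 10, 10, 3]   # shadowed at the same times as near_b
near_b = [11, 4, 3, 10, 11, 4]
far_c  = [3, 10, 10, 3, 3, 10]   # shadowed at complementary times
score_near = affinity(near_a, near_b)
score_far = affinity(near_a, far_c)
```

Computed over all pixel pairs of a time lapse, these affinities give the distance-like measurements that the focal length and structure estimation steps consume.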
A Novel Cost Based Model for Energy Consumption in Cloud Computing
Horri, A.; Dastghaibyfard, Gh.
2015-01-01
Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model accounts for cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
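The shape of such a model can be sketched as a linear host power curve plus a cache-interference multiplier that grows with the data size and the number of co-located VMs. All coefficients below are illustrative placeholders, not the values fitted in the paper:

```python
def host_power(utilization, p_idle=105.0, p_max=145.0):
    """Common linear host power model (watts): idle power plus a
    utilization-proportional dynamic part. Coefficients are placeholders."""
    return p_idle + (p_max - p_idle) * utilization

def timeshared_energy(task_seconds, n_vms, data_mb, cache_penalty_per_mb=0.002):
    """Energy (joules) for a task on a host under the time-shared policy.
    The interference factor grows with co-located VMs and per-task data
    size, echoing the model's data-size-dependent cache interference cost;
    the penalty coefficient is illustrative."""
    base = host_power(1.0) * task_seconds
    interference = 1.0 + cache_penalty_per_mb * data_mb * max(0, n_vms - 1)
    return base * interference

# The same task costs more energy when it time-shares the host with 3 others.
solo = timeshared_energy(100.0, n_vms=1, data_mb=500.0)
shared = timeshared_energy(100.0, n_vms=4, data_mb=500.0)
```

The widening gap between `solo` and `shared` as data size or VM count grows is exactly the energy/QoS tradeoff the simulations explore.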
Real-time Volcanic Cloud Products and Predictions for Aviation Alerts
NASA Astrophysics Data System (ADS)
Krotkov, N. A.; Hughes, E. J.; da Silva, A. M., Jr.; Seftor, C. J.; Brentzel, K. W.; Hassinen, S.; Heinrichs, T. A.; Schneider, D. J.; Hoffman, R.; Myers, T.; Flynn, L. E.; Niu, J.; Theys, N.; Brenot, H. H.
2016-12-01
We will discuss progress on the NASA ASP project, which promotes the use of satellite volcanic SO2 (VSO2) and ash (VA) data, and forecasting tools that enhance VA Decision Support Systems (DSS) at the VA Advisory Centers (VAACs) for prompt aviation warnings. The goals are: (1) transition NASA algorithms to NOAA for global NRT processing and integration into the DSS at the Washington VAAC for operational users and public dissemination; (2) utilize the Direct Broadcast capability of the Aura and SNPP satellites to process Direct Readout (DR) data at two high-latitude locations, in Finland and Fairbanks, Alaska, to enhance VA DSS in Europe and at the USGS's Alaska Volcano Observatory (AVO) and the Alaska VAAC; (3) improve global Eulerian model-based VA/VSO2 forecasting and risk/cost assessments with Metron Aviation. Our global NRT OMI and OMPS data have been fully integrated into the European Support to Aviation Control Service and NOAA operational web sites. We are transitioning OMPS processing to our partners at NOAA/NESDIS for integration into the operational processing environment. NASA's Suomi NPP Ozone Science Team, in conjunction with GSFC's Direct Readout Laboratory (DRL), has implemented Version 2 of the OMPS real-time DR processing package to generate VSO2 and VA products at the Geographic Information Network of Alaska (GINA) and the Finnish Meteorological Institute (FMI). The system provides real-time coverage over some of the most congested airspace and over many of the most active volcanoes in the world. The OMPS real-time capability is now publicly available via DRL's IPOPP package. We use satellite observations to define volcanic source term estimates in the NASA GEOS-5 model, which was updated to allow simulation of VA and VSO2 clouds. Column SO2 observations from SNPP/OMPS provide an initial estimate of the total cloud SO2 mass, and are used with backward transport analysis to make an initial cloud height estimate.
Later VSO2 observations are used to "nudge" the SO2 mass within the model. The GEOS-5 simulations provide qualitative forecasts, which locate the extent of regions hazardous to aviation. Air traffic flow algorithms have been developed by Metron Aviation to use GEOS-5 volcanic simulations to determine the most cost-effective rerouting paths around hazardous volcanic clouds.
Secure and Lightweight Cloud-Assisted Video Reporting Protocol over 5G-Enabled Vehicular Networks.
Nkenyereye, Lewis; Kwon, Joonho; Choi, Yoon-Ho
2017-09-23
In vehicular networks, the real-time video reporting service is used to send videos recorded in the vehicle to the cloud. However, when facilitating the real-time video reporting service in vehicular networks, the fourth generation (4G) long term evolution (LTE) standard has been shown to suffer from latency, while the IEEE 802.11p standard does not offer sufficient scalability for such a congested environment. To overcome these drawbacks, the fifth-generation (5G)-enabled vehicular network is considered a promising technology for empowering the real-time video reporting service. In this paper, we note that security- and privacy-related issues should also be carefully addressed to boost the early adoption of 5G-enabled vehicular networks. A few research works exist on secure video reporting services in 5G-enabled vehicular networks, but their usage is limited by public key certificates and expensive pairing operations. Thus, we propose a secure and lightweight protocol for a cloud-assisted video reporting service in 5G-enabled vehicular networks. Compared to conventional public key certificates, the proposed protocol achieves entity authorization through anonymous credentials. Also, by using lightweight security primitives instead of expensive bilinear pairing operations, the proposed protocol minimizes the computational overhead. The evaluation results show that the proposed protocol requires less computation and communication time for the cryptographic primitives than the well-known Eiza-Ni-Shi protocol.
Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy
Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca
2014-01-01
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
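The inverse-compositional idea behind the speed figures above is that the template's gradient and Hessian are precomputed once, outside the iteration loop. A minimal 1D, translation-only sketch of that scheme (the paper's method optimizes a full camera pose against endoluminal renderings, which this toy example does not attempt):

```python
import numpy as np

def sample(signal, xs):
    """Linear interpolation of a 1D signal at real-valued coordinates."""
    return np.interp(xs, np.arange(len(signal)), signal)

def inverse_compositional_align(template, image, p=0.0, iters=100):
    """Estimate the translation p such that image(x + p) matches template(x).
    Gradient and 'Hessian' of the template are computed once, which is the
    inverse-compositional trick that makes per-frame updates cheap."""
    xs = np.arange(len(template), dtype=float)
    grad_t = np.gradient(template)              # fixed template gradient
    hessian = float((grad_t * grad_t).sum())    # fixed 1-parameter Hessian
    for _ in range(iters):
        error = sample(image, xs + p) - template
        dp = float((grad_t * error).sum()) / hessian
        p -= dp                                 # compose the inverted increment
        if abs(dp) < 1e-9:
            break
    return p

# The template is the image content starting at offset 20; recover that
# offset from a deliberately wrong initial guess.
base = np.sin(np.linspace(0, 4 * np.pi, 200))
template = base[20:120]
shift = inverse_compositional_align(template, base, p=15.0)
```

In the full 2D/3D problem the same structure holds: only the warped-image resampling and one small linear solve remain inside the loop, which is what makes hundreds of registrations per second plausible.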
Entrainment in Laboratory Simulations of Cumulus Cloud Flows
NASA Astrophysics Data System (ADS)
Narasimha, R.; Diwan, S.; Subrahmanyam, D.; Sreenivas, K. R.; Bhat, G. S.
2010-12-01
A variety of cumulus cloud flows, including congestus (both shallow-bubble and tall-tower types), mediocris and fractus, have been generated in a water tank by simulating the release of latent heat in real clouds. The simulation is achieved through ohmic heating, injected volumetrically into the flow by applying suitable voltages between diametral cross-sections of starting jets and plumes of electrically conducting fluid (acidified water). Dynamical similarity between atmospheric and laboratory cloud flows is achieved by duplicating values of an appropriate non-dimensional heat release number. Velocity measurements, made by laser instrumentation, show that the Taylor entrainment coefficient generally increases just above the level of commencement of heat injection (corresponding to the condensation level in the real cloud). Subsequently the coefficient reaches a maximum before declining to the very low values that characterize tall cumulus towers. The experiments also simulate the protected core of real clouds. Figure: cumulus congestus in the atmosphere (left) and in the laboratory simulation (right); the lower panels show the total heat injected and the vertical profile of heating in the laboratory cloud.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the west, from between the cloud layers and over the patchy white clouds to the east of the hotspot. This is probably an area where moist convection is occurring over large horizontal distances, similar to the atmosphere over the equatorial ocean on Earth. The clouds are high and thick, and are observed to change rapidly over short time scales.
Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations.
The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25.
The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze.
The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size.
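The two-layer height model described above reduces to simple per-pixel arithmetic. A hedged sketch follows; the function name, the layer separation, and the scale factors are illustrative placeholders, not the values used in the Galileo visualization pipeline:

```python
import numpy as np

def two_layer_heights(r727, r756, r889, separation=5.0, exaggeration=25.0):
    """Per-pixel heights for a two-layer cloud rendering.
    Upper haze height ~ reflectivity at 889 nm; lower tropospheric cloud
    height ~ R727 / R756. `separation` stands in for the arbitrary offset
    between the layers and `exaggeration` for the x25 vertical exaggeration
    mentioned in the text; the numeric values here are illustrative only."""
    r727, r756, r889 = (np.asarray(r, dtype=float) for r in (r727, r756, r889))
    lower = exaggeration * (r727 / r756)                    # tropospheric cloud
    upper = exaggeration * r889 + lower.max() + separation  # haze, kept above
    return lower, upper
```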
These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA, manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of the California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at http://galileo.jpl.nasa.gov.
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. 
In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
Diagnosing turbulence for research aircraft safety using open source toolkits
NASA Astrophysics Data System (ADS)
Lang, T. J.; Guy, N.
Open source software toolkits have been developed and applied to diagnose in-cloud turbulence in the vicinity of Earth science research aircraft, via analysis of ground-based Doppler radar data. Based on multiple retrospective analyses, these toolkits show promise for detecting significant turbulence well prior to cloud penetrations by research aircraft. A pilot study demonstrated the ability to provide mission scientists turbulence estimates in near real time during an actual field campaign, and thus these toolkits are recommended for usage in future cloud-penetrating aircraft field campaigns.
Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering
Chin, Seongah
2014-01-01
We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field has experienced a surge of popularity, mainly due to the recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner while simultaneously tracking user behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows an interesting correlation between subjective scores and average interaction time.
New automatic mode of visualizing the colon via Cine CT
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.
2001-05-01
Methods of visualizing the inner colonic wall using CT images have actively been pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems that need satisfactory solutions. Among these, we address three in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time; subsequently, en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.
TRIDEC Cloud - a Web-based Platform for Tsunami Early Warning tested with NEAMWave14 Scenarios
NASA Astrophysics Data System (ADS)
Hammitzsch, Martin; Spazier, Johannes; Reißland, Sven; Necmioglu, Ocal; Comoglu, Mustafa; Ozer Sozdinler, Ceren; Carrilho, Fernando; Wächter, Joachim
2015-04-01
In times of cloud computing and ubiquitous computing, the use of concepts and paradigms introduced by information and communications technology (ICT) has to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in research projects, new technologies are exploited to implement a cloud-based and web-based platform, the TRIDEC Cloud, to open up new prospects for EWS. The platform in its current version addresses tsunami early warning and mitigation. It merges several complementary external and in-house cloud-based services for instant tsunami propagation calculations and automated background computation with graphics processing units (GPU), for web mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The TRIDEC Cloud can be accessed in two different modes: the monitoring mode and the exercise-and-training mode. The monitoring mode provides important functionality required to act in a real event. So far, the monitoring mode integrates historic and real-time sea level data and the latest earthquake information. The integration of sources is supported by a simple and secure interface. The exercise-and-training mode enables training and exercises with virtual scenarios. This mode disconnects real-world systems and connects with a virtual environment that receives virtual earthquake information and virtual sea level data re-played by a scenario player. Thus operators and other stakeholders are able to train skills and prepare for real events and large exercises.
The GFZ German Research Centre for Geosciences (GFZ), the Kandilli Observatory and Earthquake Research Institute (KOERI), and the Portuguese Institute for the Sea and Atmosphere (IPMA) used the opportunity provided by NEAMWave14 to test the TRIDEC Cloud as a collaborative activity based on previous partnership and commitments at the European scale. The TRIDEC Cloud was not officially involved in Part B of the NEAMWave14 scenarios. However, the scenarios were used by GFZ, KOERI, and IPMA for testing in exercise runs on October 27-28, 2014. Additionally, the Greek NEAMWave14 scenario was tested in an exercise run by GFZ alone on October 29, 2014 (see ICG/NEAMTWS-XI/13). The exercise runs demonstrated that operators in warning centres and stakeholders of other involved parties need only a standard web browser to access a full-fledged tsunami early warning system (TEWS). The integration of GPU-accelerated tsunami simulation computations has been an integral part of fostering early warning with on-demand tsunami predictions based on actual source parameters. Thus tsunami travel times, estimated times of arrival, and estimated wave heights are available immediately for visualization and for further analysis and processing. The generation of warning messages is based on internationally agreed message structures and includes static and dynamic information based on earthquake information, instant computations of tsunami simulations, and actual measurements. Generated messages are served for review, modification, and addressing in one simple form for dissemination via Cloud Messages, Shared Maps, e-mail, FTP/GTS, SMS, and FAX. Cloud Messages and Shared Maps are complementary channels and integrate interactive event and simulation data. Thus recipients are enabled to interact dynamically with a map and diagrams beyond traditional text information.
Villard, P F; Vidal, F P; Hunt, C; Bello, F; John, N W; Johnson, S; Gould, D A
2009-11-01
We present here a simulator for interventional radiology focusing on percutaneous transhepatic cholangiography (PTC). This procedure consists of inserting a needle into the biliary tree using fluoroscopy for guidance. The requirements of the simulator have been driven by a task analysis. Three main components have been identified: respiration, the real-time X-ray display (fluoroscopy), and haptic rendering (sense of touch). The framework for modelling the respiratory motion is based on kinematic laws and on the Chainmail algorithm. The fluoroscopic simulation is performed on the graphics card and makes use of the Beer-Lambert law to compute the X-ray attenuation. Finally, the haptic rendering is integrated into the virtual environment and takes into account the soft-tissue reaction force feedback and the maintenance of the needle's initial direction during insertion. Five training scenarios have been created using patient-specific data. Each of these provides the user with variable breathing behaviour, a fluoroscopic display tuneable to any device parameters, and needle force feedback. A detailed task analysis has been used to design and build the PTC simulator described in this paper. The simulator includes real-time respiratory motion with two independent parameters (rib kinematics and diaphragm action), on-line fluoroscopy implemented on the Graphics Processing Unit, and haptic feedback to feel the soft-tissue behaviour of the organs during needle insertion.
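The fluoroscopic part of the simulator rests on the Beer-Lambert law: intensity decays exponentially with the sum of attenuation-coefficient-times-thickness terms along each ray. A minimal sketch (the function name and coefficient values are illustrative; the actual simulator evaluates this per pixel on the GPU from patient data):

```python
import math

def xray_intensity(i0, segments):
    """Beer-Lambert attenuation of a single X-ray. `segments` is a list of
    (mu, thickness) pairs for the materials the ray crosses; the returned
    intensity is i0 * exp(-sum(mu_i * d_i))."""
    optical_depth = sum(mu * d for mu, d in segments)
    return i0 * math.exp(-optical_depth)
```

A fluoroscopy renderer repeats this for every detector pixel, with mu taken from the CT voxels the ray traverses.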
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to display a real object in the 360-degree viewing zone.
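The merging step described above, reduced to its geometric core: each depth camera's points are mapped into a shared world frame by that camera's pose and then stacked into one cloud. This sketch assumes the camera-to-world poses are already known and omits the paper's special registration method, which refines how the per-camera clouds are combined:

```python
import numpy as np

def merge_point_clouds(clouds_and_poses):
    """Map each depth camera's points into a common world frame and stack
    them into a single synthetic cloud. `clouds_and_poses` is a list of
    (N x 3 point array, 4 x 4 camera-to-world pose)."""
    merged = []
    for points, pose in clouds_and_poses:
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        merged.append((homogeneous @ pose.T)[:, :3])  # apply rigid transform
    return np.vstack(merged)
```

Real systems would typically refine the poses first (e.g. with an ICP-style alignment) before stacking.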
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of the imitation of human skin reflectance spectra, the corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.
The application of cloud computing to scientific workflows: a study of cost and performance.
Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S
2013-01-28
The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.
Rendering of dense, point cloud data in a high fidelity driving simulator.
DOT National Transportation Integrated Search
2014-09-01
Driving Simulators are advanced tools that can address many research questions in transportation. Recently they have been used to advance the practice of transportation engineering, specifically signs, signals, pavement markings, and most powerfully ...
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention to the ocean and the rapid development of marine detection, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing, and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping, and texture animation. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized with 3D interactive techniques and volume rendering. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm, and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying simulation of the marine environment but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil-spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. Such an application can provide valuable reference and decision-making information for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.
Generic-distributed framework for cloud services marketplace based on unified ontology.
Hasan, Samer; Valli Kumari, V
2017-11-01
Cloud computing is a pattern for delivering ubiquitous, on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet. Cloud consumers, on the other hand, use available search engines (Google and Yahoo) to explore cloud service descriptions and find an adequate service. Unfortunately, general-purpose search engines are not designed to return a small and complete set of results, which makes the process a big challenge. This paper presents a generic distributed framework for a cloud services marketplace to automate the cloud service discovery and selection process and remove the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms: a dominant-and-recessive-attributes algorithm borrowed from genetics, and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents a unified cloud services ontology and models real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced execution time by 20% and maintained the same values for all other parameters, while the dominant-and-recessive-attributes approach reduced execution time by 57% but showed a lower value for recall.
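As a toy stand-in for the matching step such a marketplace performs, the sketch below ranks advertised services against a consumer request by term overlap (Jaccard similarity). The service names and the similarity measure are illustrative assumptions; the paper's semantic algorithm works over a unified cloud service ontology rather than raw term sets:

```python
def jaccard(a, b):
    """Overlap between two term sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_services(request_terms, services):
    """Rank advertised services against a consumer request by term overlap.
    `services` maps a service name to its description terms."""
    return sorted(services,
                  key=lambda name: jaccard(request_terms, services[name]),
                  reverse=True)
```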
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
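The sharding the abstract describes can be illustrated with a minimal chunk-assignment scheme: split the time axis into fixed-size chunks and deal them out to nodes, so that long-time-series operations can run in parallel. Round-robin assignment is an illustrative choice only; systems such as Cassandra or SciDB use their own partitioners:

```python
def shard_assignments(n_times, n_nodes, chunk=10):
    """Split a time axis of n_times steps into fixed-size chunks and assign
    them round-robin to n_nodes nodes. Returns {node: [(start, stop), ...]},
    with half-open index ranges."""
    chunks = [(t, min(t + chunk, n_times)) for t in range(0, n_times, chunk)]
    return {node: [c for i, c in enumerate(chunks) if i % n_nodes == node]
            for node in range(n_nodes)}
```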
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Opportunity and Challenges for Migrating Big Data Analytics in Cloud
NASA Astrophysics Data System (ADS)
Amitkumar Manekar, S.; Pradeepini, G., Dr.
2017-08-01
Big data analytics is a big phrase nowadays. As data generation becomes more demanding and more scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend towards "big data-as-a-service" is now talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud-based platforms. This article focuses on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of doing big data analytics on a cloud platform, are in demand for a new era of growth. This article also surveys the available technologies and techniques for migrating big data to the cloud.
NASA Technical Reports Server (NTRS)
Spruce, Joseph; Hargrove, William; Gasser, Gerald
2013-01-01
This presentation discusses the development of a new method for computing NDVI temporal composites from near real-time eMODIS data. This research is being conducted to improve the forest change products used in the ForWarn system for monitoring regional forest disturbances in the United States. ForWarn provides nation-wide NDVI-based forest disturbance detection products that are refreshed every 8 days. Current eMODIS and historical MOD13 24-day NDVI data are used to compute the disturbance detection products. The eMODIS 24-day NDVI data are re-aggregated from 7-day NDVI products. The 24-day eMODIS NDVIs are generally cloud-free, but do not necessarily use the freshest quality data. To shorten the disturbance detection time, a method has been developed that performs adaptive-length/maximum-value compositing of eMODIS NDVI, along with cloud and shadow "noise" mitigation. Tests indicate that this method can shorten detection times by 8-16 days for known recent disturbance events, depending on cloud frequency and disturbance type. The noise mitigation in these tests, though imperfect, helped to improve the quality of the resulting NDVI and forest change products.
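Maximum-value compositing, the core of the method described, takes the per-pixel maximum NDVI across the scenes in a window while skipping cloud/shadow-flagged pixels. A simplified sketch (the actual method also adapts the length of the compositing window, which this omits):

```python
import numpy as np

def max_value_composite(ndvi_stack, cloud_masks):
    """Per-pixel maximum-value composite over a (time, y, x) stack of NDVI
    scenes, ignoring pixels flagged as cloud or shadow. Pixels masked at
    every time step come back as NaN."""
    masked = np.where(cloud_masks, -np.inf, ndvi_stack)      # drop noisy pixels
    composite = masked.max(axis=0)
    return np.where(np.isinf(composite), np.nan, composite)  # all-cloud -> NaN
```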
Effect of Clouds on Apertures of Space-based Air Fluorescence Detectors
NASA Technical Reports Server (NTRS)
Sokolsky, P.; Krizmanic, J.
2003-01-01
Space-based ultra-high-energy cosmic ray detectors observe fluorescence light from extensive air showers produced by these particles in the troposphere. Clouds can scatter and absorb this light and produce systematic errors in energy determination and spectrum normalization. We study the possibility of using IR remote sensing data from MODIS and GOES satellites to delimit clear areas of the atmosphere. The efficiency for detecting ultra-high-energy cosmic rays whose showers do not intersect clouds is determined for real, night-time cloud scenes. We use the MODIS SST cloud mask product to define clear pixels for cloud scenes along the equator and use the OWL Monte Carlo to generate showers in the cloud scenes. We find the efficiency for cloud-free showers with closest approach of three pixels to a cloudy pixel is 6.5% exclusive of other factors. We conclude that defining a totally cloud-free aperture reduces the sensitivity of space-based fluorescence detectors to unacceptably small levels.
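The cloud-free aperture cut described above can be mimicked on a pixel grid: a shower pixel counts as usable only if its closest cloudy pixel is at least three pixels away. A brute-force sketch; the distance convention (Chebyshev here) and the grid setup are assumptions, not necessarily the cut used in the study:

```python
import numpy as np

def clear_fraction(cloud_mask, buffer_px=3):
    """Fraction of pixels whose nearest cloudy pixel is at least `buffer_px`
    pixels away (Chebyshev distance), i.e. pixels that survive a cloud-free
    aperture cut. Brute force over the full grid."""
    cloudy = np.argwhere(cloud_mask)
    ny, nx = cloud_mask.shape
    clear = 0
    for i in range(ny):
        for j in range(nx):
            if cloudy.size == 0 or \
               np.abs(cloudy - (i, j)).max(axis=1).min() >= buffer_px:
                clear += 1
    return clear / cloud_mask.size
```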
Evolution of the Debris Cloud Generated by the Fengyun-1C Fragmentation Event
NASA Technical Reports Server (NTRS)
Pardini, Carmen; Anselmo, Luciano
2007-01-01
The cloud of cataloged debris produced in low Earth orbit by the fragmentation of the Fengyun-1C spacecraft was propagated for 15 years, taking into account all relevant perturbations. Unfortunately, the cloud turned out to be very stable, suffering no substantial debris decay during the time span considered. The only significant short-term evolution was the differential spreading of the orbital planes of the fragments, leading to the formation of a debris shell around the Earth approximately 7-8 months after the breakup, and the perigee precession of the elliptical orbits. Both effects will render the shell more "isotropic" in the coming years. The immediate consequence of the Chinese anti-satellite test, carried out in an orbital regime populated by many important operational satellites, was to increase significantly the probability of collision with man-made debris. For the two Italian spacecraft launched in the first half of 2007, the collision probability with cataloged objects increased by 12% for AGILE, in equatorial orbit, and by 38% for COSMO-SkyMed 1, in sun-synchronous orbit.
A Simple Technique for Securing Data at Rest Stored in a Computing Cloud
NASA Astrophysics Data System (ADS)
Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai
"Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those applications and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing infrastructure. Our simple technique, implemented with Open Source software, solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical: a scanning application that validates external firewall implementations.
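The core idea, encrypting with a public key before the data ever reaches the cloud so that storage administrators cannot read it, can be shown with textbook RSA. This toy uses deliberately tiny primes and is NOT secure; the authors' actual system uses audited open-source public-key tooling, not anything like this:

```python
# Textbook RSA with tiny primes -- a toy to show the data-at-rest idea only.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Py >= 3.8)

def encrypt_at_rest(data: bytes) -> list:
    """What the cloud stores: one RSA residue per plaintext byte.
    Encryption needs only the public pair (e, n)."""
    return [pow(b, e, n) for b in data]

def decrypt(stored: list) -> bytes:
    """Only the private-key holder (d) can recover the plaintext."""
    return bytes(pow(c, d, n) for c in stored)
```

In the scheme the abstract describes, only ciphertext ever resides on the cloud service, so a compromised or curious administrator sees nothing readable.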
NASA Astrophysics Data System (ADS)
Fehm, Thomas Felix; Deán-Ben, Xosé Luís; Razansky, Daniel
2014-10-01
Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehm, Thomas Felix; Razansky, Daniel, E-mail: dr@tum.de; Faculty of Medicine, Technische Universität München, Munich
2014-10-27
Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.
Developing cloud-based Business Process Management (BPM): a survey
NASA Astrophysics Data System (ADS)
Mercia; Gunawan, W.; Fajar, A. N.; Alianto, H.; Inayatulloh
2018-03-01
In today’s highly competitive business environment, modern enterprises face difficulties in cutting unnecessary costs, eliminating waste and delivering greater benefits to the organization. Companies are increasingly turning to a more flexible IT environment to help them realize this goal. For this reason, this article applies cloud-based Business Process Management (BPM), which enables organizations to focus on modeling, monitoring and process management. Cloud-based BPM consists of business processes, business information and IT resources, which help build real-time intelligence systems based on business management and cloud technology. Cloud computing is a paradigm that involves procuring dynamically scalable resources over the internet as an IT service. Cloud-based BPM services address common problems faced by traditional BPM, especially in promoting flexible, event-driven business processes that exploit opportunities in the marketplace.
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering process; for instance, by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibilities of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of the VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of the VH to medical images that have large intensity ranges and volume dimensions and hence require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render target (MRT) extension of modern graphical processing units (GPUs), which enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved with only minor visual and numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
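The adaptive binning described above can be sketched on the CPU. The sketch below assumes a naive 1-D K-means over voxel intensities and then accumulates each voxel's visibility into its cluster's bin; the paper's GPU/MRT implementation and exact clustering variant are not reproduced, and all function names are illustrative:

```python
import random

def kmeans_1d(values, k, iters=25, seed=0):
    """Cluster scalar intensities into k groups (naive 1-D k-means)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    assign = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        for i, v in enumerate(values):
            assign[i] = min(range(k), key=lambda j: abs(v - centers[j]))
        # Update step: recompute each center (keep old center if empty).
        for j in range(k):
            members = [v for v, a in zip(values, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, assign

def adaptive_visibility_histogram(intensities, visibilities, k):
    """Accumulate per-voxel visibility into k intensity-adaptive bins."""
    centers, assign = kmeans_1d(intensities, k)
    bins = [0.0] * k
    for a, vis in zip(assign, visibilities):
        bins[a] += vis
    return centers, bins

# Two well-separated intensity populations (e.g. soft tissue vs. bone).
intensities = [10, 12, 11, 13, 200, 205, 198, 202]
visibilities = [0.1, 0.2, 0.1, 0.2, 0.5, 0.4, 0.6, 0.5]
centers, bins = adaptive_visibility_histogram(intensities, visibilities, k=2)
print(centers, bins)
```

With K = 2 the eight voxels collapse into two adaptive bins whose totals preserve the overall visibility mass, which is the efficiency trade-off the AB-VH exploits: far fewer bins than the full intensity range, at the cost of intra-cluster intensity resolution.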
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
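As a rough illustration of multi-exposure fusion, the following sketch blends a single pixel across several exposures using a Mertens-style "well-exposedness" weight (a Gaussian centered on mid-gray), so overexposed circumsolar pixels and underexposed horizon pixels are down-weighted. This is an assumed scalar simplification, not HDRCloudSeg's actual radiance-map pipeline:

```python
import math

def well_exposedness(z, sigma=0.2):
    """Weight favoring mid-range pixel values (Gaussian around 0.5)."""
    return math.exp(-((z - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_pixel(exposures):
    """Weighted average of the same pixel across several exposures in [0, 1]."""
    weights = [well_exposedness(z) for z in exposures]
    total = sum(weights)
    return sum(w * z for w, z in zip(weights, exposures)) / total

# Circumsolar pixel: blown out in the long exposure, usable in the short one.
fused = fuse_pixel([0.98, 0.55, 0.20])
print(round(fused, 3))
```

The fused value is pulled toward the well-exposed middle frame rather than the saturated one, which is why segmentation on the fused (or HDR) result recovers detail that any single shot loses.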
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.
2004-01-01
Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for this prediction, and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.
Amplitude interpretation and visualization of three-dimensional reflection data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enachescu, M.E.
1994-07-01
Digital recording and processing of modern three-dimensional surveys allow for relatively good preservation and correct spatial positioning of seismic reflection amplitude. A four-dimensional seismic reflection field matrix R(x,y,t,A), which can be computer visualized (i.e., real-time interactively rendered, edited, and animated), is now available to the interpreter. The amplitude contains encoded geological information indirectly related to lithologies and reservoir properties. The magnitude of the amplitude depends not only on the acoustic impedance contrast across a boundary, but is also strongly affected by the shape of the reflective boundary. This allows the interpreter to image subtle tectonic and structural elements not obvious on time-structure maps. The use of modern workstations allows for appropriate color coding of the total available amplitude range, routine on-screen time/amplitude extraction, and later display of horizon amplitude maps (horizon slices) or complex amplitude-structure spatial visualization. Stratigraphic, structural, tectonic, fluid distribution, and paleogeographic information are commonly obtained by displaying the amplitude variation A = A(x,y,t) associated with a particular reflective surface or seismic interval. As illustrated with several case histories, traditional structural and stratigraphic interpretation combined with a detailed amplitude study generally greatly enhances the extraction of subsurface geological information from a reflection data volume. In the context of three-dimensional seismic surveys, the horizon amplitude map (horizon slice), amplitude attachment to structure, and "bright clouds" displays are very powerful tools available to the interpreter.
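The horizon-slice display A = A(x,y,t) described above amounts to picking the amplitude along an interpreted horizon time surface for every trace. A minimal nearest-sample sketch (function and variable names are illustrative, not from the paper; production software would interpolate between samples):

```python
def horizon_slice(volume, horizon_t, dt):
    """Extract amplitude along an interpreted horizon (nearest sample).

    volume[x][y][k]  -- amplitude at time sample k (sample interval dt)
    horizon_t[x][y]  -- horizon two-way time at trace (x, y)
    """
    amap = []
    for x, plane in enumerate(volume):
        row = []
        for y, trace in enumerate(plane):
            k = round(horizon_t[x][y] / dt)      # nearest time sample
            k = max(0, min(k, len(trace) - 1))   # clamp to the trace
            row.append(trace[k])
        amap.append(row)
    return amap

# Tiny 2x2-trace volume with 4 time samples per trace (dt = 4 ms).
vol = [[[0, 5, 1, 0], [0, 1, 7, 0]],
       [[0, 2, 6, 0], [9, 1, 0, 0]]]
hor = [[4.0, 8.0],
       [8.0, 0.0]]
print(horizon_slice(vol, hor, dt=4.0))  # picks 5, 7, 6, 9
```

The resulting map is the "horizon slice" the abstract refers to: an amplitude image over (x, y) whose lateral variation carries the stratigraphic and fluid-distribution information.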
GPU-based real-time soft tissue deformation with cutting and haptic feedback.
Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane
2010-12-01
This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.
Operations Concepts for Deep-Space Missions: Challenges and Opportunities
NASA Technical Reports Server (NTRS)
McCann, Robert S.
2010-01-01
Historically, manned spacecraft missions have relied heavily on real-time communication links between crewmembers and ground control for generating crew activity schedules and working time-critical off-nominal situations. On crewed missions beyond the Earth-Moon system, speed-of-light limitations will render this ground-centered concept of operations obsolete. A new, more distributed concept of operations will have to be developed in which the crew takes on more responsibility for real-time anomaly diagnosis and resolution, activity planning and replanning, and flight operations. I will discuss the innovative information technologies, human-machine interfaces, and simulation capabilities that must be created in order to develop, test, and validate deep-space mission operations.
Real-Time Mapping Spectroscopy on the Ground, in the Air, and in Space
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Allwood, A.; Chien, S.; Green, R. O.; Wettergreen, D. S.
2016-12-01
Real-time data interpretation can benefit both remote in situ exploration and remote sensing. Basic analyses at the sensor can monitor instrument performance and reveal invisible science phenomena in real time. This promotes situational awareness for remote robotic explorers or campaign decision makers, enabling adaptive data collection, reduced downlink requirements, and coordinated multi-instrument observations. Fast analysis is ideal for mapping spectrometers providing unambiguous, quantitative geophysical measurements. This presentation surveys recent computational advances in real-time spectroscopic analysis for Earth science and planetary exploration. Spectral analysis at the sensor enables new operations concepts that significantly improve science yield. Applications include real-time detection of fugitive greenhouse emissions by airborne monitoring, real-time cloud screening and mineralogical mapping by orbital spectrometers, and adaptive measurement by the PIXL instrument on the Mars 2020 rover. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
NASA Astrophysics Data System (ADS)
DeFelice, T. P.; Axisa, Duncan
2017-09-01
This paper builds upon the processes and framework already established for identifying, integrating and testing an unmanned aircraft system (UAS) with sensing technology for use in rainfall-enhancement cloud seeding programs, whether to carry out operational activities or to monitor and evaluate seeding operations. We describe the development and assessment methodologies of an autonomous, adaptive UAS platform that uses in-situ real-time data to sense, target and implement seeding. Deploying such a platform, which also draws on remote real-time data, together with a companion UAS ensures optimal, safe, secure and cost-effective seeding operations, and yields the dataset needed to quantify the results of seeding. It also sets the path for an innovative, paradigm-shifting approach to enhancing precipitation independent of seeding mode. UAS technology is improving, and its application in weather modification must be explored to lay the foundation for future implementation. The broader significance lies in improving technology and automating cloud seeding operations, which lowers the operational footprint of cloud seeding and optimizes its effectiveness and efficiency, while providing the temporal and spatial sensitivity needed to overcome the unpredictability or sparseness of the environmental parameters used to identify conditions suitable for seeding. The dataset from the featured approach will contain data from concurrent Eulerian and Lagrangian perspectives over sub-cloud scales, facilitating the development of cloud seeding decision support tools.
Wood, Andrea
2013-12-01
This work explores disability in the cultural context of contemporary Japanese comics. In contrast to Western comics, Japanese manga have permeated the social fabric of Japan to the extent that vast numbers of people read manga on a daily basis. It has, in fact, become such a popular medium for visual communication that the Japanese government and education systems utilize manga as a social acculturation and teaching tool. This multibillion dollar industry is incredibly diverse, and one particularly popular genre is sports manga. However, Inoue Takehiko's award-winning manga series REAL departs from more conventional sports manga, which typically focus on able-bodied characters with sometimes exaggerated superhuman physical abilities, by adopting a more realistic approach to the world of wheelchair basketball and the people who play it. At the same time REAL explores cultural attitudes toward disability in Japanese culture, where disability is at times rendered "invisible" either through accessibility problems or lingering associations of disability and shame. It is therefore extremely significant that manga, a visual medium, is rendering disability visible: the ultimate movement from margin to center. REAL devotes considerable attention to realistically illustrating the lived experiences of its characters both on and off the court. Consequently, the series not only educates readers about wheelchair basketball but also provides compelling insight into Japanese cultural notions about masculinity, family, responsibility, and identity. The basketball players, at first marginalized by their disability, join together in the unity of a sport typically characterized by its "abledness."
Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study.
Dunn, B E; Almagro, U A; Choi, H; Sheth, N K; Arnold, J S; Recla, D L; Krupinski, E A; Graham, A R; Weinstein, R S
1997-01-01
In this retrospective study, we assess the accuracy, confidence levels, and viewing times of two generalist pathologists using both dynamic-robotic telepathology (TP) and conventional light microscopy (LM) to render diagnoses on a test set of 100 consecutive routine surgical pathology cases. The objective is to determine whether telepathology will allow a pathology group practice at a diagnostic hub to provide routine diagnostic services to a remote hospital without an on-site pathologist. For TP, glass slides were placed on the motorized stage of the robotic microscope of a telepathology system by a senior laboratory technologist in Iron Mountain, MI. Real-time control of the motorized microscope was then transferred to a pathologist in Milwaukee, WI, who viewed images of the glass slides on a video monitor. The telepathologists deferred rendering a diagnosis in 1.5% of cases. Clinically important concordance between the individual diagnoses rendered by telepathology and the "truth" diagnoses established by rereview of glass slides was 98.5%. In the telepathology mode, there were five incorrect diagnoses out of a total of 197. In four of the cases in which the telepathology diagnosis was incorrect, the pathologist's diagnosis by LM was identical to that rendered by telepathology; these represent errors of interpretation and cannot be ascribed to telepathology. The certainty of the pathologists with respect to their diagnoses was evaluated over time. Results for the first 50 cases served as baseline data. For the second 50 cases, confidence in rendering a diagnosis in the telepathology mode was essentially identical to that in the LM viewing mode. Viewing times in the telepathology mode also improved with experience using the system. These results support the concept that an off-site pathologist using dynamic-robotic telepathology can substitute for an on-site pathologist as a service provider.
Real-Time Prediction of Tropical Cyclone Intensity Using COAMPS-TC
2012-01-01
tropospheric (UT) cloud fields (i.e., cirrus clouds) long after the initial eruption cycle, from gradual particle settling and re-entrainment back into the... troposphere. Volcanic sulfur dioxide and hydrogen sulfide vapor molecules are photo-oxidized in the LS, forming gaseous sulphuric acid, which in... concentration over the eastern United States at 1815 UTC on the 17th shown in Fig. 5(a), derived from NASA Ozone Monitoring Instrument (OMI) measurements
Impact of aerosol intrusions on sea-ice melting rates and the structure of Arctic boundary layer clouds
NASA Astrophysics Data System (ADS)
Cotton, W.; Carrio, G.; Jiang, H.
2003-04-01
The Los Alamos National Laboratory sea-ice model (LANL CICE) was implemented into the real-time and research versions of the Colorado State University-Regional Atmospheric Modeling System (RAMS@CSU). The original version of CICE was modified in its structure to allow module communication in an interactive multigrid framework. In addition, some improvements have been made in the routines involved in the coupling, among them the inclusion of iterative methods that consider variable roughness lengths for snow-covered ice thickness categories. This version of the model also includes more complex microphysics that considers the nucleation of cloud droplets, allowing the prediction of mixing ratios and number concentrations for all condensed water species. The real-time version of RAMS@CSU automatically processes the NASA Team SSMI F13 25km sea-ice coverage data; the data are objectively analyzed and mapped to the model grid configuration. We performed two types of cloud resolving simulations to assess the impact of the entrainment of aerosols from above the inversion on Arctic boundary layer clouds. The first series of numerical experiments corresponds to a case observed on May 4, 1998 during the FIRE-ACE/SHEBA field experiment. Results indicate a significant impact on the microstructure of the simulated clouds. When assuming polluted initial profiles above the inversion, the liquid water fraction of the cloud monotonically decreases, the total condensate path increases, and downward IR tends to increase due to a significant increase in the ice water path. The second set of cloud resolving simulations focused on the evaluation of the potential effect of aerosol concentration above the inversion on melting rates during the spring-summer period. For these multi-month simulations, the IFN and CCN profiles were also initialized using the May 4 profiles as benchmarks.
Results suggest that increasing the aerosol concentrations above the boundary layer increases sea-ice melting rates when mixed phase clouds are present.
[Remote Slit Lamp Microscope Consultation System Based on Web].
Chen, Junfa; Zhuo, Yong; Liu, Zuguo; Chen, Yanping
2015-11-01
To enable remote operation of the slit lamp microscope for ophthalmology consultations, and to visually display the real-time status of the remote slit lamp microscope, a remote slit lamp microscope consultation system based on a browser/server (B/S) architecture is designed and implemented. By integrating the slit lamp microscope into the web system, real-time acquisition and transmission of remote control commands and image data is realized. A three-dimensional model of the slit lamp microscope is established and rendered on the web using WebGL technology. Practical application shows that the system supports real-time interaction for remote consultation.
Real-time simulation of thermal shadows with EMIT
NASA Astrophysics Data System (ADS)
Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul
2016-05-01
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating point precision using state-of-the-art graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus time, reaching up to several hours into the past). In this paper we will show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.
User Inspired Management of Scientific Jobs in Grids and Clouds
ERIC Educational Resources Information Center
Withana, Eran Chinthaka
2011-01-01
From time-critical, real time computational experimentation to applications which process petabytes of data there is a continuing search for faster, more responsive computing platforms capable of supporting computational experimentation. Weather forecast models, for instance, process gigabytes of data to produce regional (mesoscale) predictions on…
Manga Vectorization and Manipulation with Procedural Simple Screentone.
Yao, Chih-Yuan; Hung, Shih-Hsuan; Li, Guo-Wei; Chen, I-Yu; Adhitya, Reza; Lai, Yu-Chi
2017-02-01
Manga are a popular artistic form around the world, and artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization is helpful to digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga to a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes the patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later resolution-independent rendering with our procedural shaders. Our system treats the others as complex screentone areas and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with the curve-based Gaussian refiner. A user can lay down simple scribbles to cluster Manga elements intuitively into semantic components, and our system vectorizes these components into shading meshes along with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system renders the shading regions in real time and independently of resolution with our procedural shaders, and draws borders with the curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also deformed easily to generate interesting results.
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
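The per-modality transfer functions and multimodality compositing mentioned above run in GPU shaders in the paper; as an assumed CPU-side sketch (all names and the toy scalar "color" are illustrative), each sample is mapped through its modality's transfer function and then blended front-to-back along a ray:

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, opacity) samples."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * c * a   # contribution attenuated by what's in front
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha

# Separate transfer functions per modality (toy single-channel "color"):
tf_mr = lambda i: (i / 255.0, 0.8 if i > 100 else 0.05)   # MR anatomy: opaque
tf_us = lambda i: (i / 255.0, 0.3)                        # US: translucent overlay

ray = [tf_us(120), tf_mr(200), tf_mr(40)]   # interleaved dual-modality samples
print(composite_ray(ray))
```

Because each modality keeps its own transfer function, the translucent real-time US samples can be overlaid on the opaque MR anatomy within a single compositing pass, which is the core of the dual-modality display the abstract describes.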
Use of cloud computing in biomedicine.
Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil
2016-12-01
Nowadays, biomedicine is characterised by a growing need for processing of large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of using ICT. The aim of this paper is to explore the concept of cloud computing and its use in the area of biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing paradigm of cloud computing aspects is analysed. Finally, the potential and current use of applications of this technology in scientific research in biomedicine is discussed.
Survey on Security Issues in File Management in Cloud Computing Environment
NASA Astrophysics Data System (ADS)
Gupta, Udit
2015-06-01
Cloud computing has pervaded every aspect of information technology in the past decade. With the advent of cloud networks, it has become easier to process the plethora of data generated by various devices in real time. The privacy of users' data is maintained by data centers around the world, and hence it has become feasible to operate on that data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such security aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing, and it is paramount to address the security concerns that arise from it. This survey paper aims to elucidate the various protocols that can be used for secure file transfer and to analyze the ramifications of using each protocol.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.
2003-01-01
Real clouds and cloud systems are inherently three-dimensional (3D). Because of limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique (i.e., is a 2D or semi-3D CRM appropriate for the super-parameterization?); (2) to calculate and examine the surface energy (especially radiation) and water budgets; and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.
Potential value of satellite cloud pictures in weather modification projects
NASA Technical Reports Server (NTRS)
Biswas, K. R.
1972-01-01
Satellite imagery for one project season of cloud seeding programs in the northern Great Plains has been surveyed for its probable usefulness in weather modification programs. The research projects and the meteorological information available are described. A few illustrative examples of satellite imagery analysis are cited and discussed, along with local observations of weather and the seeding decisions made in the research program. This analysis indicates a definite correlation between satellite-observed cloud patterns and the types of cloud seeding activity undertaken, and suggests a high probability of better and/or earlier decisions if the imagery is available in real time. Infrared imagery provides better estimates of cloud height which can be useful in assessing the possibility of a hail threat. The satellite imagery appears to be of more value to area-seeding projects than to single-cloud seeding experiments where the imagery is of little value except as an aid in local forecasting and analysis.
Remote sensing-based detection and quantification of roadway debris following natural disasters
NASA Astrophysics Data System (ADS)
Axel, Colin; van Aardt, Jan A. N.; Aros-Vera, Felipe; Holguín-Veras, José
2016-05-01
Rapid knowledge of road network conditions is vital to formulate an efficient emergency response plan following any major disaster. Fallen buildings, immobile vehicles, and other forms of debris often render roads impassable to responders. The status of roadways is generally determined through time and resource heavy methods, such as field surveys and manual interpretation of remotely sensed imagery. Airborne lidar systems provide an alternative, cost-effective option for performing network assessments. The 3D data can be collected quickly over a wide area and provide valuable insight about the geometry and structure of the scene. This paper presents a method for automatically detecting and characterizing debris in roadways using airborne lidar data. Points falling within the road extent are extracted from the point cloud and clustered into individual objects using region growing. Objects are classified as debris or non-debris using surface properties and contextual cues. Debris piles are reconstructed as surfaces using alpha shapes, from which an estimate of debris volume can be computed. Results using real lidar data collected after a natural disaster are presented. Initial results indicate that accurate debris maps can be automatically generated using the proposed method. These debris maps would be an invaluable asset to disaster management and emergency response teams attempting to reach survivors despite a crippled transportation network.
Immersive Earth Science: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.; Ramirez-Linan, R.
2017-12-01
Utilizing next-generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission outcomes through VR visualizations that display temporally aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and the subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.
Sahoo, Satya S; Jayapandian, Catherine; Garg, Gaurav; Kaffashi, Farhad; Chung, Stephanie; Bozorgi, Alireza; Chen, Chien-Hun; Loparo, Kenneth; Lhatoo, Samden D; Zhang, Guo-Qiang
2014-01-01
Objective: The rapidly growing volume of multimodal electrophysiological signal data is playing a critical role in patient care and clinical research across multiple disease domains, such as epilepsy and sleep medicine. To facilitate secondary use of these data, there is an urgent need to develop novel algorithms and informatics approaches using new cloud computing technologies as well as ontologies for collaborative multicenter studies. Materials and methods: We present the Cloudwave platform, which (a) defines parallelized algorithms for computing cardiac measures using the MapReduce parallel programming framework, (b) supports real-time interaction with large volumes of electrophysiological signals, and (c) features signal visualization and querying functionalities using an ontology-driven web-based interface. Cloudwave is currently used in the multicenter National Institute of Neurological Diseases and Stroke (NINDS)-funded Prevention and Risk Identification of SUDEP (sudden unexplained death in epilepsy) Mortality (PRISM) project to identify risk factors for sudden death in epilepsy. Results: Comparative evaluations of Cloudwave with traditional desktop approaches to compute cardiac measures (e.g., QRS complexes, RR intervals, and instantaneous heart rate) on epilepsy patient data show one order of magnitude improvement for single-channel ECG data and 20 times improvement for four-channel ECG data. This enables Cloudwave to support real-time user interaction with signal data, which is semantically annotated with a novel epilepsy and seizure ontology. Discussion: Data privacy is a critical issue in using cloud infrastructure, and cloud platforms, such as Amazon Web Services, offer features to support Health Insurance Portability and Accountability Act standards. Conclusion: The Cloudwave platform is a new approach to leveraging large-scale electrophysiological data for advancing multicenter clinical research. PMID:24326538
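The cardiac measures the abstract names (RR intervals, instantaneous heart rate) reduce to simple arithmetic over detected QRS times; a minimal sketch of that arithmetic, not the Cloudwave MapReduce implementation:

```python
def rr_intervals(qrs_times_s):
    """RR intervals (seconds) between successive detected QRS complexes."""
    return [b - a for a, b in zip(qrs_times_s, qrs_times_s[1:])]

def instantaneous_hr(qrs_times_s):
    """Instantaneous heart rate (beats/min), one value per RR interval."""
    return [60.0 / rr for rr in rr_intervals(qrs_times_s)]

times = [0.0, 0.8, 1.6, 2.5]    # QRS peak times in seconds
print(instantaneous_hr(times))  # 75, 75, ~66.7 bpm
```

In a MapReduce setting, each ECG channel (or time window) would be processed by an independent map task running this kind of per-segment computation, which is what makes the workload parallelize well.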
HPC on Competitive Cloud Resources
NASA Astrophysics Data System (ADS)
Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff
Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and correspondingly affects computation time. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall-clock time). We also show that the smallest, cheapest (64-bit) instance in the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency should be introduced for commercial cloud environments where strong performance guarantees do not exist. Concepts like average and expected performance, expected execution time, expected cost to completion, and variance measures, traditionally ignored in the high-performance computing context, should now complement or even substitute the standard definitions of efficiency.
NASA Technical Reports Server (NTRS)
Jeong, Myeong-Jae; Li, Zhanqing
2010-01-01
Aerosol optical thickness (AOT) is one of the aerosol parameters that can be measured on a routine basis with reasonable accuracy from Sun-photometric observations at the surface. However, AOT derived near clouds is fraught with various real effects and artifacts, posing a big challenge for studying aerosol and cloud interactions. Recently, several studies have reported correlations between AOT and cloud cover, pointing to potential cloud contamination and the aerosol humidification effect; however, few quantitative assessments have been made. In this study, various potential causes of the apparent correlations are investigated in order to separate the real effects from the artifacts, using well-maintained observations from the Aerosol Robotic Network, Total Sky Imager, airborne nephelometer, etc., over the Southern Great Plains site operated by the U.S. Department of Energy's Atmospheric Radiation Measurement Program. It was found that aerosol humidification effects can explain about one fourth of the correlation between cloud cover and AOT. New particle genesis, cloud-processed particles, atmospheric dynamics, and aerosol indirect effects likely contribute as much as the remaining three fourths of the relationship between cloud cover and AOT.
Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering
NASA Astrophysics Data System (ADS)
Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi
2015-04-01
This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
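The out-of-core scheduling idea mentioned above, keeping only the time steps needed for playback in memory and fetching the rest on demand, can be illustrated with a toy LRU frame cache. `FrameCache` and `loader` are hypothetical names, not taken from IssCO2:

```python
from collections import OrderedDict

class FrameCache:
    """Tiny LRU cache sketch for out-of-core playback: hold only the
    most recently used time steps of a 4D volume in memory."""
    def __init__(self, capacity, loader):
        self.capacity, self.loader = capacity, loader
        self.frames = OrderedDict()

    def get(self, t):
        if t in self.frames:
            self.frames.move_to_end(t)       # mark as recently used
        else:
            self.frames[t] = self.loader(t)  # fetch from disk/network
            if len(self.frames) > self.capacity:
                self.frames.popitem(last=False)  # evict oldest frame
        return self.frames[t]

loads = []
cache = FrameCache(2, lambda t: loads.append(t) or f"volume@{t}")
cache.get(0); cache.get(1); cache.get(0); cache.get(2)  # evicts t=1
assert loads == [0, 1, 2]
cache.get(1)  # miss: must reload after eviction
assert loads == [0, 1, 2, 1]
```

A real renderer would additionally prefetch the next time steps along the playback direction so the GPU never stalls on a load.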
NASA Astrophysics Data System (ADS)
Webley, P. W.; Lopez, T. M.; Ekstrand, A. L.; Dean, K. G.; Rinkleff, P.; Dehn, J.; Cahill, C. F.; Wessels, R. L.; Bailey, J. E.; Izbekov, P.; Worden, A.
2013-06-01
Volcanoes often erupt explosively and generate a variety of hazards including volcanic ash clouds and gaseous plumes. These clouds and plumes are a significant hazard to the aviation industry and the ground features can be a major hazard to local communities. Here, we provide a chronology of the 2009 Redoubt Volcano eruption using frequent, low spatial resolution thermal infrared (TIR), mid-infrared (MIR) and ultraviolet (UV) satellite remote sensing data. The first explosion of the 2009 eruption of Redoubt Volcano occurred on March 15, 2009 (UTC) and was followed by a series of magmatic explosive events starting on March 23 (UTC). From March 23-April 4 2009, satellites imaged at least 19 separate explosive events that sent ash clouds up to 18 km above sea level (ASL) that dispersed ash across the Cook Inlet region. In this manuscript, we provide an overview of the ash clouds and plumes from the 19 explosive events, detailing their cloud-top heights and discussing the variations in infrared absorption signals. We show that the timing of the TIR data relative to the event end time was critical for inferring the TIR derived height and true cloud top height. The ash clouds were high in water content, likely in the form of ice, which masked the negative TIR brightness temperature difference (BTD) signal typically used for volcanic ash detection. The analysis shown here illustrates the utility of remote sensing data during volcanic crises to measure critical real-time parameters, such as cloud-top heights, changes in ground-based thermal activity, and plume/cloud location.
NASA Astrophysics Data System (ADS)
Sheng, C.; Gao, S.; Xue, M.
2006-11-01
With the ARPS (Advanced Regional Prediction System) Data Analysis System (ADAS) and its complex cloud analysis scheme, reflectivity data from a Chinese CINRAD-SA Doppler radar are used to analyze 3D cloud and hydrometeor fields and in-cloud temperature and moisture. Forecast experiments starting from such initial conditions are performed for a northern China heavy rainfall event to examine the impact of the reflectivity data and other conventional observations on short-range precipitation forecasts. The full 3D cloud analysis mitigates the commonly known spin-up problem with precipitation forecasts, resulting in a significant improvement in precipitation forecasts in the first 4 to 5 hours. In such a case, the position, timing and amount of precipitation are all accurately predicted. When the cloud analysis is used without in-cloud temperature adjustment, only the forecast of light precipitation within the first hour is improved. Additional analysis of surface and upper-air observations on the native ARPS grid, using the 1-degree real-time NCEP AVN analysis as the background, slightly improves the forecast location and intensity of rainfall. Hourly accumulated rainfall estimated from radar reflectivity data is found to be less accurate than the model-predicted precipitation when the full cloud analysis is used.
Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design
NASA Technical Reports Server (NTRS)
Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Architectures and Interfaces: The implications of real-time interaction on software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. A brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget both to the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
Characteristics of cloud-to-ground lightning flashes along the east coast of the United States
NASA Technical Reports Server (NTRS)
Orville, R. E., Sr.; Pyle, R. B.; Henderson, R. W.; Orville, R. E., Jr.; Weisman, R. A.
1985-01-01
A magnetic direction-finding network for the detection of lightning cloud-to-ground strikes has been installed along the east coast of the United States. Most of the lightning occurring from Maine to Florida and as far west as Ohio is detected. Time, location, flash polarity, stroke count, and peak signal amplitude are recorded in real time. Flash locations, time, and polarity are displayed routinely for research and operational purposes. Flash density maps have been generated for the summers of 1983 and 1984, when the network only extended to North Carolina, and show density maxima in northern Virginia and Maryland.
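The location fix from a magnetic direction-finding network amounts to intersecting the bearings reported by two or more stations. A flat-plane sketch of the two-station case (the real network uses geodetic geometry and redundant stations):

```python
import math

def locate(p1, b1, p2, b2):
    """Intersect two bearings (degrees clockwise from north) measured
    at two direction-finder stations on a flat x/y plane."""
    (x1, y1), (x2, y2) = p1, p2
    # Direction vectors: bearing 0 = +y (north), 90 = +x (east).
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    det = d1[0] * d2[1] - d1[1] * d2[0]   # zero if bearings are parallel
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / det
    return (x1 + t * d1[0], y1 + t * d1[1])

# A strike seen at 45 degrees from station A and due north of station B.
x, y = locate((0, 0), 45.0, (10, 0), 0.0)
print(round(x, 6), round(y, 6))  # 10.0 10.0
```

With more than two stations the bearings rarely meet at a single point, so operational systems compute a least-squares fix instead of a raw intersection.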
NASA Astrophysics Data System (ADS)
Chidburee, P.; Mills, J. P.; Miller, P. E.; Fieber, K. D.
2016-06-01
Close-range photogrammetric techniques offer a potentially low-cost approach in terms of implementation and operation for initial assessment and monitoring of landslide processes over small areas. In particular, the Structure-from-Motion (SfM) pipeline is now extensively used to help overcome many constraints of traditional digital photogrammetry, offering increased user-friendliness to nonexperts, as well as lower costs. However, a landslide monitoring approach based on the SfM technique also presents some potential drawbacks due to the difficulty of managing and processing a large volume of data in real time. This research addresses the aforementioned issues by attempting to combine a mobile device with cloud computing technology to develop a photogrammetric measurement solution as part of a monitoring system for landslide hazard analysis. The research presented here focuses on (i) the development of an Android mobile application; (ii) the implementation of SfM-based open-source software in the Amazon cloud computing web service, and (iii) performance assessment through a simulated environment using data collected at a recognized landslide test site in North Yorkshire, UK. Whilst the landslide monitoring mobile application is under development, this paper describes experiments carried out to ensure effective performance of the system in the future. Investigations presented here describe the initial assessment of a cloud-implemented approach, which is developed around the well-known VisualSFM algorithm. Results are compared to point clouds obtained from alternative SfM 3D reconstruction approaches considering a commercial software solution (Agisoft PhotoScan) and a web-based system (Autodesk 123D Catch).
Investigations demonstrate that the cloud-based photogrammetric measurement system is capable of providing results of centimeter-level accuracy, evidencing its potential to provide an effective approach for quantifying and analyzing landslide hazard at a local-scale.
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
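The core of volume raycasting on any device, mobile or not, is front-to-back compositing of samples along each ray, with early termination once a ray is nearly opaque. A minimal one-ray sketch (illustrative Python, not the iOS prototype's GPU code):

```python
def composite_ray(samples, early_exit=0.99):
    """Front-to-back compositing for one ray.

    samples: (intensity, opacity) pairs ordered from the eye into the
    volume; returns the accumulated intensity for that ray/pixel."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= early_exit:  # early ray termination saves work
            break
    return color

# A fully opaque first sample hides everything behind it.
assert composite_ray([(0.5, 1.0), (0.9, 1.0)]) == 0.5
print(round(composite_ray([(1.0, 0.4), (1.0, 0.4)]), 2))  # 0.64
```

Early ray termination is one of the optimizations that makes the frame rates reported above plausible on a tablet: fully occluded samples are never fetched or shaded.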
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With computing capability and display size growing, the mobile device has become a tool to help clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited by bandwidth. Besides, limited by computing capability, memory and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to enable mobile devices on different platforms to access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value retrieval) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance supports mobile-device access to post-processing services on the render server via a client application or a web page.
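A request message in such an XML-described protocol might be built and parsed as follows; the element and attribute names here are hypothetical illustrations, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

def build_request(user, study_uid, operation, **params):
    """Serialize a hypothetical post-processing request: who is asking,
    which study, which 2D/3D operation, and its parameters."""
    root = ET.Element("Request", user=user)
    ET.SubElement(root, "Image", studyUID=study_uid)
    op = ET.SubElement(root, "PostProcess", type=operation)
    for name, value in params.items():
        ET.SubElement(op, "Param", name=name, value=str(value))
    return ET.tostring(root, encoding="unicode")

msg = build_request("radiologist01", "1.2.840.1", "windowLevel",
                    center=40, width=400)
parsed = ET.fromstring(msg)
assert parsed.find("PostProcess").get("type") == "windowLevel"
```

The render server would answer such a request with a rendered image rather than raw voxel data, which is what keeps the bandwidth and client-side computation low.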
Spectral variation during one quasi-periodic oscillation cycle in the black hole candidate H1743-322
NASA Astrophysics Data System (ADS)
Sarathi Pal, Partha; Debnath, Dipak; Chakrabarti, Sandip Kumar
2016-07-01
From the nature of the energy dependence of the power density spectra, it is believed that the oscillation of the Compton cloud may be related to low-frequency quasi-periodic oscillations (LFQPOs). In the context of the two-component advective flow (TCAF) solution, the centrifugal-pressure-supported boundary layer of a transonic flow acts as the Compton cloud. This region undergoes resonance oscillation when the cooling time scale roughly agrees with the infall time scale as matter crosses the region. By carefully separating photons emitted at different phases of a complete oscillation, we establish beyond reasonable doubt that such an oscillation is the cause of LFQPOs. We show that the degree of Comptonization, and therefore the spectral properties of the flow, oscillate systematically with the phase of the LFQPOs. We analyze the properties of a 0.2 Hz LFQPO exhibited by the black hole candidate H 1743-322 using 3-80 keV data from the NuSTAR satellite. This object was chosen because of the availability of high-quality data for a relatively low-frequency oscillation, rendering phase-wise separation of the light-curve data straightforward.
Cloud based intelligent system for delivering health care as a service.
Kaur, Pankaj Deep; Chana, Inderveer
2014-01-01
The promising potential of cloud computing and its convergence with technologies such as mobile computing, wireless networks and sensor technologies allows for the creation and delivery of newer types of cloud services. In this paper, we advocate the use of cloud computing for the creation and management of cloud-based health care services. As a representative case study, we design a Cloud Based Intelligent Health Care Service (CBIHCS) that performs real-time monitoring of user health data for the diagnosis of chronic illness such as diabetes. Advanced body sensor components are utilized to gather user-specific health data and store it in cloud-based storage repositories for subsequent analysis and classification. In addition, infrastructure-level mechanisms are proposed to provide dynamic resource elasticity for CBIHCS. Experimental results demonstrate that a classification accuracy of 92.59% is achieved with our prototype system and that the predicted patterns of CPU usage offer better opportunities for adaptive resource elasticity.
NASA Astrophysics Data System (ADS)
Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos
2015-02-01
The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
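The greedy top-down subdivision can be sketched with a priority queue keyed on screen-space error: always split the worst region until the triangle budget is spent. This toy version tracks only scalar errors and ignores the real algorithm's mesh connectivity and crack-avoidance machinery:

```python
import heapq

def refine(root_error, max_triangles, children):
    """Greedy top-down refinement sketch.

    children(err) returns the two child errors produced by splitting a
    region with error err. Returns the worst remaining leaf error."""
    heap = [(-root_error, 0)]  # max-heap via negated error
    count, next_id = 1, 1      # ids break ties between equal errors
    while count < max_triangles and heap:
        err, _ = heapq.heappop(heap)
        for child_err in children(-err):
            heapq.heappush(heap, (-child_err, next_id))
            next_id += 1
        count += 1             # each split is a net +1 triangle
    return max((-e for e, _ in heap), default=0.0)

# If every split halves the error, 7 splits turn one triangle of
# error 16 into 8 leaves of error 2.
worst = refine(16.0, 8, lambda e: (e / 2, e / 2))
print(worst)  # 2.0
```

Because the queue always pops the largest error first, the triangle budget is spent where it reduces screen-space error the most, which is the essence of the greedy strategy the abstract describes.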
Analyzing Visibility Configurations.
Dachsbacher, C
2011-04-01
Many algorithms, such as level of detail rendering and occlusion culling methods, make decisions based on the degree of visibility of an object, but do not analyze the distribution, or structure, of the visible and occluded regions across surfaces. We present an efficient method to classify different visibility configurations and show how this can be used on top of existing methods based on visibility determination. We adapt co-occurrence matrices for visibility analysis and generalize them to operate on clusters of triangular surfaces instead of pixels. We employ machine learning techniques to reliably classify the extracted feature vectors. Our method allows perceptually motivated level of detail methods for real-time rendering applications by detecting configurations with expected visual masking. We exemplify the versatility of our method with an analysis of area light visibility configurations in ray tracing and an area-to-area visibility analysis suitable for hierarchical radiosity refinement. Initial results demonstrate the robustness, simplicity, and performance of our method in synthetic scenes, as well as real applications.
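A co-occurrence matrix over binary visibility labels of adjacent surface clusters can be built as follows. This is a simplified sketch: the paper generalizes the classic pixel-based formulation to cluster adjacencies and richer features:

```python
def cooccurrence(labels, pairs):
    """2x2 co-occurrence matrix over binary visibility labels
    (0 = occluded, 1 = visible) for adjacent cluster pairs.
    m[a][b] counts adjacencies where one cluster has label a and
    its neighbour has label b."""
    m = [[0, 0], [0, 0]]
    for i, j in pairs:
        m[labels[i]][labels[j]] += 1
        m[labels[j]][labels[i]] += 1  # adjacency is symmetric
    return m

# Four clusters in a row: visible, visible, occluded, occluded.
labels = [1, 1, 0, 0]
pairs = [(0, 1), (1, 2), (2, 3)]
print(cooccurrence(labels, pairs))  # [[2, 1], [1, 2]]
```

The off-diagonal counts capture how fragmented the visible/occluded boundary is, which is exactly the structural information a plain visibility percentage throws away; the matrices then feed the classifier as feature vectors.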
PRISM: An open source framework for the interactive design of GPU volume rendering shaders.
Drouin, Simon; Collins, D Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
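PRISM's central idea, that the per-sample ray-integration step is a replaceable block while the surrounding ray-casting loop stays fixed, can be sketched by passing the integration step as a function. This is illustrative Python, not the framework's actual GPU shader code:

```python
def raycast(samples, integrate):
    """Fixed ray-marching loop; only the integration step varies.
    samples are (intensity, opacity) pairs along one ray."""
    state = (0.0, 0.0)  # accumulated (color, alpha)
    for s in samples:
        state = integrate(state, s)
    return state

def emission_absorption(state, s):
    """Standard front-to-back emission-absorption compositing."""
    color, alpha = state
    c, a = s
    return color + (1 - alpha) * a * c, alpha + (1 - alpha) * a

def maximum_intensity(state, s):
    """Swapped-in effect: maximum intensity projection (MIP)."""
    color, _ = state
    return max(color, s[0]), 1.0

ray = [(0.2, 0.5), (0.9, 0.5)]
print(raycast(ray, maximum_intensity)[0])    # 0.9
print(raycast(ray, emission_absorption)[0])  # 0.325
```

Swapping `emission_absorption` for `maximum_intensity` changes the rendering effect without touching the loop, mirroring how PRISM lets users replace only the critical blocks of the ray integrator.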
PRISM: An open source framework for the interactive design of GPU volume rendering shaders
Collins, D. Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069
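The emission-absorption ray integration at the core of a GPU ray-caster such as the one PRISM exposes can be sketched on the CPU as follows. This is a minimal illustration of front-to-back compositing, not PRISM's actual shader code; the per-sample colors and opacities are assumed inputs that would normally come from a transfer function.

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back emission-absorption compositing along one ray.

    colors: (n, 3) RGB emitted at each sample; alphas: (n,) opacity per sample.
    Returns accumulated RGB and accumulated opacity. Breaking out once the ray
    is effectively opaque mirrors the common early-ray-termination optimization.
    """
    acc_rgb = np.zeros(3)
    acc_a = 0.0
    for c, a in zip(colors, alphas):
        acc_rgb += (1.0 - acc_a) * a * c   # pre-multiplied contribution
        acc_a += (1.0 - acc_a) * a
        if acc_a > 0.99:                   # early ray termination
            break
    return acc_rgb, acc_a

# A ray through a uniform semi-transparent red medium (8 samples, opacity 0.3):
rgb, a = composite_ray(np.tile([1.0, 0.0, 0.0], (8, 1)), np.full(8, 0.3))
```

Swapping out the body of the loop is exactly the kind of "replaceable ray integration step" the framework describes.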
A Real-Time Interactive System for Facial Makeup of Peking Opera
NASA Astrophysics Data System (ADS)
Cai, Feilong; Yu, Jinhui
In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose, and mouth. Next, we pick patterns from the bank and compose them into a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit the patterns and, based on the edits, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibitions, and Peking Opera education.
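The free-form deformation mentioned above can be illustrated with a minimal 2D Bernstein-basis FFD over the unit square. This is a generic textbook sketch, not the authors' vector-based tool; the lattice size and sample points are arbitrary. A useful property for testing: when the control points sit at their regular grid positions, the deformation is the identity map.

```python
import numpy as np
from math import comb

def ffd_2d(points, lattice):
    """Deform 2D points in the unit square with a Bezier free-form deformation.

    points: (k, 2) array with coordinates in [0, 1].
    lattice: (m+1, n+1, 2) control points; a regular grid gives the identity map.
    """
    m, n = lattice.shape[0] - 1, lattice.shape[1] - 1
    out = np.zeros_like(points, dtype=float)
    for k, (u, v) in enumerate(points):
        for i in range(m + 1):
            bu = comb(m, i) * (1 - u) ** (m - i) * u ** i   # Bernstein basis in u
            for j in range(n + 1):
                bv = comb(n, j) * (1 - v) ** (n - j) * v ** j
                out[k] += bu * bv * lattice[i, j]
    return out

# Identity lattice: control points at their regular grid positions.
m = n = 3
grid = np.stack(np.meshgrid(np.linspace(0, 1, m + 1),
                            np.linspace(0, 1, n + 1), indexing="ij"), axis=-1)
pts = np.array([[0.2, 0.7], [0.5, 0.5]])
deformed = ffd_2d(pts, grid)
```

Displacing any `grid[i, j]` then bends the pattern smoothly around it, which is the editing operation the system offers.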
An automatic locating system for cloud-to-ground lightning. [which utilizes a microcomputer
NASA Technical Reports Server (NTRS)
Krider, E. P.; Pifer, A. E.; Uman, M. A.
1980-01-01
Automatic locating systems which respond to cloud-to-ground lightning and which discriminate against cloud discharges and background noise are described. Subsystems of the locating system, which include the direction finder and the position analyzer, are discussed. The direction finder senses the electromagnetic fields radiated by lightning on two orthogonal magnetic loop antennas and on a flat-plate electric antenna. The position analyzer is a preprogrammed microcomputer system which automatically computes, maps, and records lightning locations in real time using data inputs from the direction finder. The use of the locating systems for wildfire management and fire weather forecasting is discussed.
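The crossed-loop direction finding described above reduces, in its simplest form, to an arctangent of the two loop signals, with the electric-field polarity resolving the 180-degree ambiguity. The sketch below uses an idealized signal model (loops responding as cosine/sine of the arrival azimuth) and is not the actual instrument's processing.

```python
import math

def bearing_from_loops(ns_signal, ew_signal, e_field_sign):
    """Estimate the azimuth (degrees clockwise from north) of a lightning stroke.

    The two orthogonal loops are assumed to respond as cos/sin of the arrival
    azimuth; the sign of the flat-plate electric-field sample resolves the
    inherent 180-degree ambiguity of a loop pair.
    """
    theta = math.degrees(math.atan2(ew_signal, ns_signal)) % 360.0
    if e_field_sign < 0:          # opposite field polarity: flip by 180 degrees
        theta = (theta + 180.0) % 360.0
    return theta

# A stroke at 60 degrees east of north with positive polarity:
az = bearing_from_loops(math.cos(math.radians(60)),
                        math.sin(math.radians(60)), +1)
```

Two or more such bearings from separated direction finders let the position analyzer triangulate the stroke location.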
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
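The parallax-to-height conversion underlying such stereo analysis can be sketched with a simplified two-view geometry, assuming the satellites view the cloud from opposite sides; the operational GOES/NOAA algorithm additionally handles image navigation and time-offset corrections not shown here.

```python
import math

def cloud_top_height(parallax_m, zen1_deg, zen2_deg):
    """Cloud-top height from the apparent ground-projected displacement
    (parallax, in metres) between two satellite views whose zenith angles
    lie on opposite sides of the cloud."""
    return parallax_m / (math.tan(math.radians(zen1_deg)) +
                         math.tan(math.radians(zen2_deg)))

# A 5 km measured parallax seen at 45 and 30 degree viewing zenith angles:
h = cloud_top_height(5000.0, 45.0, 30.0)   # height in metres
```

The ~1500 m vertical resolution quoted above corresponds to the smallest parallax the automated image matcher can resolve.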
3D Radiative Aspects of the Increased Aerosol Optical Depth Near Clouds
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Wen, Guoyong; Remer, Lorraine; Cahalan, Robert; Coakley, Jim
2007-01-01
To characterize aerosol-cloud interactions it is important to correctly retrieve aerosol optical depth in the vicinity of clouds. It is well reported in the literature that aerosol optical depth increases with cloud cover. Part of the increase comes from real physics, such as humidification; another part, however, comes from 3D cloud effects in the remote sensing retrievals. In many cases it is hard to say whether the retrieved increased values of aerosol optical depth are remote sensing artifacts or real. In this presentation, we discuss how the 3D cloud effects can be mitigated. We demonstrate a simple model that can assess the enhanced illumination of cloud-free columns in the vicinity of clouds. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from enhanced Rayleigh scattering due to the presence of surrounding clouds. A stochastic cloud model of broken cloudiness is used to simulate the upward flux.
Cloud-Based Data Sharing Connects Emergency Managers
NASA Technical Reports Server (NTRS)
2014-01-01
Under an SBIR contract with Stennis Space Center, Baltimore-based StormCenter Communications Inc. developed an improved interoperable platform for sharing geospatial data over the Internet in real time-information that is critical for decision makers in emergency situations.
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is among the closest to commercialization. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of LED screens, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on a GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
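The core of DIBR view synthesis is shifting each texture pixel by a disparity derived from its depth, with a per-pixel depth test resolving occlusions. The 1D sketch below illustrates that mechanism only; it is not the paper's GPU implementation, and the focal length and baseline are arbitrary illustrative values.

```python
import numpy as np

def dibr_shift_row(texture, depth, focal, baseline):
    """Synthesize one row of a virtual view by horizontal pixel warping.

    Disparity = focal * baseline / depth, so nearer pixels shift further.
    A per-pixel depth test keeps the nearest contributor; unfilled target
    pixels remain 0 (the "holes" that DIBR systems later inpaint).
    """
    w = texture.shape[0]
    out = np.zeros(w)
    zbuf = np.full(w, np.inf)
    for x in range(w):
        d = int(round(focal * baseline / depth[x]))
        xt = x + d
        if 0 <= xt < w and depth[x] < zbuf[xt]:
            out[xt] = texture[x]
            zbuf[xt] = depth[x]
    return out

# Background at depth 2 with a near object (depth 1) at pixels 3-4:
row = dibr_shift_row(np.arange(8, dtype=float),
                     np.array([2., 2., 2., 1., 1., 2., 2., 2.]),
                     focal=2.0, baseline=1.0)
```

In the result the near pixels land further right than the background, a background pixel occluded by them is discarded, and a hole opens where the near object used to be.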
Near Real Time Structural Health Monitoring with Multiple Sensors in a Cloud Environment
NASA Astrophysics Data System (ADS)
Bock, Y.; Todd, M.; Kuester, F.; Goldberg, D.; Lo, E.; Maher, R.
2017-12-01
A repeated near real time 3-D digital surrogate representation of critical engineered structures can be used to provide actionable data on subtle time-varying displacements in support of disaster resiliency. We describe a damage monitoring system of optimally-integrated complementary sensors, including Global Navigation Satellite Systems (GNSS), Micro-Electro-Mechanical Systems (MEMS) accelerometers coupled with the GNSS (seismogeodesy), light multi-rotor Unmanned Aerial Vehicles (UAVs) equipped with high-resolution digital cameras and GNSS/IMU, and ground-based Light Detection and Ranging (LIDAR). The seismogeodetic system provides point measurements of static and dynamic displacements and seismic velocities of the structure. The GNSS ties the UAV and LIDAR imagery to an absolute reference frame with respect to survey stations in the vicinity of the structure to isolate the building response to ground motions. The GNSS/IMU can also estimate the trajectory of the UAV with respect to the absolute reference frame. With these constraints, multiple UAVs and LIDAR images can provide 4-D displacements of thousands of points on the structure. The UAV systematically circumnavigates the target structure, collecting high-resolution image data, while the ground LIDAR scans the structure from different perspectives to create a detailed baseline 3-D reference model. UAV- and LIDAR-based imaging can subsequently be repeated after extreme events, or after long time intervals, to assess before and after conditions. The unique challenge is that disaster environments are often highly dynamic, resulting in rapidly evolving, spatio-temporal data assets with the need for near real time access to the available data and the tools to translate these data into decisions. The seismogeodetic analysis has already been demonstrated in the NASA AIST Managed Cloud Environment (AMCE) designed to manage large NASA Earth Observation data projects on Amazon Web Services (AWS). 
The Cloud provides distinct advantages in terms of extensive storage and computing resources required for processing UAV and LIDAR imagery. Furthermore, it avoids single points of failure and allows for remote operations during emergencies, when near real time access to structures may be limited.
Real-Time and Retrospective Health-Analytics-as-a-Service: A Novel Framework.
Khazaei, Hamzeh; McGregor, Carolyn; Eklund, J Mikael; El-Khatib, Khalil
2015-11-18
Analytics-as-a-service (AaaS) is one of the latest provisions emerging from the cloud services family. Utilizing this paradigm of computing in health informatics will benefit patients, care providers, and governments significantly. This work is a novel approach to realizing health analytics as services, in critical care units in particular. The objective was to design, implement, evaluate, and deploy an extendable, big-data-compatible framework for health-analytics-as-a-service that offers both real-time and retrospective analysis. We present a novel framework that can realize health data analytics-as-a-service. The framework is flexible and configurable for different scenarios by utilizing the latest technologies and best practices for data acquisition, transformation, storage, analytics, knowledge extraction, and visualization. We have instantiated the proposed method through the Artemis project, a customization of the framework for live monitoring and retrospective research on premature babies and ill term infants in neonatal intensive care units (NICUs). We demonstrate the proposed framework for monitoring NICUs and refer to it as the Artemis-In-Cloud (Artemis-IC) project. A pilot of Artemis has been deployed in the SickKids hospital NICU. By feeding the output of this pilot setup into an analytical model, we predict important performance measures for the final deployment of Artemis-IC. This process can be carried out for other hospitals following the same steps with minimal effort. SickKids' NICU has 36 beds, and its patients can be classified generally into 5 different types, including surgical and premature babies. The arrival rate is estimated as 4.5 patients per day, and the average length of stay was calculated as 16 days. The mean number of medical monitoring algorithms per patient is 9, which yields 311 live algorithms for the whole NICU running on the framework.
The memory and computation power required for Artemis-IC to handle the SickKids NICU will be 32 GB and 16 CPU cores, respectively. The required amount of storage was estimated as 8.6 TB per year. There will always be 34.9 patients in SickKids NICU on average. Currently, 46% of patients cannot get admitted to SickKids NICU due to lack of resources. By increasing the capacity to 90 beds, all patients can be accommodated. For such a provisioning, Artemis-IC will need 16 TB of storage per year, 55 GB of memory, and 28 CPU cores. Our contributions in this work relate to a cloud architecture for the analysis of physiological data for clinical decisions support for tertiary care use. We demonstrate how to size the equipment needed in the cloud for that architecture based on a very realistic assessment of the patient characteristics and the associated clinical decision support algorithms that would be required to run for those patients. We show the principle of how this could be performed and furthermore that it can be replicated for any critical care setting within a tertiary institution.
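The capacity figures above (4.5 arrivals/day, 16-day average stay, 36 beds, ~34.9 occupied on average) are broadly consistent with an Erlang loss (M/G/c/c) model, though the abstract does not state which capacity model the authors used; the sketch below is under that assumption.

```python
def erlang_b(offered_load, servers):
    """Erlang B blocking probability via the standard stable recursion
    B(a, n) = a*B(a, n-1) / (n + a*B(a, n-1)), starting from B(a, 0) = 1."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

arrival_rate = 4.5                        # patients/day (from the abstract)
stay = 16.0                               # days (from the abstract)
beds = 36
offered = arrival_rate * stay             # 72 erlangs of offered load
blocking = erlang_b(offered, beds)
occupancy = offered * (1.0 - blocking)    # mean number of occupied beds
```

With 72 erlangs offered to 36 beds the recursion gives a blocking probability just over 50% and a mean occupancy close to the 34.9 patients quoted above; the abstract's 46% rejection figure suggests the authors' exact model differs somewhat from pure Erlang loss.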
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
NASA Astrophysics Data System (ADS)
Martin, C.; Dye, M. J.; Daniels, M. D.; Keiser, K.; Maskey, M.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.
2015-12-01
The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project tackles the challenges of collecting and disseminating geophysical observational data in real-time, especially for researchers with limited IT budgets and expertise. The CHORDS Portal is a component that allows research teams to easily configure and operate a cloud-based service which can receive data from dispersed instruments, manage a rolling archive of the observations, and serve these data to any client on the Internet. The research group (user) creates a CHORDS portal simply by running a prepackaged "CHORDS appliance" on Amazon Web Services. The user has complete ownership and management of the portal. Computing expenses are typically very small. RESTful protocols are employed for delivering and fetching data from the portal, which means that any system capable of sending an HTTP GET message is capable of accessing the portal. A simple API is defined, making it straightforward for non-experts to integrate a diverse collection of field instruments. Languages with network access libraries, such as Python, sh, Matlab, R, IDL, Ruby and JavaScript (and most others) can retrieve structured data from the portal with just a few lines of code. The user's private portal provides a browser-based system for configuring, managing and monitoring the health of the integrated real-time system. This talk will highlight the design goals, architecture and agile development of the CHORDS Portal. A running portal, with operational data feeds from across the country, will be presented.
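The claim that "any system capable of sending an HTTP GET message" can access a CHORDS portal can be illustrated in a few lines. The host name, URL path, and JSON shape below are hypothetical stand-ins (the real API is defined by each portal); the snippet builds the request URL and parses a canned response of the assumed shape so it runs offline.

```python
import json
from urllib.parse import urlencode

def build_fetch_url(host, instrument_id, start, end):
    """Assemble the GET URL a client would issue to pull an instrument's
    recent observations from a portal (endpoint layout is an assumption)."""
    query = urlencode({"start": start, "end": end})
    return f"http://{host}/instruments/{instrument_id}.json?{query}"

url = build_fetch_url("chords.example.org", 7,
                      "2015-08-01T00:00:00Z", "2015-08-02T00:00:00Z")

# With `urllib.request.urlopen(url).read()` the body would be JSON; here we
# parse a canned response of the same assumed shape instead of hitting a server.
canned = ('{"data": [{"time": "2015-08-01T00:05:00Z",'
          ' "measurements": {"temp": 21.4}}]}')
records = json.loads(canned)["data"]
```

The same three-line pattern (build URL, GET, parse JSON) is what makes the portal reachable from Python, R, Matlab, shell scripts, or any language with an HTTP library.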
NASA Technical Reports Server (NTRS)
Hussey, K. J.; Hall, J. R.; Mortensen, R. A.
1986-01-01
Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
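The key-frame interpolation performed by the third program can be sketched as simple linear interpolation of each named projection parameter; this is an illustrative reconstruction, not the original VICAR2/TAE FORTRAN, and the parameter names are assumptions.

```python
def interpolate_keyframes(key_a, key_b, n_between):
    """Linearly interpolate every named parameter between two key frames,
    returning the in-between frames (endpoints excluded)."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)          # fraction of the way from a to b
        frames.append({k: (1 - t) * key_a[k] + t * key_b[k] for k in key_a})
    return frames

# Hypothetical perspective-projection parameters for two key frames:
a = {"azimuth": 0.0, "elevation": 30.0, "zoom": 1.0}
b = {"azimuth": 90.0, "elevation": 60.0, "zoom": 2.0}
tween = interpolate_keyframes(a, b, 2)   # two in-between animation frames
```

Rendering one perspective image per interpolated parameter set and transferring the sequence to film yields the animation the abstract describes.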
A platform for real-time online health analytics during spaceflight
NASA Astrophysics Data System (ADS)
McGregor, Carolyn
Monitoring the health and wellbeing of astronauts during spaceflight is an important aspect of any manned mission. To date, monitoring has been based on a sequential set of discontinuous samplings of physiological data to support initial studies on aspects such as weightlessness and its impact on the cardiovascular system, and to perform proactive monitoring of health status. Both the research performed and the real-time monitoring have been hampered by the lack of a platform enabling a more continuous approach to real-time monitoring. While any spaceflight is monitored heavily by Mission Control, an important requirement in any spaceflight setting, and in particular where there are extended periods without communication with Mission Control, is the ability for the mission to operate autonomously. This paper presents a platform to enable real-time astronaut monitoring for prognostics and health management within space medicine using online health analytics. The platform is based on extending previous online health analytics research known as the Artemis and Artemis Cloud platforms, which have demonstrated their relevance for multi-patient, multi-diagnosis and multi-stream temporal analysis in real time for clinical management and research within neonatal intensive care. Artemis and Artemis Cloud source data from a range of medical devices capable of transmitting their signals via wired or wireless connectivity and hence are well suited to processing real-time data acquired from astronauts. A key benefit of this platform is its ability to monitor astronauts' health and wellbeing onboard the mission, as well as to send their physiological and other clinical data to the platform components at Mission Control at each stage when that communication is available.
As a result, researchers at Mission Control would be able to simulate, deploy and tailor predictive analytics and diagnostics during the same spaceflight for greater medical support.
Validating Satellite-Retrieved Cloud Properties for Weather and Climate Applications
NASA Astrophysics Data System (ADS)
Minnis, P.; Bedka, K. M.; Smith, W., Jr.; Yost, C. R.; Bedka, S. T.; Palikonda, R.; Spangenberg, D.; Sun-Mack, S.; Trepte, Q.; Dong, X.; Xi, B.
2014-12-01
Cloud properties determined from satellite imager radiances are increasingly used in weather and climate applications, particularly in nowcasting, model assimilation and validation, trend monitoring, and precipitation and radiation analyses. The value of using the satellite-derived cloud parameters is determined by the accuracy of the particular parameter for a given set of conditions, such as viewing and illumination angles, surface background, and cloud type and structure. Because of the great variety of those conditions and of the sensors used to monitor clouds, determining the accuracy or uncertainties in the retrieved cloud parameters is a daunting task. Sensitivity studies of the retrieved parameters to the various inputs for a particular cloud type are helpful for understanding the errors associated with the retrieval algorithm relative to the plane-parallel world assumed in most of the model clouds that serve as the basis for the retrievals. Real-world clouds, however, rarely fit the plane-parallel mold and generate radiances that likely produce much greater errors in the retrieved parameters than can be inferred from sensitivity analyses. Thus, independent, empirical methods are used to provide a more reliable uncertainty analysis. At NASA Langley, cloud properties have been retrieved from both geostationary (GEO) and low-earth-orbiting (LEO) satellite imagers for climate monitoring and model validation as part of the NASA CERES project since 2000, and from AVHRR data since 1978 as part of the NOAA CDR program. Cloud properties are also being retrieved in near-real time globally from both GEO and LEO satellites for weather model assimilation and nowcasting of hazards such as aircraft icing. This paper discusses the various independent datasets and approaches that are used to assess the imager-based satellite cloud retrievals. These include, but are not limited to, data from ARM sites, CloudSat, and CALIPSO.
This paper discusses the use of the various datasets available, the methods employed to utilize them in the cloud property retrieval validation process, and the results and how they aid future development of the retrieval algorithms. Future needs are also discussed.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. The system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
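The peer-before-disk cache policy described above can be sketched with a small LRU brick cache. This is an illustrative model only (the paper's system is a C++/GPU implementation with asynchronous I/O); the fetch functions below are hypothetical stand-ins for the network and disk paths.

```python
from collections import OrderedDict

class BrickCache:
    """LRU cache for volume bricks that, on a miss, asks peer nodes for the
    page before falling back to (slower) local disk, mirroring the paper's
    distributed-shared-memory lookup order."""

    def __init__(self, capacity, fetch_from_peer, fetch_from_disk):
        self.capacity = capacity
        self.fetch_from_peer = fetch_from_peer   # returns brick data or None
        self.fetch_from_disk = fetch_from_disk
        self.bricks = OrderedDict()

    def get(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)    # mark most recently used
            return self.bricks[brick_id]
        data = self.fetch_from_peer(brick_id)    # try remote residency first
        if data is None:
            data = self.fetch_from_disk(brick_id)
        self.bricks[brick_id] = data
        if len(self.bricks) > self.capacity:
            self.bricks.popitem(last=False)      # evict least recently used
        return data

# Stand-in fetchers: pretend peers hold even-numbered bricks only.
calls = {"peer": 0, "disk": 0}
def peer(b):
    calls["peer"] += 1
    return f"brick-{b}" if b % 2 == 0 else None
def disk(b):
    calls["disk"] += 1
    return f"brick-{b}"

cache = BrickCache(2, peer, disk)
first = cache.get(2)    # miss: served by a peer
second = cache.get(3)   # miss: peer declines, falls back to disk
again = cache.get(2)    # hit: no fetch at all
```

The reported 4x speedup comes precisely from how often the peer path answers before the disk path is needed.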
A Regional Real-time Forecast of Marine Boundary Layers During VOCALS-REx
2011-01-01
Fujita, Hideo; Uchimura, Yuji; Waki, Kayo; Omae, Koji; Takeuchi, Ichiro; Ohe, Kazuhiko
2013-01-01
To improve emergency services for accurate diagnosis of cardiac emergencies, we developed a new low-cost mobile electrocardiography system, "Cloud Cardiology®", based upon cloud computing for prehospital diagnosis. It comprises a compact 12-lead ECG unit equipped with Bluetooth and an Android smartphone with an application for transmission. A cloud server enables us to share ECGs simultaneously inside and outside the hospital. We evaluated the clinical effectiveness by conducting a clinical trial with historical comparison, deploying the system in a rapid response car in real emergency service settings. We found that this system has the ability to shorten the onset-to-balloon time of patients with acute myocardial infarction, resulting in better clinical outcomes. Here we propose that cloud-computing-based simultaneous data sharing could be a powerful solution for emergency cardiology services, with significant clinical benefit.
NASA Technical Reports Server (NTRS)
Wiscombe, W.
1999-01-01
The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of it; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo models of shortwave radiative transfer in 3D clouds. The consequences are considered in two areas: the design of aircraft field programs to measure cloud absorptance, and the explanation of the famous "Landsat scale break" in measured radiance.
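The structure function named above is one of the simplest multifractal statistics: S_q(r) = ⟨|f(x+r) − f(x)|^q⟩ as a function of the lag r. A minimal sketch over a 1D series (a smooth linear signal gives the exactly linear scaling S_1(r) ∝ r, a useful sanity check; real cloud liquid water records show anomalous scaling exponents):

```python
import numpy as np

def structure_function(f, lags, q=1):
    """q-th order structure function S_q(r) = <|f(x+r) - f(x)|^q>
    of a 1-D data series, evaluated at the given integer lags."""
    return np.array([np.mean(np.abs(f[r:] - f[:-r]) ** q) for r in lags])

x = np.linspace(0.0, 1.0, 1001)          # smooth linear test signal
lags = np.array([1, 2, 4, 8])
s1 = structure_function(x, lags, q=1)    # linear signal: S_1(r) grows as r
```

Fitting log S_q(r) against log r over a range of q is what distinguishes a simple fractal from a multifractal signal.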
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides resources from private clouds using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services (AWS), allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating to this platform, including CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without worrying about the heterogeneity in structure and operations among the different cloud platforms.
Sea surface temperature measurements with AIRS
NASA Technical Reports Server (NTRS)
Aumann, H.
2003-01-01
The comparison of global sea surface skin temperature derived from the cloud-free AIRS super window channel at 2616 cm-1 (sst2616) with the Real-Time Global Sea Surface Temperature for September 2002 shows a surprisingly small standard deviation of 0.44 K.
Automated cloud classification using a ground based infra-red camera and texture analysis techniques
NASA Astrophysics Data System (ADS)
Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.
2013-10-01
Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
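The weighted k-nearest-neighbour classifier described above can be sketched with the common inverse-distance weighting scheme. This is a generic illustration, not the authors' trained model; the toy two-dimensional feature vectors below stand in for the 45 texture features extracted from the infrared images.

```python
import numpy as np

def weighted_knn(train_x, train_y, query, k=3, eps=1e-9):
    """Classify `query` by an inverse-distance-weighted vote among the k
    nearest training samples (one common weighting choice for KNN)."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for i in nearest:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)

# Toy two-class feature vectors standing in for the 45 texture features:
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0], [0.9, 1.0]])
y = ["CB", "CB", "CB", "TCU", "TCU"]
label = weighted_knn(X, y, np.array([0.05, 0.05]), k=3)
```

The inverse-distance weights let close neighbours dominate the vote, which typically improves robustness near class boundaries compared with an unweighted majority vote.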
CATS Cloud-Aerosol Products and Near Real Time Capabilities
NASA Astrophysics Data System (ADS)
Nowottnick, E. P.; Yorks, J. E.; McGill, M. J.; Palm, S. P.; Hlavka, D. L.; Selmer, P. A.; Rodier, S. D.; Vaughan, M. A.
2016-12-01
The Cloud-Aerosol Transport System (CATS) is a backscatter lidar that is designed to demonstrate technologies in space for future Earth Science missions. CATS is located on the International Space Station (ISS), where it has been operating semi-continuously since February 2015. CATS provides observations of cloud and aerosol vertical profiles similar to CALIPSO, but with more comprehensive coverage of the tropics and mid-latitudes due to the ISS orbit properties. Additionally, the ISS orbit permits the study of diurnal variability of clouds and aerosols. CATS data has applications for identifying cloud phase and aerosol types. Analysis of recent Level 2 data reveals several biases in cloud and aerosol layer detection and identification, as well as in retrievals of optical properties, which will be corrected in the next version to be released in late 2016. With a data latency of less than 6 hours, CATS data is also being used for forecasting of volcanic plume transport, experimental data assimilation into aerosol transport models (GEOS-5, NAAPS), and field campaign flight planning (KORUS-AQ, ORACLES).
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
A Web service-based architecture for real-time hydrologic sensor networks
NASA Astrophysics Data System (ADS)
Wong, B. P.; Zhao, Y.; Kerkez, B.
2014-12-01
Recent advances in web services and cloud computing provide new means by which to process and respond to real-time data. This is particularly true of platforms built for the Internet of Things (IoT). These enterprise-scale platforms have been designed to exploit the IP-connectivity of sensors and actuators, providing a robust means by which to route real-time data feeds and respond to events of interest. While powerful and scalable, these platforms have yet to be adopted by the hydrologic community, where the value of real-time data impacts both scientists and decision makers. We discuss the use of one such IoT platform for the purpose of large-scale hydrologic measurements, showing how rapid deployment and ease-of-use allows scientists to focus on their experiment rather than software development. The platform is hardware agnostic, requiring only IP-connectivity of field devices to capture, store, process, and visualize data in real-time. We demonstrate the benefits of real-time data through a real-world use case by showing how our architecture enables the remote control of sensor nodes, thereby permitting the nodes to adaptively change sampling strategies to capture major hydrologic events of interest.
Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.
Friston, Sebastian; Steed, Anthony; Tilbury, Simon; Gaydadjiev, Georgi
2016-04-01
Latency - the delay between a user's action and the response to this action - is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space - but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high and low speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, more faithfully reproduces what the ideal virtual reality system would draw. Further, we find that low display persistence lowers the velocity sensitivity of both systems, but far more so for ours.
NASA Astrophysics Data System (ADS)
Carazzo, G.; Jellinek, M.
2010-12-01
The prolonged disruption of global air travel as a result of the 2010 Eyjafjöll eruption in Iceland underscores the value of discerning the dynamics of volcanic ash-clouds in the atmosphere. Understanding the longevity of these clouds is a particularly long standing problem that bears not only on volcanic hazards to humans but also on the nature and time scale of volcanic forcings on climate change. Since early work on the subject, the common practice to tackle the problem of cloud longevity has been to account for the dynamics of sedimentation by individual particle settling. We use 1D modeling and analog experiments of a turbulent particle-laden umbrella cloud to show that this classical view can be misleading and that the residence times of these ash-clouds in the atmosphere depend strongly on the collective behavior of the solid fraction. Diffusive convection driven by the differential diffusion of constituents altering the cloud density (ash, temperature, sulfur dioxide) may enhance particle scavenging and extend the cloud longevity over time scales orders of magnitude longer than currently expected (i.e., years rather than days for powerful eruptions). Records of this behavior can be found in real-time measurements of stratospheric post-volcanic aerosols following the 1974 Fuego, the 1982 El Chichon, the 1991 Hudson and Pinatubo events, and more recently, from the 14 April 2010 Eyjafjöll eruption. The importance of diffusive convection in volcanic ash-clouds depends strongly on particle size distribution and concentration. For the 2010 Eyjafjöll eruption, we predict that particles larger than 10 microns should settle individually as commonly assumed, but particles smaller than 1 micron should diffuse slowly in layers, extending the cloud longevity to several weeks rather than days.
These predictions are found to be in good agreement with a number of satellite and ground-based lidar data on ash size and mass estimates performed at different locations across Europe.
Augmented Reality Comes to Physics
NASA Astrophysics Data System (ADS)
Buesing, Mark; Cook, Michael
2013-04-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS, as do mobile apps such as Tagwhat and Star Chart (a must for astronomy class). The yellow line marking first downs in a televised football game and the enhanced puck that makes televised hockey easier to follow both use augmented reality to do the job.
NASA Astrophysics Data System (ADS)
Eilers, J.
2013-09-01
The interface analysis from an observer of space objects makes a standard necessary. This standardized dataset serves as input for a cloud-based service aimed at a near real-time Space Situational Awareness (SSA) system. The system offers all the advantages of a cloud-based solution, such as redundancy, scalability and easy distribution of information. Based on the interface analysis of the observer, the information in the standard can be separated into three parts: information about the observer, e.g. a ground station; information about the sensors used by the observer; and the data from the detected object. The backbone of the SSA system is the cloud-based service, which includes the consistency check for the observed objects, a database for the objects, the algorithms and analysis, as well as the visualization of the results. This paper also provides an approximation of the needed computational power and data storage, and a financial approach for delivering this service to a broad community. In this context, cloud means that neither the user nor the observer has to think about the infrastructure of the calculation environment. The decision whether the IT infrastructure will be built by a conglomerate of different nations or rented on the market should be based on an efficiency analysis. Combinations are also possible, such as starting on a rented cloud and then moving to a private cloud owned by the government. One of the advantages of a cloud solution is scalability. There are about 3,000 satellites in space, 900 of them active, and in total about 17,000 detected space objects orbiting Earth. For the computation, however, it is not a full N(active)-to-N(all) problem: screening pairs by apogee and perigee reduces the roughly 15.3 million possible collision pairs to only about 2.3 million that must actually be computed.
In general, this Space Situational Awareness system can be used as a tool by satellite system owners for collision avoidance.
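The pair-count reduction described in this abstract rests on a simple geometric screen: an active satellite and a catalogued object can only collide if their altitude bands (perigee to apogee) overlap. A minimal sketch of that screening, with randomly generated illustrative orbit bands (the real catalogue would supply measured elements):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_bands(n):
    """Illustrative catalogue: (perigee, apogee) altitude bands in km."""
    perigee = rng.uniform(300, 2000, n)
    apogee = perigee + rng.uniform(0, 400, n)
    return perigee, apogee

active_p, active_a = make_bands(900)      # active satellites
cat_p, cat_a = make_bands(17000)          # all catalogued objects

naive = 900 * 17000                       # 15.3 million candidate pairs
# two altitude bands can intersect only if each perigee lies below the other's apogee
overlap = (active_p[:, None] <= cat_a[None, :]) & (cat_p[None, :] <= active_a[:, None])
print(naive, int(overlap.sum()))          # screened count is far below the naive 15.3 M
```

With measured orbits, this kind of apogee/perigee filter is what shrinks the workload from the naive 15.3 million pairs toward the ~2.3 million the paper cites; the exact surviving fraction depends on the real altitude distribution, not the uniform one assumed here.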
Real Time Urban Acoustics Using Commercial Technologies
2011-08-01
delays, and rendering for binaural or surround sound display [2]. VibeStudio does not include propagation effects of reflections, diffusion, or...available for rendering both binaural headphone displays as well as standard and arbitrary surround sound formats. For this reason, minimal detail is...provided in this paper and the reader is referred to [2]. An image illustrating a binaural display scenario and a typical surround sound setup are
Transgenic Arabidopsis Gene Expression System
NASA Technical Reports Server (NTRS)
Ferl, Robert; Paul, Anna-Lisa
2009-01-01
The Transgenic Arabidopsis Gene Expression System (TAGES) investigation is one in a pair of investigations that use the Advanced Biological Research System (ABRS) facility. TAGES uses Arabidopsis thaliana, thale cress, with sensor promoter-reporter gene constructs that render the plants as biomonitors (an organism used to determine the quality of the surrounding environment) of their environment using real-time nondestructive Green Fluorescent Protein (GFP) imagery and traditional postflight analyses.
Color rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-02-01
Human perception of material colors depends heavily on the nature of the light sources used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. Based on state-of-the-art colorimetric methods we present a modern approach for calculating color rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: firstly, we use the CIELAB color space, secondly, we apply a Bradford transformation for chromatic adaptation, and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda have shown for the cube model that interreflections can influence the CRI of an illuminant. We analyze how color rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB based renderer for reasons of accuracy of CRI calculations.
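Both the old and the updated CRI methods boil down to a color-difference distance between test-lamp and reference-illuminant CIELAB coordinates. A minimal sketch using the simpler CIE76 Euclidean distance; the CIEDE2000 formula the paper adopts adds lightness, chroma, and hue weighting terms omitted here, and the Lab values below are illustrative:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB (L*, a*, b*) space.
    CIEDE2000, used in the paper, refines this with perceptual weightings."""
    return math.dist(lab1, lab2)

# the same sample patch under the reference illuminant vs. under a test lamp
reference = (52.0, 42.1, 20.9)   # illustrative L*, a*, b*
test_lamp = (50.5, 44.0, 19.2)
print(delta_e_76(reference, test_lamp))
```

A CRI-style score then aggregates such per-sample differences over a fixed set of test color samples: the smaller the average shift, the better the lamp's color rendering.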
NASA Astrophysics Data System (ADS)
Turpin, B. J.; Ramos, A.; Kirkland, J. R.; Lim, Y. B.; Seitzinger, S.
2011-12-01
There is considerable laboratory and field-based evidence that chemical processing in clouds and wet aerosols alters organic composition and contributes to the formation of secondary organic aerosol (SOA). Single-compound laboratory experiments have played an important role in developing aqueous-phase chemical mechanisms that aid prediction of SOA formation through multiphase chemistry. In this work we conduct similar experiments with cloud/fog water surrogates, to 1) evaluate to what extent the previously studied chemistry is observed in these more realistic atmospheric waters, and 2) to identify additional atmospherically-relevant precursors and products that require further study. We used filtered Camden and Pinelands, NJ rainwater as a surrogate for cloud water. OH radical (~10^-12 M) was formed by photolysis of hydrogen peroxide and samples were analyzed in real-time by electrospray ionization mass spectrometry (ESI-MS). Discrete samples were also analyzed by ion chromatography (IC) and ESI-MS after IC separation. All experiments were performed in duplicate. Standards of glyoxal, methylglyoxal and glycolaldehyde and their major aqueous oxidation products were also analyzed, and control experiments performed. Decreases in the ion abundance of many positive mode compounds and increases in the ion abundance of many negative mode compounds (e.g., organic acids) suggest that precursors are predominantly aldehydes, organic peroxides and/or alcohols. Real-time ESI mass spectra were consistent with the expected loss of methylglyoxal and subsequent formation of pyruvate, glyoxylate, and oxalate. New insights regarding other potential precursors and products will be provided.
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and evaluation service is introduced in this paper.
Tracking Cloud Motion and Deformation for Short-Term Photovoltaic Power Forecasting
NASA Astrophysics Data System (ADS)
Good, Garrett; Siefert, Malte; Fritz, Rafael; Saint-Drenan, Yves-Marie; Dobschinski, Jan
2016-04-01
With the increasing role of photovoltaic power production, the need to accurately forecast and anticipate weather-driven elements like cloud cover has become ever more important. Of particular concern is forecasting in the short term (up to several hours), for which the most recent full weather simulation may no longer provide the most accurate information in light of real-time satellite measurements. We discuss the application of the image correlation velocimetry technique described by Tokumaru & Dimotakis (1995) (for calculating flow fields from images) to measure deformations of various orders based on recent satellite imagery, with the goal of not only more accurately forecasting the advection of cloud structures, but their continued deformation as well.
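At the core of image correlation velocimetry is estimating the displacement that best aligns two successive images. A common FFT-based variant of that zeroth-order step is phase correlation, sketched below on a synthetic frame pair; the Tokumaru & Dimotakis method additionally recovers higher-order deformation (rotation, strain), which this sketch does not attempt:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation taking image b to image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized cross-power spectrum
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    n, m = a.shape
    # wrap shifts into the signed range [-n/2, n/2)
    return (int((dy + n // 2) % n - n // 2), int((dx + m // 2) % m - m // 2))

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                   # synthetic "cloud field"
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))   # advected by (3, 5) pixels
print(phase_correlation_shift(frame2, frame1))  # → (3, 5)
```

Applied tile-by-tile over consecutive satellite images, such shift estimates yield a motion field whose extrapolation forecasts where cloud structures will be in the next hours.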
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes a tenth of the memory of a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer.
Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
CATS Near Real Time Data Products: Applications for Assimilation Into the NASA GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Hlavka, D. L.; Nowottnick, E. P.; Yorks, J. E.; Da Silva, A.; McGill, M. J.; Palm, S. P.; Selmer, P. A.; Pauly, R. M.; Ozog, S.
2017-01-01
From February 2015 through October 2017, the NASA Cloud-Aerosol Transport System (CATS) backscatter lidar operated on the International Space Station (ISS) as a technology demonstration for future Earth Science Missions, providing vertical measurements of cloud and aerosols properties. Owing to its location on the ISS, a cornerstone technology demonstration of CATS was the capability to acquire, process, and disseminate near-real time (NRT) data within 6 hours of observation time. CATS NRT data has several applications, including providing notification of hazardous events for air traffic control and air quality advisories, field campaign flight planning, as well as constraining cloud and aerosol distributions via data assimilation in aerosol transport models. Recent developments in aerosol data assimilation techniques have permitted the assimilation of aerosol optical thickness (AOT), a 2-dimensional column integrated quantity that is reflective of the simulated aerosol loading in aerosol transport models. While this capability has greatly improved simulated AOT forecasts, the vertical position, a key control on aerosol transport, is often not impacted when 2-D AOT is assimilated. Here, we present preliminary efforts to assimilate CATS aerosol observations into the NASA Goddard Earth Observing System version 5 (GEOS-5) atmospheric general circulation model and assimilation system using a 1-D Variational (1-D VAR) ensemble approach, demonstrating the utility of CATS for future Earth Science Missions.
A GPU-based mipmapping method for water surface visualization
NASA Astrophysics Data System (ADS)
Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan
2018-03-01
Visualization of water surface is a hot topic in computer graphics. In this paper, we presented a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. This method utilized a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping was applied to the surface textures, adjusting the resolution according to distance from the viewpoint and reducing the computing cost. The lighting effect was computed based on shadow mapping, Snell's law and the Fresnel term. The render pipeline utilizes a CPU-GPU shared memory structure, which improves the rendering efficiency. Experiment results show that our approach visualizes water surface with good image quality at real-time frame rates.
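The Fresnel term mentioned in this abstract governs how much light a water surface reflects versus refracts at a given viewing angle. In real-time rendering it is commonly approximated with Schlick's formula; the sketch below uses that approximation for an air-water interface (the paper does not state which exact form it uses):

```python
import math

def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick approximation of Fresnel reflectance at an air-water interface.
    cos_theta is the cosine of the angle between view direction and surface normal."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2          # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# grazing angles reflect far more than head-on viewing
print(round(schlick_fresnel(1.0), 3))   # looking straight down: ~0.02
print(round(schlick_fresnel(0.1), 3))   # near-grazing: most light is reflected
```

In a shader this factor blends the reflected environment color with the refracted underwater color per pixel, which is what gives distant water its mirror-like look.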
An Augmented Reality Nanomanipulator for Learning Nanophysics: The "NanoLearner" Platform
NASA Astrophysics Data System (ADS)
Marchi, Florence; Marliere, Sylvain; Florens, Jean Loup; Luciani, Annie; Chevrier, Joel
The work focuses on the description and evaluation of an augmented reality nanomanipulator, called the "NanoLearner" platform, used as an educational tool in practical works of nanophysics. Through virtual reality associated with multisensory renderings, students are immersed in the nanoworld, where they can interact in real time with a sample surface or an object using their senses of hearing, sight and touch. The role of each sensorial rendering in the understanding and control of the "approach-retract" interaction has been determined thanks to statistical studies obtained during the practical works. Finally, we present two extensions of the use of this innovative tool: investigating nano effects in living organisms, and giving the general public access to a natural understanding of nanophenomena.
Capturing and analyzing wheelchair maneuvering patterns with mobile cloud computing.
Fu, Jicheng; Hao, Wei; White, Travis; Yan, Yuqing; Jones, Maria; Jan, Yih-Kuen
2013-01-01
Power wheelchairs have been widely used to provide independent mobility to people with disabilities. Despite great advancements in power wheelchair technology, research shows that wheelchair related accidents occur frequently. To ensure safe maneuverability, capturing wheelchair maneuvering patterns is fundamental to enable other research, such as safe robotic assistance for wheelchair users. In this study, we propose to record, store, and analyze wheelchair maneuvering data by means of mobile cloud computing. Specifically, the accelerometer and gyroscope sensors in smart phones are used to record wheelchair maneuvering data in real-time. Then, the recorded data are periodically transmitted to the cloud for storage and analysis. The analyzed results are then made available to various types of users, such as mobile phone users, traditional desktop users, etc. The combination of mobile computing and cloud computing leverages the advantages of both techniques and extends the smart phone's capabilities of computing and data storage via the Internet. We performed a case study to implement the mobile cloud computing framework using Android smart phones and Google App Engine, a popular cloud computing platform. Experimental results demonstrated the feasibility of the proposed mobile cloud computing framework.
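The pattern described in this abstract, sampling motion sensors locally and periodically pushing batches to cloud storage, can be sketched as below. The class names and batch size are illustrative; the study itself used Android phone sensors and Google App Engine, and a real client would POST each batch over HTTPS rather than keep it in memory:

```python
import json
import time
from collections import deque

class SensorBatcher:
    """Buffer accelerometer samples locally; flush a batch periodically.
    Illustrative sketch of the record-then-upload loop, not the study's code."""

    def __init__(self, flush_every=10):
        self.buf = deque()
        self.flush_every = flush_every
        self.sent = []                      # stands in for the cloud endpoint

    def record(self, ax, ay, az):
        self.buf.append({"t": time.time(), "accel": [ax, ay, az]})
        if len(self.buf) >= self.flush_every:
            self.flush()

    def flush(self):
        # in the real system: serialize and transmit to the cloud for analysis
        self.sent.append(json.dumps(list(self.buf)))
        self.buf.clear()

b = SensorBatcher(flush_every=3)
for sample in [(0.1, 0.0, 9.8), (0.2, 0.1, 9.7), (0.0, 0.0, 9.8)]:
    b.record(*sample)
print(len(b.sent))  # → 1 batch uploaded
```

Batching amortizes network cost and lets the phone keep recording while the cloud side runs the heavier maneuvering-pattern analysis asynchronously.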
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model ensures the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model enhance retrieval quality. The template-based visualization method enhances availability and universality, with medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios.
Architecture for high performance stereoscopic game rendering on Android
NASA Astrophysics Data System (ADS)
Flack, Julien; Sanderson, Hugh; Shetty, Sampath
2014-03-01
Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.
State of the "art": a taxonomy of artistic stylization techniques for images and video.
Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias
2013-05-01
This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.
Estimating Water Levels with Google Earth Engine
NASA Astrophysics Data System (ADS)
Lucero, E.; Russo, T. A.; Zentner, M.; May, J.; Nguy-Robertson, A. L.
2016-12-01
Reservoirs serve multiple functions and are vital for storage, electricity generation, and flood control. For many areas, traditional ground-based reservoir measurements may not be available or data dissemination may be problematic. Consistent monitoring of reservoir levels in data-poor areas can be achieved through remote sensing, providing information to researchers and the international community. Estimates of trends and relative reservoir volume can be used to identify water supply vulnerability, anticipate low power generation, and predict flood risk. Image processing with automated cloud computing provides opportunities to study multiple geographic areas in near real-time. We demonstrate the prediction capability of a cloud environment for identifying water trends at reservoirs in the US, and then apply the method to data-poor areas in North Korea, Iran, Azerbaijan, Zambia, and India. The Google Earth Engine cloud platform hosts remote sensing data and can be used to automate reservoir level estimation with multispectral imagery. We combine automated cloud-based analysis from Landsat image classification to identify reservoir surface area trends and radar altimetry to identify reservoir level trends. The study estimates water level trends using three years of data from four domestic reservoirs to validate the remote sensing method, and five foreign reservoirs to demonstrate the method application. We report correlations between ground-based reservoir level measurements in the US and our remote sensing methods, and correlations between the cloud analysis and altimetry data for reservoirs in data-poor areas. The availability of regular satellite imagery and an automated, near real-time application method provides the necessary datasets for further temporal analysis, reservoir modeling, and flood forecasting. 
All statements of fact, analysis, or opinion are those of the author and do not reflect the official policy or position of the Department of Defense or any of its components or the U.S. Government.
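One standard way to derive reservoir surface area from multispectral imagery, as the classification step above describes, is a water-index threshold. A minimal NDWI (normalized difference water index) sketch over per-pixel band values; the threshold and reflectances in the test are illustrative:

```python
def ndwi(green, nir):
    """Normalized difference water index from green and near-infrared reflectance."""
    total = green + nir
    return (green - nir) / total if total else 0.0


def water_fraction(green_band, nir_band, threshold=0.0):
    """Fraction of pixels classified as water (NDWI above the threshold)."""
    pixels = list(zip(green_band, nir_band))
    water = sum(1 for g, n in pixels if ndwi(g, n) > threshold)
    return water / len(pixels)
```

Multiplying the water fraction by the scene footprint gives a surface-area estimate whose trend can be compared against radar altimetry, as in the study.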
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics offer only the simplest of cloud representations. The problem this work addresses is how to simulate clouds and weather features such as precipitation in a way that is suitable for virtual environments. Techniques for cloud simulation are available within meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than graphics requires and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge, then, is for the solution to be computationally lightweight and directable in some measure while still producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters, and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
A Web-based virtual tour has become a desirable and in-demand application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos so that virtual scenes can be generated directly from photographs, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The Web-based Virtual Tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots of conventional photos.
Tangible display systems: bringing virtual surfaces into the real world
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2012-03-01
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
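The realistic change in surface lighting as the device is tilted can be illustrated with a Lambertian diffuse term. This is a minimal sketch: the tilt-to-normal mapping and the fixed light direction are simplifying assumptions, not the system's actual environment-mapped shading:

```python
import math


def tilted_normal(tilt_deg):
    """Surface normal after tilting the device about its horizontal axis."""
    t = math.radians(tilt_deg)
    return (0.0, math.sin(t), math.cos(t))


def diffuse_shade(normal, light_dir):
    """Lambertian term: clamped dot product of unit normal and light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

As the tracked orientation changes frame to frame, re-evaluating this term (per pixel, with the full material model) is what produces the natural relighting the abstract describes.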
Spectral Cloud-Filtering of AIRS Data: Non-Polar Ocean
NASA Technical Reports Server (NTRS)
Aumann, Hartmut H.; Gregorich, David; Barron, Diana
2004-01-01
The Atmospheric Infrared Sounder (AIRS) is a grating array spectrometer that covers the thermal infrared spectral range between 640 and 1700 cm⁻¹. In order to retain the maximum radiometric accuracy of the AIRS data, the effects of cloud contamination have to be minimized. We discuss cloud filtering which uses the high spectral resolution of AIRS to identify about 100,000 of 500,000 non-polar ocean spectra per day as relatively "cloud-free". Based on the comparison of surface channels with the NCEP-provided global real-time SST (rtg.sst), AIRS surface-sensitive channels have a cold bias ranging from 0.5 K during the day to 0.8 K during the night. Day and night spatial coherence tests show that the cold bias is due to cloud contamination. During the day the cloud contamination is due to a 2-3% broken cloud cover at 1-2 km altitude, characteristic of low stratus clouds. The cloud contamination affects surface-sensitive channels only. Cloud contamination can be reduced to 0.2 K by combining the spectral filter with a spatial coherence threshold, but the yield drops to 16,000 spectra per day. AIRS was launched in May 2002 on the Earth Observing System (EOS) Aqua satellite. Since September 2002 it has returned 4 million spectra of the globe each day.
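The combined spectral and spatial-coherence test can be sketched as follows. The thresholds mirror the bias figures quoted above, but the actual AIRS filtering uses many channels and more elaborate tests; this is only a schematic:

```python
def is_cloud_free(bt_window_k, sst_ref_k, neighbors_k,
                  bias_max=0.5, coherence_max=0.2):
    """Schematic two-stage cloud filter for one ocean footprint.

    bt_window_k  -- brightness temperature of a surface window channel (K)
    sst_ref_k    -- reference sea-surface temperature, e.g. rtg.sst (K)
    neighbors_k  -- window-channel BTs of neighboring footprints (K)
    """
    # Spectral test: clouds make the window channel colder than the SST.
    if sst_ref_k - bt_window_k > bias_max:
        return False
    # Spatial coherence test: broken clouds make nearby footprints disagree.
    mean = sum(neighbors_k) / len(neighbors_k)
    spread = max(abs(t - mean) for t in neighbors_k)
    return spread <= coherence_max
```

Tightening `coherence_max` reduces residual contamination at the cost of yield, which is exactly the 100,000-to-16,000 trade-off the abstract reports.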
Applications of ISES for meteorology
NASA Technical Reports Server (NTRS)
Try, Paul D.
1990-01-01
The results are summarized from an initial assessment of the potential real-time meteorological requirements for the data from Eos systems. Eos research scientists associated with facility instruments, investigator instruments, and interdisciplinary groups with data related to meteorological support were contacted, along with those from the normal operational user and technique development groups. Two types of activities indicated the greatest need for real-time Eos data: technology transfer groups (e.g., NOAA's Forecasting System Laboratory and the DOD development laboratories), and field testing groups with airborne operations. A special concern was expressed by several non-U.S. participants who desire a direct downlink to be sure of rapid receipt of the data for their area of interest. Several potential experiments or demonstrations are recommended for ISES which include support for hurricane/typhoon forecasting, space shuttle reentry, severe weather forecasting (using microphysical cloud classification techniques), field testing, and quick reaction of instrumented aircraft to measure such events as polar stratospheric clouds and volcanic eruptions.
Real-time full-motion color Flash lidar for target detection and identification
NASA Astrophysics Data System (ADS)
Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt
2015-05-01
Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated, fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera and effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.
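Interpolating LiDAR pixels up to the context-camera resolution can be illustrated with the simplest possible scheme. This nearest-neighbour sketch stands in for the (unspecified) interpolation the system actually uses; a real implementation would at least interpolate bilinearly:

```python
def upsample_depth(depth, factor):
    """Nearest-neighbour upsampling of a LiDAR depth grid by an integer factor.

    depth  -- 2D list of range values from the LiDAR array sensor
    factor -- ratio of context-camera to LiDAR resolution
    """
    out = []
    for row in depth:
        wide = [v for v in row for _ in range(factor)]  # repeat along x
        out.extend([wide] * factor)                     # repeat along y
    return out
```

Once the two grids share a resolution, each context-camera pixel can be paired with a range value, producing the HD fused point cloud described above.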
Real-Time View Correction for Mobile Devices.
Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc
2017-11-01
We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
Because 3D medical data visualization involves huge amounts of computation, exploring the interior of a dataset interactively has long been a problem to be resolved. In this paper, we present a novel approach to exploring 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. It will be a valuable tool in anatomy education and in understanding medical images in medical research.
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.
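The box-filter smoothing step can be sketched directly. This CPU reference version mirrors what a GPU kernel would compute per pixel over the binary light-visibility map (each output texel becomes a fractional visibility, i.e. a soft shadow value):

```python
def box_filter(grid, radius=1):
    """Average each cell of a 2D grid over a (2*radius+1)^2 window.

    Applied to a binary light-visibility map, the fractional results
    soften hard shadow boundaries.
    """
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [grid[cy][cx]
                    for cy in range(max(0, y - radius), min(h, y + radius + 1))
                    for cx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)  # window shrinks at the borders
    return out
```

On the GPU, each output texel is computed by an independent thread, which is what makes the filtering pass parallel.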
Implementation of a gust front head collapse scheme in the WRF numerical model
NASA Astrophysics Data System (ADS)
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2018-05-01
Gust fronts are thunderstorm-related phenomena usually associated with severe winds which are of great importance in theoretical meteorology, weather forecasting, cloud dynamics and precipitation, and wind engineering. An important feature of gust fronts demonstrated through both theoretical and observational studies is the periodic collapse and rebuild of the gust front head. This cyclic behavior of gust fronts results in periodic forcing of vertical velocity ahead of the parent thunderstorm, which consequently influences the storm dynamics and microphysics. This paper introduces the first gust front pulsation parameterization scheme in the WRF-ARW model (Weather Research and Forecasting-Advanced Research WRF). The influence of this new scheme on model performance is tested through investigation of the characteristics of an idealized supercell cumulonimbus cloud, as well as through a real case of thunderstorms above the United Arab Emirates. In the ideal case, WRF with the gust front scheme produced more precipitation and showed different time evolution of the mixing ratios of cloud water and rain, whereas the mixing ratios of ice and graupel are almost unchanged when compared to the default WRF run without the parameterization of gust front pulsation. The included parameterization did not disturb the general characteristics of the thunderstorm cloud, such as the location of updrafts and downdrafts, and the overall shape of the cloud. New cloud cells in front of the parent thunderstorm are also evident in both ideal and real cases due to the included forcing of vertical velocity caused by the periodic collapse of the gust front head. Despite some differences between the two WRF simulations and satellite observations, the inclusion of the gust front parameterization scheme produced more cumuliform clouds and matched the real observations more closely.
Both WRF simulations gave poor results when it comes to matching the maximum composite radar reflectivity from radar measurement. Similar to the ideal case, WRF model with the gust front scheme gave more precipitation than the default WRF run. In particular, the gust front scheme increased the area characterized with light precipitation and diminished the development of very localized and intense precipitation.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm produces realistic-looking objects without any noteworthy increase in computational cost.
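The Phong illumination model the authors integrate combines diffuse and specular terms. A minimal sketch of the per-point shading (the coefficients are illustrative material parameters, and the ambient term is omitted):

```python
def phong_intensity(n_dot_l, r_dot_v, kd=0.7, ks=0.3, shininess=16):
    """Phong shading for one point.

    n_dot_l -- dot product of surface normal and light direction
    r_dot_v -- dot product of reflected light direction and view direction
    kd, ks  -- diffuse and specular reflectance (illustrative values)
    """
    diffuse = kd * max(0.0, n_dot_l)
    specular = ks * (max(0.0, r_dot_v) ** shininess)
    return diffuse + specular
```

In a point-cloud CGH pipeline, an intensity like this would modulate each point's amplitude before its contribution to the hologram is accumulated.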
Speeding Up Geophysical Research Using Docker Containers Within Multi-Cloud Environment.
NASA Astrophysics Data System (ADS)
Synytsky, R.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.; Starovoit, Y. O.
2016-12-01
How useful are geophysical observations for minimizing losses from natural disasters today? Do they help decrease the number of human victims during tsunamis and earthquakes? Unfortunately, their use is still at an early stage. Making such observations more useful by improving early warning and prediction systems with the help of cloud computing is a major goal. Cloud computing technologies have proved their ability to speed up application development in many areas for over a decade. The cloud unlocks new opportunities for geoscientists by providing access to modern data processing tools and algorithms, including real-time high-performance computing, big data processing, artificial intelligence and others. Emerging lightweight cloud technologies, such as Docker containers, are gaining wide traction in IT because they enable faster and more efficient deployment of applications in a cloud environment. They allow geophysical applications and systems to be deployed and managed in minutes across multiple clouds and data centers, which is of utmost importance for the next generation of applications. In this session we will demonstrate how Docker container technology within a multi-cloud environment can accelerate the development of applications specifically designed for geophysical research.
Verifying Air Force Weather Passive Satellite Derived Cloud Analysis Products
NASA Astrophysics Data System (ADS)
Nobis, T. E.
2017-12-01
Air Force Weather (AFW) has developed an hourly World-Wide Merged Cloud Analysis (WWMCA) using imager data from 16 geostationary and polar-orbiting satellites. The analysis product contains information on cloud fraction, height, type and various optical properties including optical depth and integrated water path. All of these products are derived using a suite of algorithms which rely exclusively on passively sensed data from short-, mid- and long-wave imagers. The system integrates satellites with a wide range of capabilities, from the relatively simple two-channel OLS imager to the 16-channel ABI/AHI, to create a seamless global analysis in real time. Over the last couple of years, AFW has started utilizing independent verification data from actively sensed cloud measurements to better understand the performance limitations of the WWMCA. Sources utilized include space-based lidars (CALIPSO, CATS) and radar (CloudSat) as well as ground-based lidars from the Department of Energy ARM sites and several European cloud radars. This work will present findings from our efforts to compare actively and passively sensed cloud information, including comparison techniques and limitations as well as the performance of the passively derived cloud information against the active measurements.
Farming the Tropics: Visualizing Landscape Changes Through the Clouds, in the Cloud
NASA Astrophysics Data System (ADS)
Kontgis, C.; Brumby, S. P.; Chartrand, R.; Franco, E.; Keisler, R.; Kelton, T.; Mathis, M.; Moody, D.; Raleigh, D.; Rudelis, X.; Skillman, S.; Warren, M. S.
2016-12-01
A key component of studying land cover and land use change is analyzing trends in spectral signatures through time. For vegetation, the standard method of doing this involves the normalized difference vegetation index (NDVI) or near infrared signal during a growing season, as both increase while plants grow and decrease during senescence. If temporal resolution were high and clouds did not obstruct landscape views, this approach could work across the globe. However, in tropical regions that are increasingly important for global food production, often there is not enough spectral information to monitor landscape change due to persistent cloud cover. In these instances, synthetic aperture radar (SAR) data provides a useful alternative to shorter wavelength components of the spectrum since its longer wavelengths can penetrate clouds. This analysis uses the cloud-based platform developed by Descartes Labs to explore the utility of Sentinel-1 data in cloudy tropical regions, using the Mekong River Delta in southern Vietnam as a case study. We compare phenological growing patterns derived from Sentinel-1 data with those from Landsat and MODIS imagery, which are the most commonly used sensors to map land cover and land use across the globe. Using these SAR-derived phenology curves, it is possible to monitor landscape changes in near real-time, while also visualizing and quantifying the rates of agricultural intensification. Descartes Labs is a venture-backed remote sensing startup founded in 2014 by a group of scientists from the Los Alamos National Laboratory in New Mexico. Since its inception, the team at Descartes has assembled all available satellite imagery from the USGS Landsat and NASA MODIS programs, and has analyzed over 2.8 quadrillion pixels of satellite imagery. With a focus on food security and climate change, the company has succeeded at estimating United States corn yields earlier and more accurately than USDA estimates. 
Now, this technology is being applied to within-season forecasting of acreage and yields in near real-time, while also branching out beyond the US to other regions including South America and Asia.
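The phenology curves described above reduce to locating features in a vegetation-index time series. A minimal NDVI sketch with peak-of-season detection; the dates and reflectance values in the test are invented, not data from the study:

```python
def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    return (nir - red) / (nir + red)


def peak_of_season(dates, red, nir):
    """Date and value of maximum NDVI, marking the peak of the growing season."""
    series = [ndvi(r, n) for r, n in zip(red, nir)]
    i = max(range(len(series)), key=series.__getitem__)
    return dates[i], series[i]
```

With SAR backscatter substituted for NDVI in cloudy regions, the same peak-finding logic supports the within-season monitoring the abstract describes.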
Epilepsy analytic system with cloud computing.
Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei
2013-01-01
Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing these big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture for analyzing epilepsy. Several modern analytic functions, namely the wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
Providing Access and Visualization to Global Cloud Properties from GEO Satellites
NASA Astrophysics Data System (ADS)
Chee, T.; Nguyen, L.; Minnis, P.; Spangenberg, D.; Palikonda, R.; Ayers, J. K.
2015-12-01
Providing public access to cloud macro and microphysical properties is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a tool and method that allows end users to easily browse and access cloud information that is otherwise difficult to acquire and manipulate. The core of the tool is an application-programming interface that is made available to the public. One goal of the tool is to provide a demonstration to end users so that they can use the dynamically generated imagery as an input into their own work flows for both image generation and cloud product requisition. This project builds upon NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product imagery accessible and easily searchable. As we see the increasing use of virtual supply chains that provide additional value at each link there is value in making satellite derived cloud product information available through a simple access method as well as allowing users to browse and view that imagery as they need rather than in a manner most convenient for the data provider. Using the Open Geospatial Consortium's Web Processing Service as our access method, we describe a system that uses a hybrid local and cloud based parallel processing system that can return both satellite imagery and cloud product imagery as well as the binary data used to generate them in multiple formats. The images and cloud products are sourced from multiple satellites and also "merged" datasets created by temporally and spatially matching satellite sensors. Finally, the tool and API allow users to access information that spans the time ranges that our group has information available. In the case of satellite imagery, the temporal range can span the entire lifetime of the sensor.
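The abstract names the OGC Web Processing Service as the access method. A sketch of building a WPS 1.0 key-value-pair Execute request; the `service`/`version`/`request` keys follow the WPS KVP convention, while the process identifier and inputs are invented for illustration, not the actual Langley API:

```python
from urllib.parse import urlencode


def wps_execute_url(base, identifier, inputs):
    """Build an OGC WPS 1.0 KVP Execute request URL.

    inputs -- mapping of data-input names to values; joined with ';'
              as the KVP DataInputs convention specifies.
    """
    data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    return base + "?" + urlencode({
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": identifier,
        "datainputs": data_inputs,
    })
```

An end user's workflow could call such a URL to pull either rendered cloud-product imagery or the underlying binary data, as the API described above allows.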
Fast DRR generation for 2D to 3D registration on GPUs.
Tornai, Gábor János; Cserey, György; Pappas, Ion
2012-08-01
The generation of digitally reconstructed radiographs (DRRs) is the most time-consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image-guided interventions. This work presents optimized DRR rendering on graphics processing units (GPUs) and compares the performance achievable on four commercially available devices. A ray-cast-based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi-online, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image-guided interventions, where the registration is continuously performed to match the real-time x-ray.
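The core of DRR generation is integrating attenuation along rays cast through the CT volume. A scalar CPU sketch of one ray, using nearest-neighbour sampling for brevity (real renderers interpolate, and the GPU version parallelizes this per output pixel):

```python
def drr_pixel(volume, origin, direction, step=1.0, n_steps=32):
    """Accumulate attenuation along one ray through a voxel volume.

    volume    -- 3D nested list indexed [z][y][x] of attenuation values
    origin    -- ray start point (x, y, z) in voxel coordinates
    direction -- unit step direction (dx, dy, dz)
    """
    acc = 0.0
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(n_steps):
        ix, iy, iz = int(round(x)), int(round(y)), int(round(z))
        if (0 <= iz < len(volume) and 0 <= iy < len(volume[0])
                and 0 <= ix < len(volume[0][0])):
            acc += volume[iz][iy][ix] * step  # skip samples outside the volume
        x, y, z = x + dx * step, y + dy * step, z + dz * step
    return acc
```

Since every pixel's ray is independent, mapping one GPU thread per pixel (with the block size tuned per device, as in the paper) is the natural parallelization.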
Real-time dual-band haptic music player for mobile devices.
Hwang, Inwook; Lee, Hyeseon; Choi, Seungmoon
2013-01-01
We introduce a novel dual-band haptic music player for real-time simultaneous vibrotactile playback with music in mobile devices. Our haptic music player features a new miniature dual-mode actuator that can produce vibrations consisting of two principal frequencies and a real-time vibration generation algorithm that can extract vibration commands from a music file for dual-band playback (bass and treble). The algorithm uses a "haptic equalizer" and provides plausible sound-to-touch modality conversion based on human perceptual data. In addition, we present a user study carried out to evaluate the subjective performance (precision, harmony, fun, and preference) of the haptic music player, in comparison with the current practice of bass-band-only vibrotactile playback via a single-frequency voice-coil actuator. The evaluation results indicated that the new dual-band playback outperforms the bass-only rendering, also providing several insights for further improvements. The developed system and experimental findings have implications for improving the multimedia experience with mobile devices.
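Extracting bass and treble command streams from audio can be sketched with the simplest possible band split: a moving-average low-pass whose residual approximates the high band. This is a stand-in for the paper's actual vibration-generation algorithm, not a reconstruction of it:

```python
def split_bands(samples, window=8):
    """Split a sample stream into crude bass and treble bands.

    A causal moving average acts as the low-pass (bass) filter; the
    residual approximates the treble band. A real haptic player would
    use properly designed filters and perceptual mapping.
    """
    bass = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        bass.append(sum(samples[lo:i + 1]) / (i - lo + 1))
    treble = [s - b for s, b in zip(samples, bass)]
    return bass, treble
```

Each band's envelope could then drive one of the dual-mode actuator's two principal frequencies, analogous to the bass/treble playback described above.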
NAIMA as a solution for future GMO diagnostics challenges.
Dobnik, David; Morisset, Dany; Gruden, Kristina
2010-03-01
In the field of genetically modified organism (GMO) diagnostics, real-time PCR has been the method of choice for target detection and quantification in most laboratories. Despite its numerous advantages, however, the lack of a true multiplexing option may render real-time PCR less practical in the face of future GMO detection challenges such as the multiplicity and increasing complexity of new transgenic events, as well as the repeated occurrence of unauthorized GMOs on the market. In this context, we recently reported the development of a novel multiplex quantitative DNA-based target amplification method, named NASBA implemented microarray analysis (NAIMA), which is suitable for sensitive, specific and quantitative detection of GMOs on a microarray. In this article, the performance of NAIMA is compared with that of real-time PCR, the focus being their performances in view of the upcoming challenge to detect/quantify an increasing number of possible GMOs at a sustainable cost and affordable staff effort. Finally, we present our conclusions concerning the applicability of NAIMA for future use in GMO diagnostics.
Intelligent Multi-Media Presentation Using Rhetorical Structure Theory
2015-01-01
information repeatedly, on demand, and without imposing an additional manning burden. Virtual Advisers can be delivered in several ways: as a...up text which identifies what content is to be said in addition to how that content is to be emotionally expressed. </say> <say> Using real-time...development of new rendering engines. These toolkits provide additional common underlying functionality such as: pluggable audio (via OpenAL/JOAL)
Development and Evaluation of Sterographic Display for Lung Cancer Screening
2008-12-01
burden. Application of GPUs – With the evolution of commodity graphics processing units (GPUs) for accelerating games on personal computers, over the...units, which are designed for rendering computer games, are readily available and can be programmed to perform the kinds of real-time calculations...575-581, 1994. 12. Anderson CM, Saloner D, Tsuruda JS, Shapeero LG, Lee RE. "Artifacts in maximum-intensity-projection display of MR angiograms
Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2
NASA Astrophysics Data System (ADS)
Makar, Robert J.; O'Toole, Brian E.
1998-07-01
An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc.® (SGI) Onyx2™ has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality®, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
NASA Technical Reports Server (NTRS)
Susskind, Joel
2008-01-01
AIRS/AMSU is the advanced IR/MW atmospheric sounding system launched on EOS Aqua in May 2002. Products derived from AIRS/AMSU include surface skin temperature and atmospheric temperature profiles; atmospheric humidity profiles, percent cloud cover and cloud top pressure, and OLR. Near-real-time products, starting with September 2002, have been derived from AIRS/AMSU using the AIRS Science Team Version 5 retrieval algorithm. Results in this paper include products through April 2008. The time period studied is marked by a substantial warming trend of Northern Hemisphere Extratropical land surface skin temperatures, as well as pronounced El Nino - La Nina episodes. These both influence the spatial and temporal anomaly patterns of atmospheric temperature and moisture profiles, as well as of cloud cover and Clear Sky and All Sky OLR. The relationships between temporal and spatial anomalies of these parameters over this time period, as determined from AIRS/AMSU observations, are shown below, with particular emphasis on which parameters contribute significantly to OLR anomalies in each of the tropics and extra-tropics. The ability to match this data represents a good test of a model's response to El Nino.
A service brokering and recommendation mechanism for better selecting cloud services.
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper presents a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
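The preference-aware evaluation step described above reduces, at its core, to scoring each candidate configuration as a weighted sum of normalized criteria. A minimal sketch of that idea; the criteria values, the weights, and the convention of pre-inverting cost so that higher is always better are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical candidate solutions scored on normalized criteria in [0, 1]
# (columns: computing capability, cost score, SLA; cost is pre-inverted
# so that a higher value always means a better option)
criteria = np.array([[0.9, 0.4, 0.95],
                     [0.6, 0.8, 0.90],
                     [0.8, 0.6, 0.99]])

# User preference weights over the three criteria (sum to 1)
weights = np.array([0.5, 0.3, 0.2])

scores = criteria @ weights        # one preference-weighted score per solution
best = int(np.argmax(scores))      # index of the recommended solution
```

Normalizing every criterion to the same direction and range before weighting is what makes a single argmax meaningful across heterogeneous factors such as cost and SLA.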
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.
Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry
2012-12-01
Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
NASA Astrophysics Data System (ADS)
Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.
2009-02-01
We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self- and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
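The radial basis functions network described above can be sketched generically as Gaussian-RBF kernel ridge regression, with every training sample doubling as a center. The 6-D inputs and the scalar "force response" below are synthetic stand-ins, not the authors' recorded tool-contact data or their optimized network:

```python
import numpy as np

def gaussian_kernel(X, centers, gamma):
    # Gaussian RBF activation for every (input, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, gamma, reg=1e-6):
    # Kernel ridge regression: solve (K + reg*I) w = y for the output weights
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(Xq, X, w, gamma):
    return gaussian_kernel(Xq, X, gamma) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 6))            # 6-D contact/deformation inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2    # smooth stand-in force response
w = fit(X, y, gamma=2.0)
err = np.abs(predict(X, X, w, gamma=2.0) - y).mean()  # training-fit error
```

Because the model lives entirely in the tool's local input space, nothing in the fit depends on the recording or rendering device, which mirrors the portability argument made in the abstract.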
NASA Technical Reports Server (NTRS)
Uthe, Edward E.
1990-01-01
SRI has assembled an airborne lidar/radiometric instrumentation suite for mapping cirrus cloud distribution and analyzing cirrus cloud optical properties. Operation of upward viewing infrared radiometers from an airborne platform provides the optimum method of measuring high altitude cold cloud radiative properties with minimum interference from the thermal emission by the earth's surface and lower atmospheric components. Airborne installed sensors can also operate over large regional areas including water, urban, and mountain surfaces and above lower atmospheric convective clouds and haze layers. Currently available sensors installed on the SRI Queen Air aircraft are illustrated. Lidar and radiometric data records are processed for real time viewing on a color video screen. A cirrus cloud data example is presented as a black and white reproduction of a color display of data. At the aircraft altitude of 12,000 ft, the 8 to 14 micron atmospheric radiation background was equivalent to a blackbody temperature of about -60 C and, therefore, the radiometer did not respond strongly to low density cirrus cloud concentrations detected by the lidar. Cloud blackbody temperatures (observed by radiometer) are shown plotted against midcloud temperatures (derived from lidar observed cloud heights and supporting temperature profiles) for data collected on 30 June and 28 July.
DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.
2017-12-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The success of most existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster, and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multi-spectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topography mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
Coaxial digital holography measures particulate matter in cloud and ambient atmosphere
NASA Astrophysics Data System (ADS)
Li, Baosheng; Yu, Haonan; Jia, Yizhen; Tao, Xiaojie; Zhang, Yang
2018-02-01
In weather modification operations, the detection of cloud droplet particles provides an important reference for assessing the effectiveness of artificial weather intervention. Digital holography has the unique advantages of being full-field, non-contact, non-destructive, real-time and quantitative. In this paper, coaxial digital holography is used to record polyethylene standard particles and aluminum scrap, and important parameters, such as the three-dimensional spatial distribution of coordinates and the particle size, are obtained by analyzing the digital hologram of the particles. The coaxial digital holographic system was constructed and the particle measurements completed; the experimental results verify the feasibility of applying the coaxial digital holographic device to the measurement of cloud parameters.
Color-rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-10-01
Human perception of material colors depends heavily on the nature of the light sources that are used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. On the basis of state-of-the-art colorimetric methods, we present a modern approach for the calculation of color-rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: first, we use the CIELAB color space; second, we apply a linearized Bradford transformation for chromatic adaptation; and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda [Proc. CGIV'06 Conf., Leeds, UK, pp. 283-286 (2006)] have shown for the cube model that diffuse interreflections can influence the CRI of a light source. We analyze how color-rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB-based renderer for reasons of accuracy of CRI calculations.
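The linearized Bradford transformation mentioned above adapts tristimulus values between illuminants by a von Kries-style scaling in the Bradford cone-response space. A minimal sketch using the standard CIE 2-degree white points for illuminants A and D65; the sample XYZ triple is made up for illustration:

```python
import numpy as np

# Bradford cone-response matrix; the linearized transform omits the
# nonlinearity the full Bradford model applies to the blue channel
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, white_src, white_dst):
    # Scale cone-like responses by the destination/source white-point ratio,
    # then map back to XYZ
    scale = (M @ white_dst) / (M @ white_src)
    return np.linalg.inv(M) @ (scale * (M @ xyz))

# CIE 2-degree white points (Y normalized to 100)
A   = np.array([109.85, 100.0,  35.58])   # illuminant A
D65 = np.array([ 95.047, 100.0, 108.883]) # illuminant D65

# Hypothetical sample measured under illuminant A, adapted to D65
xyz_adapted = bradford_adapt(np.array([50.0, 40.0, 20.0]), A, D65)
```

By construction the source white maps exactly onto the destination white, which is the defining property of a von Kries-type adaptation.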
A client–server framework for 3D remote visualization of radiotherapy treatment space
Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.
2013-01-01
Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi graphics processing units (GPU) system. The rendered 3D images were then encoded using a GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Fisher, W.; Yoksas, T.
2014-12-01
Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high student expectations. These changes are upending traditional approaches to accessing and using data and software. It is clear that Unidata's products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable, allowing their use in researchers' own cloud-based computing environments.
In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our initial efforts to deploy a subset of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer visualization tool.
NASA Astrophysics Data System (ADS)
Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.
2005-03-01
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
Real-Time Cloud, Radiation, and Aircraft Icing Parameters from GOES over the USA
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Nguyen, Louis; Smith, William, Jr.; Young, David; Khaiyer, Mandana; Palikonda, Rabindra; Spangenberg, Douglas; Doelling, Dave; Phan, Dung; Nowicki, Greg
2004-01-01
A preliminary new, physically based method for real-time estimation of the probability of icing conditions has been demonstrated using merged GOES-10 and 12 data over the continental United States and southern Canada. The algorithm produces pixel-level cloud and radiation properties as well as an estimate of icing probability with an associated intensity rating. Because icing depends on so many different variables, such as aircraft size or air speed, it is not possible to achieve 100% success with this or any other type of approach. This initial algorithm, however, shows great promise for diagnosing aircraft icing and putting it at the correct altitude within 0.5 km most of the time. Much additional research must be completed before it can serve as a reliable input for the operational CIP. The delineation of the icing layer vertical boundaries will need to be improved using either the RUC or balloon soundings or ceilometer data to adjust the cloud base height, a change that would require adjustment of the cloud-top altitude also.
Nordahl, Rolf; Turchet, Luca; Serafin, Stefania
2011-09-01
We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
An application of the MPP to the interactive manipulation of stereo images of digital terrain models
NASA Technical Reports Server (NTRS)
Pol, Sanjay; Mcallister, David; Davis, Edward
1987-01-01
Massively Parallel Processor algorithms were developed for the interactive manipulation of flat shaded digital terrain models defined over grids. The emphasis is on real time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations followed by shading and a perspective projection to produce the right eye image. The surface is then rendered using a simple painter's algorithm for hidden surface removal. The left eye image is produced by rotating the surface 6 degs about the viewer's y axis followed by a perspective projection and rendering of the image as described above. The left and right eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
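The stereo-pair construction described above (project for the right eye, rotate the surface 6 degrees about the viewer's y axis, project again for the left eye) can be sketched as follows. The grid size, focal distance, and terrain function are illustrative stand-ins for the MPP's 128 x 128 elevation grid; shading and hidden-surface removal are omitted:

```python
import numpy as np

def perspective(points, d=5.0):
    # Project 3-D points onto a 2-D image plane for an eye at distance d
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    s = d / (d + z)
    return np.stack([x * s, y * s], axis=1)

def rot_y(deg):
    # Rotation matrix about the viewer's y axis
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

# Toy terrain: a small grid of elevations standing in for the 128 x 128 grid
n = 8
gx, gz = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
elev = 0.2 * np.sin(np.pi * gx) * np.cos(np.pi * gz)
pts = np.stack([gx.ravel(), elev.ravel(), gz.ravel()], axis=1)

right_eye = perspective(pts)               # right-eye image
left_eye  = perspective(pts @ rot_y(6).T)  # rotate 6 deg, project for left eye
```

The two projected point sets would then be rendered with the painter's algorithm and presented through standard stereo display technology, as the abstract describes.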
A new approach to subjectively assess quality of plenoptic content
NASA Astrophysics Data System (ADS)
Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj
2016-09-01
Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Through image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of content consumption enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and, consequently, visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.
NASA Astrophysics Data System (ADS)
Bertin, Clément; Cros, Sylvain; Saint-Antonin, Laurent; Schmutz, Nicolas
2015-10-01
The growing demand for high-speed broadband communications with low orbital or geostationary satellites is a major challenge. Using an optical link at 1.55 μm is an advantageous solution which can potentially increase the satellite throughput by a factor of 10. Nevertheless, cloud cover is an obstacle for this optical frequency. Such communication requires an innovative management system to optimize the optical link availability between a satellite and several Optical Ground Stations (OGS). The Saint-Exupery Technological Research Institute (France) leads the project ALBS (French acronym for BroadBand Satellite Access). This initiative, involving small and medium enterprises, industrial groups and research institutions specialized in aeronautics and space industries, is currently developing various solutions to increase the telecommunication satellite bandwidth. This paper presents the development of a preliminary prediction system anticipating cloud blockage of an optical link between a satellite and a given OGS. An infrared thermal camera continuously observes (night and day) the sky vault. Cloud patterns are observed and classified several times a minute. The impact of the detected clouds on the optical beam (obstruction or not) is determined by the retrieval of the cloud optical depth at the wavelength of communication. This retrieval is based on realistic cloud modelling in libRadtran. Then, using subsequent images, cloud speed and trajectory are estimated. Cloud blockage over an OGS can then be forecast up to 30 minutes ahead. With this information, a new link between the satellite and another OGS under a clear sky can be prepared before the current link breaks due to cloud blockage.
Providing Diurnal Sky Cover Data at ARM Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klebe, Dimitri I.
2015-03-06
The Solmirus Corporation was awarded two-year funding to perform a comprehensive data analysis of observations made during Solmirus’ 2009 field campaign (conducted from May 21 to July 27, 2009 at the ARM SGP site) using their All Sky Infrared Visible Analyzer (ASIVA) instrument. The objective was to develop a suite of cloud property data products for the ASIVA instrument that could be implemented in real time and tailored for cloud modelers. This final report describes Solmirus’ research and findings enabled by this grant. The primary objective of this award was to develop a diurnal sky cover (SC) data product utilizing the ASIVA’s infrared (IR) radiometrically-calibrated data, which is described in detail. Other data products discussed in this report include the sky cover derived from ASIVA’s visible channel and precipitable water vapor, cloud temperature (both brightness and color), and cloud height inferred from ASIVA’s IR channels.
Helmet-Mounted Display Of Clouds Of Harmful Gases
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Barengoltz, Jack B.; Schober, Wayne R.
1995-01-01
Proposed helmet-mounted opto-electronic instrument provides real-time stereoscopic views of clouds of otherwise invisible toxic, explosive, and/or corrosive gas. Display semitransparent: images of clouds superimposed on scene ordinarily visible to wearer. Images give indications on sizes and concentrations of gas clouds and their locations in relation to other objects in scene. Instruments serve as safety devices for astronauts, emergency response crews, fire fighters, people cleaning up chemical spills, or anyone working near invisible hazardous gases. Similar instruments used as sensors in automated emergency response systems that activate safety equipment and emergency procedures. Both helmet-mounted and automated-sensor versions used at industrial sites, chemical plants, or anywhere dangerous and invisible or difficult-to-see gases present. In addition to helmet-mounted and automated-sensor versions, there could be hand-held version. In some industrial applications, desirable to mount instruments and use them similarly to parking-lot surveillance cameras.
The role of clouds and oceans in global greenhouse warming. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffert, M.I.
1996-10-01
This research focuses on assessing connections between anthropogenic greenhouse gas emissions and global climatic change. It has been supported since the early 1990s in part by the DOE "Quantitative Links" Program (QLP). A three-year effort was originally proposed to the QLP to investigate effects of global cloudiness on global climate and its implications for cloud feedback, and to continue the development and application of climate/ocean models, with emphasis on coupled effects of greenhouse warming and feedbacks by clouds and oceans. It is well known that cloud and ocean processes are major sources of uncertainty in the ability to predict climatic change from humankind's greenhouse gas and aerosol emissions. It has always been the objective to develop timely and useful analytical tools for addressing real-world policy issues stemming from anthropogenic climate change.
A Fast and Flexible Method for Meta-Map Building for ICP Based SLAM
NASA Astrophysics Data System (ADS)
Kurian, A.; Morin, K. W.
2016-06-01
Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data which in turn improves the coverage and details of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. If we have good GNSS coverage, building a map is a well addressed problem. But in an indoor environment, we have limited GNSS reception and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
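The sub-sampling idea above can be illustrated with a generic voxel-grid reduction: keep one representative point (the centroid) per occupied voxel. This is a minimal sketch of point-count reduction for ICP in general, not the authors' feature-preserving scheme; the function name and voxel size are illustrative assumptions.

```python
# Toy voxel-grid sub-sampling of a 3D point cloud: one centroid per
# occupied voxel. Illustrates reducing ICP input size; NOT the paper's
# feature-preserving strategy.
from collections import defaultdict

def voxel_subsample(points, voxel_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per voxel."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)  # integer voxel index
        buckets[key].append(p)
    out = []
    for pts in buckets.values():
        n = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return out

# Dense cluster plus one distant outlier: the cluster collapses to one point.
cloud = [(0.01 * i, 0.0, 0.0) for i in range(10)] + [(5.0, 5.0, 5.0)]
reduced = voxel_subsample(cloud, voxel_size=1.0)
```

A real pipeline would replace the centroid rule with a criterion that preserves geometric features (edges, planes) before feeding the reduced cloud to ICP.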
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
…in order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where… Statement of the Problem Studied: As cloud computing becomes the dominant computational infrastructure [1] and cloud technologies make a transition to hosting… 1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and…
A New Unsteady Model for Dense Cloud Cavitation in Cryogenic Fluids
NASA Technical Reports Server (NTRS)
Hosangadi, A.; Ahuja, V.
2005-01-01
A new unsteady cavitation model is presented wherein the phase-change process (bubble growth/collapse) is coupled to the acoustic field in a cryogenic fluid. It predicts the number density and radius of bubbles in vapor clouds by tracking both the aggregate surface area and volume fraction of the cloud. Hence, formulations for the dynamics of individual bubbles (e.g., the Rayleigh-Plesset equation) may be integrated within the macroscopic context of a dense vapor cloud, i.e., a cloud that occupies a significant fraction of available volume and contains numerous bubbles. This formulation has been implemented within the CRUNCH CFD code, which has a compressible real-fluid formulation and a multi-element, unstructured grid framework, and has been validated extensively for liquid rocket turbopump inducers. Detailed unsteady simulations of a cavitating ogive in liquid nitrogen are presented where time-averaged mean cavity pressure and temperature depressions due to cavitation are compared with experimental data. The model also provides the spatial and temporal history of the bubble size distribution in the vapor clouds that are shed, an important physical parameter that is difficult to measure experimentally and is a significant advancement in the modeling of dense cloud cavitation.
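Tracking the two aggregate quantities named above (volume fraction and interfacial area) is enough to recover a radius and number density if one assumes a monodisperse cloud of spherical bubbles. The sketch below shows that algebra only; the monodisperse assumption and variable names are mine, not from the CRUNCH CFD formulation.

```python
# From alpha = N*(4/3)*pi*r^3 and A = N*4*pi*r^2 (spheres, one size):
#   r = 3*alpha/A  and  N = A/(4*pi*r^2).
import math

def bubble_stats(alpha, area_density):
    """alpha: vapor volume fraction; area_density: bubble surface area
    per unit mixture volume [1/m]. Returns (radius [m], number density [1/m^3])."""
    r = 3.0 * alpha / area_density
    n = area_density / (4.0 * math.pi * r ** 2)
    return r, n

# Round trip: 1000 bubbles per m^3 of radius 1 mm.
n_true, r_true = 1000.0, 1e-3
alpha = n_true * (4.0 / 3.0) * math.pi * r_true ** 3
area = n_true * 4.0 * math.pi * r_true ** 2
r, n = bubble_stats(alpha, area)
```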
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the northeast, from between the cloud layers and above the streaks in the lower cloud leading towards the hotspot. The upper haze layer has some features that match the lower cloud, such as the bright streak in the foreground of the frame. These are probably thick clouds that span several tens of vertical kilometers.Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations.The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25.The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze.The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size. 
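The two-layer height mapping described above reduces to two proportionalities. A minimal sketch, with the proportionality constants, layer separation, and reflectivity values invented for illustration (the text says the separation is arbitrary and the exaggeration factor is 25):

```python
# Haze height ~ reflectivity at 889 nm; lower-cloud height ~ (727 nm / 756 nm)
# reflectivity ratio; heights exaggerated 25x and the layers separated by an
# arbitrary offset, as in the visualization described above.
def layer_heights(r727, r756, r889, separation=30.0, exaggeration=25.0):
    haze = exaggeration * r889 + separation   # haze rendered above lower cloud
    lower = exaggeration * (r727 / r756)
    return haze, lower

haze_h, cloud_h = layer_heights(r727=0.4, r756=0.5, r889=0.2)
```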
These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the northeast, from between the cloud layers and above the streaks in the lower cloud leading towards the hotspot. The hotspot is clearly visible as a deep blue feature. The cloud streaks end near the hotspot, consistent with the idea that clouds traveling along these streak lines descend and evaporate as they approach the hotspot. The upper haze layer is slightly bowed upwards above the hotspot.Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations.The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25.The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze.The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. 
The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
NASA Astrophysics Data System (ADS)
Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.
2012-04-01
PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry or fruit farming, by reducing the risk of economic decisions that depend on weather conditions. Within this study, a system for predicting sudden convective phenomena (storms or tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed so far. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions on consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links the cells of the image at time t to those of the following image at time t+dt that correspond to the same cloud system (the Morel-Senesi algorithm). Automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of developing a near real-time prediction system.
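The threshold-and-overlap tracking described above can be sketched in a few lines: threshold the brightness-temperature grid, label connected cells, and link a cell at time t to the cell at t+dt that shares pixel positions with it. The grids, threshold value, and 4-connectivity below are synthetic assumptions, not PROZA settings.

```python
# Morel-Senesi-style association sketch: cold cells defined by a temperature
# threshold, linked across frames by overlapping pixels.
def label_cells(mask):
    """4-connected component labelling of a 2D boolean grid (stack-based BFS)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    nxt = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and labels[i][j] == 0:
                nxt += 1
                stack = [(i, j)]
                labels[i][j] = nxt
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < rows and 0 <= y < cols and \
                           mask[x][y] and labels[x][y] == 0:
                            labels[x][y] = nxt
                            stack.append((x, y))
    return labels, nxt

def link_by_overlap(temp_t, temp_t1, threshold):
    """Return {cell label at t: cell label at t+dt} for cells sharing pixels."""
    mask_t = [[v <= threshold for v in row] for row in temp_t]
    mask_t1 = [[v <= threshold for v in row] for row in temp_t1]
    lab_t, _ = label_cells(mask_t)
    lab_t1, _ = label_cells(mask_t1)
    links = {}
    for i, row in enumerate(lab_t):
        for j, c in enumerate(row):
            if c and lab_t1[i][j]:
                links[c] = lab_t1[i][j]
    return links

# One cold cell (210 K) drifting one column right between frames.
frame0 = [[300, 210, 210, 300], [300, 210, 210, 300]]
frame1 = [[300, 300, 210, 210], [300, 300, 210, 210]]
links = link_by_overlap(frame0, frame1, threshold=220)
```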
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are not only suitable for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. When considering the existing approaches for modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and the finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics; for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
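The trade-off between utilization and startup latency can be captured in a back-of-the-envelope cost model: more groups give higher pipeline throughput, but each group is smaller (slower per volume). All cost constants and the model form below are hypothetical illustrations, not measurements or formulas from the paper.

```python
# Toy pipelined-rendering cost model: with G groups of P/G processors each,
# one stage costs (io + render/groupsize); the pipeline fills once, then
# the G groups complete volumes concurrently.
def total_time(num_procs, num_groups, num_vols, io_cost, render_work):
    group = num_procs // num_groups
    per_vol = io_cost + render_work / group   # one pipeline-stage time
    return per_vol + (num_vols - 1) * per_vol / num_groups

def best_partition(num_procs, num_vols, io_cost, render_work):
    candidates = [g for g in range(1, num_procs + 1) if num_procs % g == 0]
    return min(candidates,
               key=lambda g: total_time(num_procs, g, num_vols,
                                        io_cost, render_work))

g = best_partition(num_procs=16, num_vols=64, io_cost=1.0, render_work=8.0)
```

Even this crude model reproduces the qualitative finding: an interior optimum exists between "one big group" (no I/O overlap) and "all singleton groups" (slow per-volume rendering).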
Real-time simulation of the nonlinear visco-elastic deformations of soft tissues.
Basafa, Ehsan; Farahmand, Farzam
2011-05-01
Mass-spring-damper (MSD) models are often used for real-time surgery simulation due to their fast response and fairly realistic deformation replication. An improved real time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was developed and tested. The mechanical realization of conventional MSD models was improved using nonlinear springs and nodal dampers, while their high computational efficiency was maintained using an adapted implicit integration algorithm. New practical algorithms for model parameter tuning, collision detection, and simulation were incorporated. The model was able to replicate complex biological soft tissue mechanical properties under large deformations, i.e., the nonlinear and viscoelastic behaviors. The simulated response of the model, after tuning of its parameters to experimental data from a deer liver sample, closely tracked the reference data with high correlation and maximum relative differences of less than 5 and 10% for the tuning and testing data sets, respectively. Finally, implementation of the proposed model and algorithms in a graphical environment resulted in a real-time simulation with update rates of 150 Hz for interactive deformation and haptic manipulation, and 30 Hz for visual rendering. The proposed real time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was efficient, realistic, and accurate in ex vivo testing. This model is a suitable candidate for testing in vivo during laparoscopic surgery.
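A one-node toy of the nonlinear-spring idea shows the qualitative behavior: a stiffening spring makes displacement grow sublinearly with indenter force. Semi-implicit Euler here is a simple stand-in for the paper's adapted implicit integrator, and all coefficients are invented for illustration, not tuned liver parameters.

```python
# 1D nonlinear mass-spring-damper node under a constant indenter force,
# stepped with semi-implicit (symplectic) Euler. Illustrative only.
def simulate(force, steps=2000, dt=1e-3, mass=0.01, damping=0.8,
             k1=50.0, k3=5000.0):
    x, v = 0.0, 0.0
    for _ in range(steps):
        spring = -k1 * x - k3 * x ** 3          # stiffening (cubic) spring
        a = (force + spring - damping * v) / mass
        v += dt * a                              # semi-implicit: velocity first
        x += dt * v
    return x

x_small = simulate(force=1.0)    # settles near k1*x + k3*x^3 = 1
x_large = simulate(force=10.0)   # 10x the force, less than 10x the deflection
```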
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.
2017-12-01
UNAVCO, in its role as a NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well-established, significant questions about suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations). In contrast to the 2-3 year time scale for the research cycle, the time scale of operation and planning for NSF facilities is for a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similar deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility scale production services in the cloud. 
The work includes running parallel equivalent cloud-based services to on premises services and includes: data serving via ftp from a large data store, operation of a metadata database, production scale processing of multiple months of geodetic data, web services delivery of quality checked data and products, large-scale compute services for event post-processing, and serving real time data from a network of 700-plus GPS stations. The evaluation is based on a suite of metrics that we have developed to elucidate the effectiveness of cloud-based services in price, performance, and management. Services are currently running in AWS and evaluation is underway.
Characterizing the Relationships Among Lightning and Storm Parameters: Lightning as a Proxy Variable
NASA Technical Reports Server (NTRS)
Goodman, S. J.; Raghavan, R.; William, E.; Weber, M.; Boldi, B.; Matlin, A.; Wolfson, M.; Hodanish, S.; Sharp, D.
1997-01-01
We have gained important insights from prior studies that have suggested relationships between lightning and storm growth, decay, convective rain flux, vertical distribution of storm mass and echo volume in the region, and storm energetics. A study was initiated in the Summer of 1996 to determine how total (in-cloud plus ground) lightning observations might provide added knowledge to the forecaster in the determination and identification of severe thunderstorms and weather hazards in real-time. The Melbourne Weather Office was selected as a primary site to conduct this study because Melbourne is the only site in the world with continuous and open access to total lightning (LDAR) data and a Doppler (WSR-88D) radar. A Lightning Imaging Sensor Data Applications Demonstration (LISDAD) system was integrated into the forecaster's workstation during the Summer 1996 to allow the forecaster to interact in real-time with the multi-sensor data being displayed. LISDAD currently ingests LDAR data, the cloud-to-ground National Lightning Detection Network (NLDN) data, and the Melbourne radar data in real-time. The interactive features provide the duty forecaster the ability to perform quick diagnostics on storm cells of interest. Upon selection of a storm cell, a pop-up box appears displaying the time-history of various storm parameters (e.g., maximum radar reflectivity, height of maximum reflectivity, echo-top height, NLDN and LDAR lightning flash rates, storm-based vertically integrated liquid water content). This product is archived to aid in detailed post-analysis.
Multidecadal Changes in Near-Global Cloud Cover and Estimated Cloud Cover Radiative Forcing
NASA Technical Reports Server (NTRS)
Norris, Joel
2005-01-01
The first paper was Multidecadal changes in near-global cloud cover and estimated cloud cover radiative forcing, by J. R. Norris (2005, J. Geophys. Res. - Atmos., 110, D08206, doi:10.1029/2004JD005600). This study examined variability in zonal mean surface-observed upper-level (combined midlevel and high-level) and low-level cloud cover over land during 1971-1996 and over ocean during 1952-1997. These data were averaged from individual synoptic reports in the Extended Edited Cloud Report Archive (EECRA). Although substantial interdecadal variability is present in the time series, long-term decreases in upper-level cloud cover occur over land and ocean at low and middle latitudes in both hemispheres. Near-global upper-level cloud cover declined by 1.5%-sky-cover over land between 1971 and 1996 and by 1.3%-sky-cover over ocean between 1952 and 1997. Consistency between EECRA upper-level cloud cover anomalies and those from the International Satellite Cloud Climatology Project (ISCCP) during 1984-1997 suggests the surface-observed trends are real. The reduction in surface-observed upper-level cloud cover between the 1980s and 1990s is also consistent with the decadal increase in all-sky outgoing longwave radiation reported by the Earth Radiation Budget Satellite (ERBS). Discrepancies occur between time series of EECRA and ISCCP low-level cloud cover due to identified and probable artifacts in satellite and surface cloud data. Radiative effects of surface-observed cloud cover anomalies, called "cloud cover radiative forcing (CCRF) anomalies," are estimated based on a linear relationship to climatological cloud radiative forcing per unit cloud cover. Zonal mean estimated longwave CCRF has decreased over most of the globe. Estimated shortwave CCRF has become slightly stronger over northern midlatitude oceans and slightly weaker over northern midlatitude land areas.
A long-term decline in the magnitude of estimated shortwave CCRF occurs over low-latitude land and ocean, but comparison with ERBS all-sky reflected shortwave radiation during 1985-1997 suggests this decrease may be underestimated.
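The linear CCRF estimate described above amounts to scaling a cloud-cover anomaly by the climatological cloud radiative forcing per unit cloud cover. A one-line sketch, with all numbers invented placeholders rather than values from the paper:

```python
# CCRF anomaly ~ cloud-cover anomaly * (climatological CRF per unit cover).
def estimated_ccrf_anomaly(cc_anomaly, crf_clim, cc_clim):
    """cc_anomaly, cc_clim in %-sky-cover; crf_clim in W/m^2."""
    return cc_anomaly * (crf_clim / cc_clim)

# Hypothetical: a 1.5%-sky-cover decline where climatological longwave CRF
# is +30 W/m^2 at 60% mean cover -> a -0.75 W/m^2 longwave CCRF anomaly.
anom = estimated_ccrf_anomaly(cc_anomaly=-1.5, crf_clim=30.0, cc_clim=60.0)
```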
Real-Time Very High-Resolution Regional 4D Assimilation in Supporting CRYSTAL-FACE Experiment
NASA Technical Reports Server (NTRS)
Wang, Donghai; Minnis, Patrick
2004-01-01
To better understand tropical cirrus cloud physical properties and formation processes with a view toward the successful modeling of the Earth's climate, the CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment) field experiment took place over southern Florida from 1 July to 29 July 2002. During the entire field campaign, a very high-resolution numerical weather prediction (NWP) and assimilation system was run in support of the mission with supercomputing resources provided by NASA Center for Computational Sciences (NCCS). By using NOAA NCEP Eta forecast for boundary conditions and as a first guess for initial conditions assimilated with all available observations, two nested 15/3 km grids are employed over the CRYSTAL-FACE experiment area. The 15-km grid covers the southeast US domain, and is run two times daily for a 36-hour forecast starting at 0000 UTC and 1200 UTC. The nested 3-km grid covering only southern Florida is used for 9-hour and 18-hour forecasts starting at 1500 and 0600 UTC, respectively. The forecasting system provided more accurate and higher spatial and temporal resolution forecasts of 4-D atmospheric fields over the experiment area than available from standard weather forecast models. These forecasts were essential for flight planning during both the afternoon prior to a flight day and the morning of a flight day. The forecasts were used to help decide takeoff times and the most optimal flight areas for accomplishing the mission objectives. See more detailed products on the web site http://asd-www.larc.nasa.gov/mode/crystal. The model/assimilation output gridded data are archived on the NASA Center for Computational Sciences (NCCS) UniTree system in the HDF format at 30-min intervals for real-time forecasts or 5-min intervals for the post-mission case studies. In particular, the data set includes the 3-D cloud fields (cloud liquid water, rain water, cloud ice, snow and graupel/hail).
Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery
NASA Astrophysics Data System (ADS)
Li, Xiqi; Wei, Ling; Dong, Xuechuan; Huang, Ping; Zhang, Chun; He, Yi; Shi, Guohua; Zhang, Yudong
2015-07-01
Most glaucoma surgeries involve creating new aqueous outflow pathways with the use of a small surgical instrument. This article reports a microscope-integrated, real-time, high-speed, swept-source optical coherence tomography system (SS-OCT) with a 1310-nm light source for glaucoma surgery. A special mechanism was designed to produce an adjustable system suitable for use in surgery. A two-graphics-processing-unit architecture was used to speed up the data processing and real-time volumetric rendering. The position of the surgical instrument can be monitored and measured using the microscope and a grid-inserted image of the SS-OCT. Finally, experiments were simulated to assess the effectiveness of this integrated system. Experimental results show that this system is a suitable positioning tool for glaucoma surgery.
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software package, called Celeris. Celeris is open-source software that requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of POD. Proper generalized decomposition techniques can be considered a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with some benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
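The offline/online split above is the key idea: pay for the expensive solves once, then answer queries at haptic rates from a cheap meta-model. The sketch below illustrates that split with a deliberately trivial 1D nonlinear "tissue" and table interpolation standing in for a genuine PGD separated representation; the solver, parameter range, and coefficients are all invented.

```python
# Offline: tabulate the displacement of a 1D stiffening spring for sampled
# load parameters (the "meta-model"). Online: interpolate at query time.
import bisect

def solve_offline(load, k1=50.0, k3=5000.0):
    """Slow reference solve of k1*x + k3*x^3 = load by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if k1 * mid + k3 * mid ** 3 < load:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

loads = [0.5 * i for i in range(21)]          # offline parameter sweep, 0..10
table = [solve_offline(f) for f in loads]     # computed once, offline

def evaluate_online(load):
    """Cheap online query: linear interpolation in the offline table."""
    i = min(max(bisect.bisect_left(loads, load), 1), len(loads) - 1)
    f0, f1 = loads[i - 1], loads[i]
    t = (load - f0) / (f1 - f0)
    return (1 - t) * table[i - 1] + t * table[i]

x = evaluate_online(3.3)   # constant-time lookup, suitable for a tight loop
```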
Artist's Rendering of Multiple Whirlpools in a Sodium Gas Cloud
NASA Technical Reports Server (NTRS)
2003-01-01
This image depicts the formation of multiple whirlpools in a sodium gas cloud. Scientists who cooled the cloud and made it spin created the whirlpools in a Massachusetts Institute of Technology laboratory, as part of NASA-funded research. This process is similar to a phenomenon called starquakes that appear as glitches in the rotation of pulsars in space. MIT's Wolfgang Ketterle and his colleagues, who conducted the research under a grant from the Biological and Physical Research Program through NASA's Jet Propulsion Laboratory, Pasadena, Calif., cooled the sodium gas to less than one millionth of a degree above absolute zero (-273 Celsius or -460 Fahrenheit). At such extreme cold, the gas cloud converts to a peculiar form of matter called Bose-Einstein condensate, as predicted by Albert Einstein and Satyendra Bose of India in 1927. No physical container can hold such ultra-cold matter, so Ketterle's team used magnets to keep the cloud in place. They then used a laser beam to make the gas cloud spin, a process Ketterle compares to stroking a ping-pong ball with a feather until it starts spinning. The spinning sodium gas cloud, whose volume was one-millionth of a cubic centimeter, much smaller than a raindrop, developed a regular pattern of more than 100 whirlpools.
State of the Art of Network Security Perspectives in Cloud Computing
NASA Astrophysics Data System (ADS)
Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang
Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. The customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are arguably reflected in the realization of cloud computing. We live in a connected society with a flood of information; without computers connected to the Internet, our activities and work of daily living would be impossible. Cloud computing is able to provide customers with custom-tailored application software features and user environments based on the customer's needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages; accordingly, users access their data and application software hosted on a remote system. As cloud computing systems are connected to the Internet, network security issues of cloud computing must be considered before real-world service. In this paper, a survey and issues on network security in cloud computing are discussed from the perspective of real-world service environments.
CATS Version 2 Aerosol Feature Detection and Applications for Data Assimilation
NASA Technical Reports Server (NTRS)
Nowottnick, E. P.; Yorks, J. E.; Selmer, P. A.; Palm, S. P.; Hlavka, D. L.; Pauly, R. M.; Ozog, S.; McGill, M. J.; Da Silva, A.
2017-01-01
The Cloud Aerosol Transport System (CATS) lidar has been operating onboard the International Space Station (ISS) since February 2015 and provides vertical observations of clouds and aerosols using total attenuated backscatter and depolarization measurements. From February to March 2015, CATS operated in Mode 1, providing backscatter and depolarization measurements at 532 and 1064 nm. CATS began operation in Mode 2 in March 2015, providing backscatter and depolarization measurements at 1064 nm, and has continued to operate in this mode to the present. CATS level 2 products are derived from these measurements, including feature detection, cloud aerosol discrimination, cloud and aerosol typing, and optical properties of cloud and aerosol layers. Here, we present changes to our level 2 algorithms, which were aimed at reducing several biases in our version 1 level 2 data products. These changes will be incorporated into our upcoming version 2 level 2 data release in summer 2017. Additionally, owing to the near real time (NRT) data downlinking capabilities of the ISS, CATS provides expedited NRT data products within 6 hours of observation time. This capability provides a unique opportunity for supporting field campaigns and for developing data assimilation techniques to improve simulated cloud and aerosol vertical distributions in models. We additionally present preliminary work toward assimilating CATS observations into the NASA Goddard Earth Observing System version 5 (GEOS-5) global atmospheric model and data assimilation system.
MPL-Net data products available at co-located AERONET sites and field experiment locations
NASA Astrophysics Data System (ADS)
Welton, E. J.; Campbell, J. R.; Berkoff, T. A.
2002-05-01
Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project was started at NASA Goddard Space Flight Center in 2000. The new project, MPL-Net, is a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair of all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at sites and that the data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational. Level 1.0 algorithms are used to process the raw MPL data into the form of range corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms have also been developed and are now operational. Level 1.5 algorithms are used to calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net web-site.
An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.
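The level 1.0 step described above, turning raw counts into range-corrected, uncalibrated signals, can be sketched as follows. The profile, background, and energy values are made up for illustration; this is not the MPL-Net production algorithm, just the standard range-squared correction idea:

```python
import numpy as np

def range_corrected_signal(raw_counts, r, background, energy):
    """Level-1.0-style sketch: subtract the background count rate and
    apply a range-squared correction, normalising by pulse energy.
    Returns an uncalibrated, range-corrected signal profile."""
    return (raw_counts - background) * r**2 / energy

# Toy profile: a backscatter term decaying as 1/r^2 plus a constant background.
r = np.array([1.0, 2.0, 4.0])          # range bins (km)
raw = 5.0 / r**2 + 0.5                 # simulated raw count rates
nrb = range_corrected_signal(raw, r, background=0.5, energy=1.0)
```

With the synthetic 1/r² profile, the correction recovers a flat signal, which is the sanity check usually applied to this step.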
Global Near Real-Time MODIS and Landsat Flood Mapping and Product Delivery
NASA Astrophysics Data System (ADS)
Policelli, F. S.; Slayback, D. A.; Tokay, M. M.; Brakenridge, G. R.
2014-12-01
Flooding is the most destructive, frequent, and costly natural disaster faced by modern society, and is increasing in frequency and damage (deaths, displacements, and financial costs) as populations increase and climate change generates more extreme weather events. When major flooding events occur, the disaster management community needs frequently updated and easily accessible information to better understand the extent of flooding and coordinate response efforts. With funding from NASA's Applied Sciences program, we developed and are now operating a near real-time global flood mapping system to help provide flood extent information within 24-48 hours of events. The principal element of the system applies a water detection algorithm to MODIS imagery, which is processed by the LANCE (Land Atmosphere Near real-time Capability for EOS) system at NASA Goddard within a few hours of satellite overpass. Using imagery from both the Terra (10:30 AM local time overpass) and Aqua (1:30 PM) platforms allows the system to deliver an initial daily assessment of flood extent by late afternoon, and more robust assessments after accumulating cloud-free imagery over several days. Cloud cover is the primary limitation in detecting surface water from MODIS imagery. Other issues include the relatively coarse scale of the MODIS imagery (250 meters) for some events, the difficulty of detecting flood waters in areas with continuous canopy cover, confusion of shadow (cloud or terrain) with water, and accurately identifying detected water as flood as opposed to normal water extent. We are working on improvements to address these limitations. We have also begun delivery of near real time water maps at 30 m resolution from Landsat imagery. 
Landsat coverage is not available globally each day; at best, using imagery from both operating platforms (Landsat 7 and 8) yields a repeat every 8 days. Nevertheless, it can provide useful higher-resolution data on water extent when a clear acquisition coincides with an active flood event. These data products are provided in various formats on our website, and also via live OGC (Open Geospatial Consortium) services and ArcGIS Online accessible web maps, allowing easy access from a variety of platforms, from desktop GIS software to web browsers on mobile phones. https://oas.gsfc.nasa.gov/floodmap
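The record above does not spell out the water detection algorithm itself. As an illustration of the general approach, low near-infrared reflectance combined with a NIR/red ratio test is a common way to flag surface water (water absorbs strongly in the near infrared). The band values and thresholds below are hypothetical, not the operational MODIS values:

```python
import numpy as np

def water_mask(b1, b2, ratio_thresh=0.7, nir_thresh=0.15):
    """Flag a pixel as water when NIR reflectance is low and the
    NIR/red ratio falls below a threshold.  b1 = red, b2 = NIR."""
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    return (b2 < nir_thresh) & (b2 / b1 < ratio_thresh)

b1 = np.array([0.10, 0.08, 0.20])   # red reflectance (illustrative)
b2 = np.array([0.05, 0.30, 0.18])   # NIR reflectance (illustrative)
mask = water_mask(b1, b2)           # only the first pixel is "water"
```

A real flood map would further compare the detected water against a reference normal-water-extent layer, as the abstract notes.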
Genes2WordCloud: a quick way to identify biological themes from gene lists and free text.
Baroukh, Caroline; Jenkins, Sherry L; Dannenfelser, Ruth; Ma'ayan, Avi
2011-10-13
Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
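The frequency-based weighting described above can be sketched in a few lines. The stop-word list, font-size range, and linear scaling are illustrative assumptions, not the actual Genes2WordCloud implementation (which is Java/Processing-based):

```python
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "in", "to", "a", "is"}

def word_weights(text, max_font=60, min_font=10):
    """Word-cloud-style weighting sketch: weight terms by frequency,
    then map frequencies linearly onto a font-size range."""
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    top = Counter(words).most_common()      # sorted by descending count
    hi, lo = top[0][1], top[-1][1]
    span = (hi - lo) or 1                   # avoid division by zero
    return {w: min_font + (c - lo) * (max_font - min_font) / span
            for w, c in top}

weights = word_weights("gene gene gene cloud cloud biology")
```

Here the most frequent term gets the largest font size and the rarest gets the smallest, which is the essence of the rendering step.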
A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted.
The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937
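The preference-aware evaluation can be illustrated with a simple weighted-sum ranking over normalised criteria. The provider names, criteria, and scores below are invented for the sketch and are not from the paper:

```python
# Rank candidate cloud configurations by a user-weighted sum of
# normalised criterion scores (higher is better for every criterion).
def rank_solutions(solutions, weights):
    def score(metrics):
        return sum(weights[k] * metrics[k] for k in weights)
    return sorted(solutions, key=lambda s: score(s["metrics"]), reverse=True)

solutions = [
    {"name": "provider-A", "metrics": {"compute": 0.9, "cost": 0.4, "sla": 0.8}},
    {"name": "provider-B", "metrics": {"compute": 0.6, "cost": 0.9, "sla": 0.7}},
]
# A cost-sensitive user weights cheapness highest.
weights = {"compute": 0.2, "cost": 0.6, "sla": 0.2}
ranked = rank_solutions(solutions, weights)
```

Changing the weight vector changes the recommendation, which is exactly the "preference-aware" behaviour the framework describes.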
Towards a Cloud Based Smart Traffic Management Framework
NASA Astrophysics Data System (ADS)
Rahimi, M. M.; Hakimpour, F.
2017-09-01
Traffic big data has brought many opportunities for traffic management applications. However, several challenges, such as the heterogeneity, storage, management, processing, and analysis of traffic big data, may hinder their efficient and real-time application. All these challenges call for a well-adapted distributed framework for smart traffic management that can efficiently handle big traffic data integration, indexing, query processing, mining, and analysis. In this paper, we present a novel, distributed, scalable, and efficient framework for traffic management applications. The proposed cloud computing based framework can answer the technical challenges of efficient and real-time storage, management, processing, and analysis of traffic big data. For evaluation of the framework, we used OpenStreetMap (OSM) real trajectories and a road network on a distributed environment. Our evaluation results indicate that the data import speed of this framework exceeds 8,000 records per second when the dataset size nears 5 million records. We also evaluated the performance of data retrieval in our proposed framework; the retrieval speed exceeds 15,000 records per second at the same dataset size. We have also evaluated the scalability and performance of our proposed framework using parallelisation of a critical pre-analysis in transportation applications. The results show that the proposed framework achieves considerable performance and efficiency in traffic management applications.
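The records-per-second figures quoted above correspond to a throughput measurement of roughly this shape. The `ingest` callback here stands in for the real distributed store, which this sketch does not model:

```python
import time

def import_throughput(records, ingest):
    """Measure bulk-import speed as records per second, the metric
    quoted in the evaluation above."""
    start = time.perf_counter()
    for rec in records:
        ingest(rec)                 # hand each record to the backend
    elapsed = time.perf_counter() - start
    return len(records) / elapsed if elapsed > 0 else float("inf")

store = []                          # trivial in-memory "backend"
rate = import_throughput(list(range(100_000)), store.append)
```

Retrieval throughput would be measured the same way, timing the query path instead of the ingest path.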
Simulation of a Real-Time Local Data Integration System over East-Central Florida
NASA Technical Reports Server (NTRS)
Case, Jonathan
1999-01-01
The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view from the southwest looking northeast, from an altitude just above the high haze layer. The streaks in the lower cloud leading towards the hotspot are visible. The upper haze layer is mostly flat, with notable small peaks that can be matched with features in the lower cloud. In reality, these areas may represent a continuous vertical cloud column. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size.
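The two-layer height model described above maps directly to a few lines of array code: haze height proportional to the 889 nm reflectivity, lower-cloud height proportional to the 727/756 nm reflectivity ratio, with vertical exaggeration. The reflectivity values and proportionality constants here are arbitrary stand-ins, not the calibrated values used for the released product:

```python
import numpy as np

def two_layer_heights(r727, r756, r889, exaggeration=25.0):
    """Sketch of the two-layer rendering model: returns (haze, cloud)
    height fields from per-pixel reflectivity mosaics."""
    r727 = np.asarray(r727, dtype=float)
    r756 = np.asarray(r756, dtype=float)
    r889 = np.asarray(r889, dtype=float)
    haze = exaggeration * r889           # upper haze ~ 889 nm reflectivity
    cloud = exaggeration * r727 / r756   # lower cloud ~ 727/756 ratio
    return haze, cloud

r727 = np.array([0.30, 0.60])
r756 = np.array([0.60, 0.60])
r889 = np.array([0.10, 0.40])
haze, cloud = two_layer_heights(r727, r756, r889)
```

In the actual visualization the two height fields would be rendered as separate surfaces with an arbitrary vertical offset between them.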
These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view to the southeast, from between the cloud layers and over the north center of the region. The tall white clouds in the lower cloud deck are probably much like large terrestrial thunderclouds. They may be regions where atmospheric water powers vertical convection over large horizontal distances. Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations. The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25. The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze. The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size.
These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-17
...) Real Property for the Development of Permanent Supportive Housing Facility in St. Cloud, MN AGENCY... St. Cloud Health Care System in Minnesota. The selected lessee will finance, design, develop... implementation of a business plan proposed by the Under Secretary for Health for applying the consideration under...
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
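The paper's volume deformation solver is not reproduced here. As a sketch of why a multigrid approach improves convergence, the classic two-grid cycle for a 1-D Poisson problem (weighted-Jacobi smoothing plus an exact coarse-grid correction) looks like this; the grid sizes and right-hand side are toy values:

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1-D Poisson operator (-u'') with zero Dirichlet BCs."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi: cheaply damps high-frequency error components."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve coarsely, prolong, smooth."""
    u = smooth(u, f, h)
    r = f - apply_A(u, h)
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
    nc, hc = rc.size, 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)                          # exact coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                          # coarse-grid points
    pad = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])                  # linear interpolation
    return smooth(u + e, f, h)

n = 7                       # fine-grid interior points
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)
for _ in range(5):
    u = two_grid_cycle(u, f, h)
residual = np.linalg.norm(f - apply_A(u, h))
```

The coarse solve removes the smooth error that Jacobi alone handles slowly, which is the same reason the multigrid approach shortens the deformation solver's convergence time in the VR system.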
Texturing of continuous LOD meshes with the hierarchical texture atlas
NASA Astrophysics Data System (ADS)
Birkholz, Hermann
2006-02-01
For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame-rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real-time. This enables rendering with a constant number of triangles and thus with constant frame-rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and accurate property mapping.
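A common way to drive such view-dependent approximation is to pick, per object or region, the coarsest level whose projected geometric error stays under a screen-space tolerance. The error values and the simple distance-based projection model below are made up for illustration, not taken from the paper:

```python
def select_lod(levels, distance, fov_scale=1000.0, tolerance=1.0):
    """`levels` maps LOD index -> object-space geometric error
    (index 0 = finest).  Projected error shrinks with viewer
    distance; return the coarsest acceptable level."""
    for lod in sorted(levels, reverse=True):      # try coarsest first
        projected = levels[lod] * fov_scale / distance
        if projected <= tolerance:                # under 1 pixel of error
            return lod
    return min(levels)                            # finest as fallback

errors = {0: 0.001, 1: 0.01, 2: 0.1}              # hypothetical LOD errors
```

Far-away geometry thus gets the coarse levels, keeping the triangle count (and frame-rate) roughly constant as the abstract describes.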
Web-Based Satellite Products Database for Meteorological and Climate Applications
NASA Technical Reports Server (NTRS)
Phan, Dung; Spangenberg, Douglas A.; Palikonda, Rabindra; Khaiyer, Mandana M.; Nordeen, Michele L.; Nguyen, Louis; Minnis, Patrick
2004-01-01
The need for ready access to satellite data and associated physical parameters such as cloud properties has been steadily growing. Air traffic management, weather forecasters, energy producers, and weather and climate researchers among others can utilize more satellite information than in the past. Thus, it is essential that such data are made available in near real-time and as archival products in an easy-access and user friendly environment. A host of Internet web sites currently provide a variety of satellite products for various applications. Each site has a unique contribution with appeal to a particular segment of the public and scientific community. This is no less true for the NASA Langley's Clouds and Radiation (NLCR) website (http://www-pm.larc.nasa.gov) that has been evolving over the past 10 years to support a variety of research projects. This website was originally developed to display cloud products derived from the Geostationary Operational Environmental Satellite (GOES) over the Southern Great Plains for the Atmospheric Radiation Measurement (ARM) Program. It has evolved into a site providing a comprehensive database of near real-time and historical satellite products used for meteorological, aviation, and climate studies. To encourage the user community to take advantage of the site, this paper summarizes the various products and projects supported by the website and discusses future options for new datasets.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
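The strong and weak scaling studies mentioned above reduce to two standard efficiency formulas: with strong scaling the problem size is fixed and the ideal time is T1/N; with weak scaling the per-node problem size is fixed and the ideal time stays constant. The render timings below are hypothetical:

```python
def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size; ideal n-node time is t1 / n."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Problem size grows with n; ideal n-node time equals t1."""
    return t1 / tn

# Hypothetical render times in seconds, for illustration only.
e_strong = strong_scaling_efficiency(t1=100.0, tn=15.0, n=8)
e_weak = weak_scaling_efficiency(t1=10.0, tn=12.5)
```

An efficiency near 1.0 means the renderer scales well; values well below 1.0 flag the communication or compositing overheads such studies are designed to expose.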
3D image display of fetal ultrasonic images by thin shell
NASA Astrophysics Data System (ADS)
Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen
1999-05-01
Due to its convenience and noninvasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real-time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures based on those detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.
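The thin-shell idea, keeping only voxels near the detected contour so that occluding structures are culled before rendering, can be sketched on a 2-D slice with a brute-force distance test. The real system works on 3-D volumes with contours from the deformable model; the grid and contour points below are toy values:

```python
import numpy as np

def thin_shell_mask(volume_shape, contour_points, thickness):
    """Keep only voxels within `thickness` of the detected contour,
    discarding unrelated structures before rendering."""
    zz, yy = np.meshgrid(*(np.arange(s) for s in volume_shape),
                         indexing="ij")
    mask = np.zeros(volume_shape, dtype=bool)
    for (cz, cy) in contour_points:
        mask |= np.hypot(zz - cz, yy - cy) <= thickness
    return mask

# A single contour point at the centre of a 5x5 slice, shell radius 1.
mask = thin_shell_mask((5, 5), [(2, 2)], thickness=1.0)
```

Rendering only the masked voxels is what makes the near real-time 3D display feasible on noisy ultrasound data.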
NASA Astrophysics Data System (ADS)
Herman, J. R.; Boccara, M.; Albers, S. C.
2017-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the DSCOVR satellite continuously views the sun-illuminated portion of the Earth with spectral coverage in the visible band, among others. Ideally, such a system would be able to provide a video with continuous coverage up to real time. However, due to limits in onboard storage, bandwidth, and antenna coverage on the ground, we can receive at most 20 images a day, separated by at least one hour. Also, the processing time to generate the visible image out of the separate RGB channels delays public image delivery by a day or two. Finally, occasional remote tuning of instruments can cause several-day periods where the imagery is completely missing. We propose a model-based method to fill these gaps and restore images lost in real-time processing. We combine two sets of algorithms. The first, called Blueturn, interpolates successive images while projecting them on a 3-D model of the Earth, all in real-time using the GPU. The second, called Simulated Weather Imagery (SWIM), makes EPIC-like images utilizing a ray-tracing model of scattering and absorption of sunlight by clouds, atmospheric gases, aerosols, and the land surface. Clouds are obtained from 3-D gridded analyses and forecasts using weather modeling systems such as the Local Analysis and Prediction System (LAPS) and the Flow-following, finite-volume Icosahedral Model (FIM). SWIM uses EPIC images to validate its models. Typical model grid spacing is about 20 km and is roughly commensurate with the EPIC imagery. Calculating one image per hour is enough for Blueturn to generate a smooth video. The synthetic images are designed to be visually realistic and aspire to be indistinguishable from the real ones. Resulting interframe transitions become seamless, and the real-time delay is reduced to 1 hour.
With Blueturn already available as a free online app, streaming EPIC images directly from NASA's public website, and with a SWIM server ensuring a constant interval between key images, this work extends what EPIC can deliver. Enriched by two years of actual service in space, this uniquely holistic view of the Earth can be continued at a high degree of fidelity, regardless of EPIC's limitations or interruptions.
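The temporal gap-filling idea can be illustrated with a simple cross-fade between two successive frames. The real Blueturn additionally reprojects each frame onto a 3-D Earth model on the GPU before blending, which this sketch omits:

```python
import numpy as np

def interpolate_frames(f0, f1, t):
    """Linear cross-fade between two successive frames at fraction
    t in [0, 1]. Blueturn performs this blend after reprojecting
    each frame onto a 3-D globe; here only the temporal blend is
    shown, on plain 2D arrays."""
    return (1.0 - t) * f0 + t * f1

f0 = np.zeros((2, 2))            # earlier frame
f1 = np.full((2, 2), 100.0)      # later frame, one hour apart
mid = interpolate_frames(f0, f1, 0.5)
print(mid[0, 0])                 # 50.0, halfway between the key frames
```

Stepping t smoothly from 0 to 1 between hourly key images is what turns the sparse EPIC sequence into a continuous video.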
Scalable and cost-effective NGS genotyping in the cloud.
Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P
2015-10-15
While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole-genome sequencing data can be accurately rendered into medically actionable reports within a time window of hours and at a cost in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole-genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new considerations for achieving clinical turnaround of whole-genome analysis, including strategic batching of individual genomes and efficient cluster resource configuration.
Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny
2017-01-01
The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. High-end hardware and software are demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first encoded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was thus successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
Investigation of cloud properties and atmospheric stability with MODIS
NASA Technical Reports Server (NTRS)
Menzel, Paul
1995-01-01
In the past six months several milestones were accomplished. The MODIS Airborne Simulator (MAS) was flown in a 50 channel configuration for the first time in January 1995 and the data were calibrated and validated; in the same field campaign the approach for validating MODIS radiances using the MAS and High resolution Interferometer Sounder (HIS) instruments was successfully tested on GOES-8. Cloud masks for two scenes (one winter and the other summer) of AVHRR local area coverage from the Gulf of Mexico to Canada were processed and forwarded to the SDST for MODIS Science Team investigation; a variety of surface and cloud scenes were evident. Beta software preparations continued with incorporation of the EOS SDP Toolkit. SCAR-C data was processed and presented at the biomass burning conference. Preparations for SCAR-B accelerated with generation of a home page for access to real time satellite data related to biomass burning; this will be available to the scientists in Brazil via internet on the World Wide Web. The CO2 cloud algorithm was compared to other algorithms that differ in their construction of clear radiance fields. The HIRS global cloud climatology was completed for six years. The MODIS science team meeting was attended by five of the UW scientists.
Accuracy assessment of building point clouds automatically generated from iphone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages, and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick or real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
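The point-to-point comparison used above can be sketched as a nearest-neighbour distance from each iPhone point to the TLS reference cloud. This assumes the two clouds are already registered; it uses brute force for clarity, whereas a k-d tree would be used at realistic point counts:

```python
import numpy as np

def cloud_to_cloud_distances(query_pts, ref_pts):
    """Distance from each query point to its nearest reference point.
    Brute-force O(N*M) via broadcasting; use a k-d tree for large clouds."""
    diffs = query_pts[:, None, :] - ref_pts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
qry = np.array([[0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
d = cloud_to_cloud_distances(qry, ref)
# Points beyond some distance threshold can be flagged as outliers,
# analogous to the 1.23% outlier fraction reported in the paper.
outliers = d > 1.0
print(d, outliers)
```

The mean of `d` over inlier points corresponds to the 0.11 m figure reported for the iPhone-versus-TLS comparison.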
MISR Near Real Time Products Available
Atmospheric Science Data Center
2014-09-04
The product suite includes products containing both Ellipsoid- and Terrain-projected radiance information, and the L2 Cloud Motion Vector (CMV) product. The NRT versions of MISR data products employ the same retrieval algorithms as standard production, yielding equivalent science data. The product is available in HDFEOS and BUFR format. For more information, please consult the MISR CMV DPS and documentation.
Innovations in Radiotherapy Technology.
Feain, I J; Court, L; Palta, J R; Beddar, S; Keall, P
2017-02-01
Many low- and middle-income countries, together with remote and low-socioeconomic populations within high-income countries, lack the resources and services to deal with cancer. The challenges in upgrading or introducing the necessary services are enormous, from screening and diagnosis to radiotherapy planning/treatment and quality assurance. There are severe shortages not only in equipment, but also in the capacity to train, recruit and retain staff, as well as in their ongoing professional development via effective international peer review and collaboration. Here we describe some examples of emerging technology innovations based on real-time software and cloud-based capabilities that have the potential to redress some of these shortfalls. These include: (i) automatic treatment planning to reduce physics staffing shortages, (ii) real-time image-guided adaptive radiotherapy technologies, (iii) fixed-beam radiotherapy treatment units that use patient (rather than gantry) rotation to reduce infrastructure costs and staff-to-patient ratios, (iv) cloud-based infrastructure programmes to facilitate international collaboration and quality assurance and (v) high dose rate mobile cobalt brachytherapy techniques for intraoperative radiotherapy. Copyright © 2016 The Royal College of Radiologists. All rights reserved.
In vivo real-time cavitation imaging in moving organs
NASA Astrophysics Data System (ADS)
Arnal, B.; Baranger, J.; Demene, C.; Tanter, M.; Pernot, M.
2017-02-01
The stochastic nature of cavitation calls for real-time, discriminative visualization of the cavitation cloud for the safe use of focused ultrasound therapy. Such visualization is sometimes possible with standard echography, but it strongly depends on the quality of the scanner and is hindered by the difficulty of discriminating bubbles from highly reflective tissue signals in different organs. A specific approach would thus permit clear validation of the cavitation position and activity. Detecting signals from a specific source with high sensitivity is a major problem in ultrasound imaging. Based on plane- or diverging-wave sonications, ultrafast ultrasonic imaging dramatically increases temporal resolution, and the larger amount of acquired data permits increased sensitivity in Doppler imaging. Here, we investigate a spatiotemporal singular value decomposition of ultrafast radiofrequency data to discriminate bubble clouds from tissue based on their different spatiotemporal motion and echogenicity during histotripsy. We introduce an automated procedure to determine the parameters of this filtering. The method clearly outperforms standard temporal filtering techniques, with a bubble-to-tissue contrast of at least 20 dB in vitro in a moving phantom and in vivo in porcine liver.
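The core of a spatiotemporal SVD filter can be sketched in a few lines: frames are stacked into a space × time (Casorati) matrix, and the first singular components, which capture the strong, slowly varying tissue signal, are discarded. The cut-off `n_cut` is a free parameter in this sketch, whereas the paper determines the filter parameters automatically:

```python
import numpy as np

def svd_cavitation_filter(frames, n_cut):
    """Spatiotemporal SVD filter: reshape an (nz, nx, nt) frame stack
    into a Casorati matrix (space x time), zero the first n_cut
    singular components (slow, echogenic tissue), and keep the rest
    (fast, weak bubble-cloud signal)."""
    nz, nx, nt = frames.shape
    casorati = frames.reshape(nz * nx, nt)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s_f = s.copy()
    s_f[:n_cut] = 0.0                      # remove the tissue subspace
    filtered = (u * s_f) @ vt              # u @ diag(s_f) @ vt
    return filtered.reshape(nz, nx, nt)

# Synthetic data: strong static "tissue" plus a weak fluctuating "bubble" term.
rng = np.random.default_rng(0)
tissue = np.ones((4, 4, 16)) * 100.0       # rank-1, slowly varying component
bubble = rng.normal(0.0, 1.0, (4, 4, 16))  # fast, low-energy component
out = svd_cavitation_filter(tissue + bubble, n_cut=1)
print(abs(out).mean())                     # far below the tissue level of 100
```

On real data the same operation is applied to beamformed ultrafast RF frames, and the rejected subspace corresponds to the moving-organ clutter.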
A lightweight distributed framework for computational offloading in mobile cloud computing.
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC, wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and with the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
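The trade-off at the heart of computational offloading can be sketched with a minimal decision rule: offload a component only when remote execution plus data transfer beats local execution. This is illustrative only; the framework described above layers centralized monitoring, high availability and on-demand access services on top of such a decision:

```python
def should_offload(local_time_s, cloud_exec_time_s, data_mb, bandwidth_mbps):
    """Offload when cloud execution time plus transfer time is less
    than local execution time. All parameters are illustrative;
    energy cost would enter a real decision the same way."""
    transfer_time_s = data_mb * 8.0 / bandwidth_mbps  # MB -> Mbit -> seconds
    return cloud_exec_time_s + transfer_time_s < local_time_s

# A heavy component with little state to ship: offloading wins.
print(should_offload(local_time_s=30.0, cloud_exec_time_s=3.0,
                     data_mb=10.0, bandwidth_mbps=20.0))   # True
# A cheap component: the transfer overhead makes offloading lose.
print(should_offload(local_time_s=5.0, cloud_exec_time_s=3.0,
                     data_mb=10.0, bandwidth_mbps=20.0))   # False
```

The 91% reduction in transmitted data reported above directly shrinks `transfer_time_s`, which is why the framework's turnaround times improve.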
Image formation simulation for computer-aided inspection planning of machine vision systems
NASA Astrophysics Data System (ADS)
Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz
2017-06-01
In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented, along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality offline rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on offline global illumination calculation. A focus of this work is the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real-world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results for synthetic and real images are presented.
NASA Technical Reports Server (NTRS)
Kalia, Subodh; Ganguly, Sangram; Li, Shuang; Nemani, Ramakrishna R.
2017-01-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. Given the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud shadow mask from geostationary satellite data is critical. The key to the success of most existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 full-disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.
Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes
Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.
2015-01-01
Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529
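The near-linear scalability claim is easy to illustrate with the reported per-sample figure. The linearity itself is the stated assumption; real AWS billing also depends on instance type, spot pricing and retried jobs:

```python
def variant_calling_cost(n_samples, cost_per_sample=7.33):
    """Near-linear cost model implied by the reported $7.33/sample
    figure for joint variant calling on the Amazon cloud. A sketch:
    failed/retried jobs (the gap between $18,590 and $21,805 above)
    are not modeled."""
    return n_samples * cost_per_sample

total = variant_calling_cost(2535)
print(f"${total:,.2f}")   # close to the $18,590 reported for completed jobs
```

Under this model, doubling the cohort roughly doubles the bill, which is the sense in which cloud variant calling scales to larger sequencing studies.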
Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang
2015-04-01
Integrating a visualization toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented on the feasibility of remote medical image reconstruction and interaction purely in the Web. We propose a server-centric method that does not require downloading large medical datasets to the client and avoids dependence on network transmission capacity and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction into the Web seamlessly and is applicable to lower-end computers and mobile devices. Finally, we tested the method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.
Monitor weather conditions for cloud seeding control. [Colorado River Basin
NASA Technical Reports Server (NTRS)
Kahan, A. M. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The near real-time DCS platform data transfer to the time-share computer is a working reality. Six stations are now being automatically monitored and displayed, with a system delay of 3 to 8 hours from time of data transmission to time of data accessibility on the computer. The DCS platform system has proven itself a valuable tool for near real-time monitoring of mountain precipitation. Data from Wolf Creek Pass were an important input in deciding when to suspend seeding operations to avoid exceeding suspension criteria in that area. The DCS platforms, as deployed in this investigation, have proven themselves to be reliable, weather-resistant systems for winter mountain environments in the southern Colorado mountains.
Realistic soft tissue deformation strategies for real time surgery simulation.
Shen, Yunhe; Zhou, Xiangmin; Zhang, Nan; Tamma, Kumar; Sweet, Robert
2008-01-01
A volume-preserving deformation method (VPDM) is developed to complement the mass-spring method (MSM) and improve its deformation quality when modeling soft tissue in surgical simulation. The method can also be implemented as a stand-alone model. The proposed VPDM satisfies Newton's laws of motion by obtaining the resultant vectors from an equilibrium condition. The proposed method has been tested in virtual surgery systems with haptic rendering demands.
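The MSM half of the approach can be sketched as Hooke's-law forces along mesh edges; the VPDM's volume-preservation constraint, which the paper layers on top, is omitted here:

```python
import numpy as np

def spring_forces(pos, springs, rest_len, k):
    """Internal forces of a mass-spring model: for each edge (i, j),
    Hooke's law pulls the nodes toward the rest length. pos is an
    (n, 3) array of node positions."""
    f = np.zeros_like(pos)
    for (i, j), l0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        force = k * (length - l0) * d / length   # along the edge direction
        f[i] += force                            # equal and opposite forces,
        f[j] -= force                            # consistent with Newton's laws
    return f

# Two nodes stretched to twice the rest length attract each other.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
f = spring_forces(pos, springs=[(0, 1)], rest_len=[1.0], k=10.0)
print(f[0], f[1])   # forces point toward each other along x
```

A volume-preserving term would add, per tetrahedron or closed surface, a corrective force that opposes the net volume change; that is the part the VPDM contributes.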
2017-03-01
It does so by using an optical lens to perform an inverse spatial Fourier transform on the up-converted RF signals, thereby rendering a real-time ... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam ... of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that ...
Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.
Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J
2011-11-01
To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.
2017-06-01
Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total losses and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources, or combining sources of both types, tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data are acquired from the U.S. Geological Survey (USGS) seismic network in California, and the social sensor data are based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake.
Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher resolution over a larger area, than either data source alone.
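The tweet-rate-to-intensity mapping can be sketched as a log-linear relation. The coefficients below are invented for illustration and stand in for the empirical fit the authors derived from the South Napa data:

```python
import math

def mmi_from_tweet_rate(tweets_per_min, a=2.0, b=1.5):
    """Hypothetical log-linear mapping from tweet rate to Modified
    Mercalli Intensity. The form (log of rate) reflects the idea
    that felt intensity grows with the order of magnitude of the
    social response; a and b are made-up placeholder coefficients,
    not the paper's calibrated values."""
    return a + b * math.log10(1.0 + tweets_per_min)

# With these toy coefficients, ~99 tweets/min maps to MMI 5.
print(round(mmi_from_tweet_rate(99.0), 2))
```

In the streaming implementation such a function would be evaluated per spatial cell and time window, then fused with the instrument-derived intensities into one map.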
Reexamination of the State of the Art Cloud Modeling Shows Real Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muehlbauer, Andreas D.; Grabowski, Wojciech W.; Malinowski, S. P.
Following up on an almost thirty-year history of International Cloud Modeling Workshops, which started with a meeting in Irsee, Germany in 1985, the 8th International Cloud Modeling Workshop was held in July 2012 in Warsaw, Poland. The workshop, hosted by the Institute of Geophysics at the University of Warsaw, was organized by Szymon Malinowski and his local team of students and co-chaired by Wojciech Grabowski (NCAR/MMM) and Andreas Muhlbauer (University of Washington). International Cloud Modeling Workshops have traditionally been held every four years, typically during the week before the International Conference on Clouds and Precipitation (ICCP). Rooted in the World Meteorological Organization's (WMO) weather modification program, the core objectives of the Cloud Modeling Workshop have centered on the numerical modeling of clouds, cloud microphysics, and the interactions between cloud microphysics and cloud dynamics. In particular, the goal of the workshop is to provide insight into the pertinent problems of today's state of the art in cloud modeling and to identify key deficiencies in the microphysical representation of clouds in numerical models and cloud parameterizations. In recent years, the workshop has increasingly shifted its focus toward modeling the interactions between aerosols and clouds, and has provided case studies to investigate both the effects of aerosols on clouds and precipitation and the impact of cloud and precipitation processes on aerosols. This time, about 60 (?) scientists from about 10 (?) different countries participated in the workshop and contributed with discussions, oral and poster presentations to the workshop's plenary and breakout sessions.
Several case leaders contributed to the workshop by setting up five observationally-based case studies covering a wide range of cloud types, namely, marine stratocumulus, mid-latitude squall lines, mid-latitude cirrus clouds, Arctic stratus, and winter-time orographic clouds and precipitation. Interested readers are encouraged to visit the workshop website at http://www.atmos.washington.edu/~andreasm/workshop2012/ and browse through the list of case studies. The web page also provides a detailed list of participants and the workshop agenda. Aside from contributed oral and poster presentations during the workshop's plenary sessions, parallel breakout sessions focused on presentations and discussions of the individual cases. A short summary and science highlights from each of the cases is presented below.
Near Real Time MISR Wind Observations for Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Mueller, K. J.; Protack, S.; Rheingans, B. E.; Hansen, E. G.; Jovanovic, V. M.; Baker, N.; Liu, J.; Val, S.
2014-12-01
The Multi-angle Imaging SpectroRadiometer (MISR) project, in association with the NASA Langley Atmospheric Science Data Center (ASDC), has this year adapted its original production software to generate near-real time (NRT) cloud-motion winds as well as radiance imagery from all nine MISR cameras. These products are made publicly available at the ASDC with a latency of less than 3 hours. Launched aboard the sun-synchronous Terra platform in 1999, the MISR instrument continues to acquire near-global, 275 m resolution, multi-angle imagery. During a single 7 minute overpass of any given area, MISR retrieves the stereoscopic height and horizontal motion of clouds from the multi-angle data, yielding meso-scale near-instantaneous wind vectors. The ongoing 15-year record of MISR height-resolved winds at 17.6 km resolution has been validated against independent data sources. Low-level winds dominate the sampling, and agree to within ±3 m/s of collocated GOES and other observations. Low-level wind observations are of particular interest to weather forecasting, where there is a dearth of observations suitable for assimilation, in part due to reliability concerns associated with winds whose heights are assigned by the infrared brightness temperature technique. MISR cloud heights, on the other hand, are generated from stereophotogrammetric pattern matching of visible radiances. MISR winds also address data gaps in the latitude bands between geostationary satellite coverage and polar orbiting instruments that obtain winds from multiple overpasses (e.g. MODIS). Observational impact studies conducted by the Naval Research Laboratory (NRL) and by the German Weather Service (Deutscher Wetterdienst) have both demonstrated forecast improvements when assimilating MISR winds. An impact assessment using the GEOS-5 system is currently in progress. To benefit air quality forecasts, the MISR project is currently investigating the feasibility of generating near-real time aerosol products.
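The basic geometry behind such a retrieval is simple: the stereo-tracked displacement of a cloud feature divided by the time separation of the views gives the horizontal wind. This sketch ignores the real retrieval's projection and height-assignment details:

```python
def wind_vector(dx_km, dy_km, dt_s):
    """Horizontal wind components (m/s) from a tracked cloud
    displacement (km) over the time separation dt_s (s) between
    camera views. A geometry-only sketch of a stereo wind retrieval."""
    return (dx_km * 1000.0 / dt_s, dy_km * 1000.0 / dt_s)

# A 2.1 km eastward displacement over the ~7-minute (420 s) overpass.
u, v = wind_vector(2.1, 0.0, 420.0)
print(round(u, 2), round(v, 2))   # about 5 m/s eastward
```

Comparing such vectors against collocated GOES winds is how the ±3 m/s agreement quoted above is established.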
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method has been proposed for representing large scenes by texturing a set of depth maps in a preprocessing step and stitching them at runtime. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from the image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify, for each view ray, which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.
2014-12-01
Cloud detection during nighttime poses a real problem to researchers because of a lack of optimum sensors that can specifically detect clouds during this time of the day. Hence, lidars and satellites are currently some of the instruments being utilized to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they act as barriers to thermal radiation from the Earth, reflecting this radiation back to the surface. This effectively lowers the rate of temperature decrease in the atmosphere at night. The objective of this study is to detect cloud occurrences at nighttime for the purpose of studying patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon Powershot A2300) is operated continuously to capture nighttime clouds. The camera is situated inside a weather-proof box with a glass cover and is placed on the rooftop of the Manila Observatory building to gather pictures of the sky every 5 min to observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds. In grayscale format, pixels with clouds have greater pixel values than pixels without clouds. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern pixels with clouds from pixels without clouds. Figs. 1a & 1b are sample unprocessed pictures of a cloudless night (May 22-23, 2014) and of cloudy skies (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence, while Fig. 1d shows greater cloud occurrence than Fig. 1c. These graphs show the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is ongoing. In regions where there is a dearth of scientific data, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
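The thresholding and occurrence statistics described above are simple to state in code. A minimal sketch (image I/O omitted; the nested-list pixel layout and the luma conversion are illustrative, but the 0.34-of-maximum threshold and the occurrence ratio follow the abstract):

```python
def to_gray(rgb):
    """Standard luma conversion of an (R, G, B) pixel."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def cloud_mask(image, frac=0.34, max_val=255):
    """True where the grayscale value exceeds `frac` of the maximum pixel value."""
    thresh = frac * max_val
    return [[to_gray(px) > thresh for px in row] for row in image]

def cloud_occurrence(masks):
    """Per-pixel ratio: number of frames with cloud / total number of frames."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[i][j] for m in masks) / n for j in range(cols)]
            for i in range(rows)]

frame = [[(200, 200, 200), (10, 10, 10)]]   # one bright, one dark pixel
mask = cloud_mask(frame)                    # [[True, False]]
```

Feeding every 5-min frame's mask into `cloud_occurrence` yields the per-pixel occurrence maps of Figs. 1c and 1d.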
Lightning studies using LDAR and LLP data
NASA Technical Reports Server (NTRS)
Forbes, Gregory S.
1993-01-01
This study intercompared lightning data from LDAR and LLP systems in order to learn more about the spatial relationships between thunderstorm electrical discharges aloft and lightning strikes to the surface. The ultimate goal of the study is to provide information that can be used to improve the process of real-time detection and warning of lightning by weather forecasters who issue lightning advisories. The Lightning Detection and Ranging (LDAR) System provides data on electrical discharges from thunderstorms that includes cloud-ground flashes as well as lightning aloft (within cloud, cloud-to-cloud, and sometimes emanating from cloud to clear air outside or above cloud). The Lightning Location and Protection (LLP) system detects primarily ground strikes from lightning. Thunderstorms typically produce LDAR signals aloft prior to the first ground strike, so that knowledge of preferred positions of ground strikes relative to the LDAR data pattern from a thunderstorm could allow advance estimates of enhanced ground strike threat. Studies described in the report examine the position of LLP-detected ground strikes relative to the LDAR data pattern from the thunderstorms. The report also describes other potential approaches to the use of LDAR data in the detection and forecasting of lightning ground strikes.
Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko
2010-12-01
In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays a rendered musculoskeletal human model on top of a live video image of the subject. The subject therefore has the impression of seeing the muscles, with tension information, through the cloth and skin. The main technical challenge lies in real-time estimation of muscle tension. Since existing algorithms using mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we develop a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system is applied to assist sports training, and user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation. Copyright © 2010 Elsevier Ltd. All rights reserved.
Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian
2016-11-01
Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
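The sparsity-promoting idea above can be sketched compactly: regress the turbine metric on candidate point-measurement locations with an L1 penalty, and instrument only the locations that receive nonzero weights. A minimal sketch using a hand-rolled iterative soft-thresholding (ISTA) solver rather than the authors' tooling; the synthetic matrix stands in for PIV velocity fields and the response for turbine data:

```python
import numpy as np

def lasso_ista(A, y, lam=0.1, iters=1000):
    """LASSO via iterative soft-thresholding (ISTA):
    minimizes 0.5*||A w - y||^2 + lam*||w||_1, yielding a sparse w."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        z = w - step * (A.T @ (A @ w - y))      # gradient step on data term
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 8))    # 200 PIV frames x 8 candidate ADV locations
y = 2.0 * A[:, 2] - 1.5 * A[:, 5]    # turbine metric driven by two locations
w = lasso_ista(A, y)
sensors = np.flatnonzero(np.abs(w) > 0.1)   # locations worth instrumenting
```

On this synthetic data the nonzero weights recover exactly the two informative locations, which is the behaviour that lets a handful of ADVs stand in for a full PIV field.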
Scavenging of black carbon in mixed phase clouds at the high alpine site Jungfraujoch
NASA Astrophysics Data System (ADS)
Cozic, J.; Verheggen, B.; Mertes, S.; Connolly, P.; Bower, K.; Petzold, A.; Baltensperger, U.; Weingartner, E.
2006-11-01
The scavenging of black carbon (BC) in liquid and mixed phase clouds was investigated during intensive experiments in winter 2004, summer 2004 and winter 2005 at the high alpine research station Jungfraujoch (3580 m a.s.l., Switzerland). Aerosol residuals were sampled behind two well characterized inlets: a total inlet, which collected cloud particles (drops and ice particles) as well as interstitial aerosol particles, and an interstitial inlet, which collected only interstitial (unactivated) aerosol particles. BC concentrations were measured behind each of these inlets along with the submicrometer aerosol number size distribution, from which a volume concentration was derived. These measurements were complemented by in-situ measurements of cloud microphysical parameters. BC was found to be scavenged into the cloud phase to the same extent as the bulk aerosol, which suggests that BC was covered with soluble material through aging processes, rendering it more hygroscopic. The scavenged fraction of BC (FScav,BC), defined as the fraction of BC that is incorporated into cloud droplets and ice crystals, decreases with increasing cloud ice mass fraction (IMF) from FScav,BC=60% in liquid phase clouds to FScav,BC~10% in mixed-phase clouds with IMF>0.2. This is explained by the evaporation of liquid droplets in the presence of ice crystals (Wegener-Bergeron-Findeisen process), releasing BC-containing cloud condensation nuclei back into the interstitial phase. In liquid clouds, the scavenged BC fraction is found to decrease with decreasing cloud liquid water content. The scavenged BC fraction is also found to decrease with increasing BC mass concentration, since there is increased competition for the available water vapour.
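Given the two inlets, the scavenged fraction follows directly from the paired BC measurements. A one-line sketch (the example concentrations are illustrative, not campaign data):

```python
def scavenged_fraction(total_inlet_bc, interstitial_bc):
    """F_scav,BC inferred from the two inlets: BC behind the total inlet
    minus interstitial (unactivated) BC, normalized by the total."""
    return (total_inlet_bc - interstitial_bc) / total_inlet_bc

# Illustrative concentrations (e.g. ng per cubic metre):
f = scavenged_fraction(100.0, 40.0)   # 0.6, i.e. 60% scavenged
```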
Remote sensing image segmentation based on Hadoop cloud platform
NASA Astrophysics Data System (ADS)
Li, Jie; Zhu, Lingling; Cao, Fubin
2018-01-01
To solve the problem that remote sensing image segmentation is slow and its real-time performance is poor, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method combining OpenCV and the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, this paper performs a segmentation experiment on a remote sensing image and implements the same Mean Shift segmentation in MATLAB for comparison on the same image. The experimental results show that, while maintaining good segmentation quality, the segmentation rate on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation.
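As a toy illustration of the Mean Shift principle only (the paper's pipeline runs OpenCV's implementation under Hadoop MapReduce), a 1-D flat-kernel mean shift on grayscale values, which migrates each pixel value to its local density peak so that each tonal cluster collapses to a single mode:

```python
def mean_shift_1d(value, data, bandwidth=20.0, iters=50):
    """Shift a grayscale value to the local density peak of its flat-kernel
    neighbourhood -- the core idea behind Mean Shift segmentation."""
    x = float(value)
    for _ in range(iters):
        window = [v for v in data if abs(v - x) <= bandwidth]
        x_new = sum(window) / len(window)   # mean of the neighbourhood
        if abs(x_new - x) < 1e-3:           # converged to a mode
            break
        x = x_new
    return x

pixels = [10, 12, 14, 200, 202, 205]   # two tonal clusters in a tiny "image"
segmented = [round(mean_shift_1d(p, pixels)) for p in pixels]
# each cluster collapses to its own mode: [12, 12, 12, 202, 202, 202]
```

In the MapReduce setting, each mapper would apply this kind of kernel iteration to its image split and the reducer would merge the labelled tiles.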
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj
2016-04-01
Presently, most existing software is desktop-based, designed to work on a single computer, which is a major limitation in many ways, ranging from limited processing and storage capacity to restricted accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is a fully collaborative geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, and works in a distributed computing environment; it creates a real-time multiuser collaboration platform, its programming language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users.
The application is a state-of-the-art cloud geospatial collaboration platform. The presented solution is a prototype and can be used as a foundation for developing any specialized cloud geospatial application. Further research will focus on distributing the cloud application across additional VMs and testing the scalability and availability of the services.
NASA Astrophysics Data System (ADS)
Miles, B.; Chepudira, K.; LaBar, W.
2017-12-01
The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomena types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) support, STA is also well suited to efficient real-time data publishing and discovery. All these attributes make STA potentially valuable for environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is highly modular, allowing adopters to customize it by replacing parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers such as Azure, Amazon and Google.
The scalable, flexible and cloud friendly architecture of Kinota makes it ideal for use in next-generation large-scale and high-resolution real-time environmental monitoring networks used in domains such as hydrology, geomorphology, and geophysics, as well as management applications such as flood early warning, and regulatory enforcement.
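To make the STA data model concrete: the entity and property names below (`phenomenonTime`, `result`, `Datastream`, `@iot.id`, and the `$orderby`/`$top`/`$filter` OData options) follow the OGC SensorThings specification, while the endpoint URL and Datastream id are hypothetical. A minimal sketch of building an Observation payload and a query:

```python
import json
from urllib.parse import quote

BASE = "https://example.org/SensorThings/v1.0"   # hypothetical Kinota endpoint

# An Observation linked to an existing Datastream, per the STA data model:
obs = {
    "phenomenonTime": "2017-06-01T12:00:00Z",
    "result": 2.31,                   # e.g. stream stage height in metres
    "Datastream": {"@iot.id": 42},    # link by reference to Datastream 42
}
payload = json.dumps(obs)             # body for a POST to {BASE}/Observations

# OData-style query: the five most recent results above a threshold
query = (BASE + "/Datastreams(42)/Observations?"
         "$orderby=phenomenonTime%20desc&$top=5&$filter="
         + quote("result gt 2.0"))
```

For high-rate feeds, the same Observation JSON can instead be published to the service's MQTT topic for that Datastream, which is the path suited to the 500 ms collection interval mentioned above.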
Condensed-phase biogenic-anthropogenic interactions with implications for cold cloud formation
Charnawskas, Joseph C.; Alpert, Peter A.; Lambe, Andrew; ...
2017-01-24
Anthropogenic and biogenic gas emissions contribute to the formation of secondary organic aerosol (SOA). When present, soot particles from fossil-fuel combustion can acquire a coating of SOA. We investigate SOA-soot biogenic-anthropogenic interactions and their impact on ice nucleation in relation to the particles' organic phase state. SOA particles were generated from the OH oxidation of naphthalene, α-pinene, longifolene, or isoprene, with or without the presence of sulfate or soot particles. Corresponding particle glass transition (Tg) and full deliquescence relative humidity (FDRH) were estimated by a numerical diffusion model. Longifolene SOA particles are solid-like, and all biogenic SOA sulfate mixtures exhibit a core-shell configuration (i.e. a sulfate-rich core coated with SOA). Biogenic SOA with or without sulfate formed ice at conditions expected for homogeneous ice nucleation, in agreement with respective Tg and FDRH. α-pinene SOA coated soot particles nucleated ice above the homogeneous freezing temperature with soot acting as ice nuclei (IN). At lower temperatures the α-pinene SOA coating can be semisolid, inducing ice nucleation. Naphthalene SOA coated soot particles acted as IN above and below the homogeneous freezing limit, which can be explained by the presence of a highly viscous SOA phase. Our results suggest that biogenic SOA does not play a significant role in mixed-phase cloud formation, and the presence of sulfate renders this even less likely. Furthermore, anthropogenic SOA may have an enhancing effect on cloud glaciation under mixed-phase and cirrus cloud conditions compared to biogenic SOA that dominated during preindustrial times or in pristine areas.
A secure EHR system based on hybrid clouds.
Chen, Yu-Yi; Lu, Jun-Chao; Jan, Jinn-Ke
2012-10-01
Application services rendering remote medical services and electronic health records (EHRs) have become a hot topic, stimulating increased interest in this subject in recent years. Information and communication technologies have been applied to the medical services and healthcare area for a number of years to resolve problems in medical management. Sharing EHR information can provide professional medical programs with consultancy, evaluation, and tracing services, and can certainly improve access to medical services and medical information for the public at remote sites. With the widespread use of EHRs, building a secure EHR sharing environment has attracted a lot of attention in both the healthcare industry and the academic community. The cloud computing paradigm is one of the popular health IT infrastructures for facilitating EHR sharing and EHR integration. In this paper, we propose an EHR sharing and integration system in healthcare clouds and analyze the arising security and privacy issues in access and management of EHRs.
Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services
NASA Astrophysics Data System (ADS)
Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.
Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav's algorithms for processing conjunction data messages (CDMs) outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform is discussed, along with the trials and tribulations involved. Information is shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) maneuver planning; b) parallel processing; c) Monte Carlo simulations; d) optimization algorithms; e) software application development/integration into the Google Cloud Platform. 2. Compute Engine processing: a) App Engine automated processing; b) performance testing and performance scalability; c) Cloud MySQL databases and database scalability; d) cloud data storage; e) redundancy and availability.
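The abstract does not detail its Monte Carlo formulation, so the following is an illustrative sketch only of the kind of computation such simulations perform: a toy 2-D encounter-plane collision-probability estimate, with all numbers hypothetical:

```python
import math
import random

def collision_probability(miss_mean, sigma, hard_body_radius, n=100_000, seed=1):
    """Toy Monte Carlo Pc in the 2-D encounter plane: sample relative-position
    errors around the nominal miss distance and count samples falling inside
    the combined hard-body radius. All inputs in metres (hypothetical values)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        dx = rng.gauss(miss_mean[0], sigma[0])
        dy = rng.gauss(miss_mean[1], sigma[1])
        if math.hypot(dx, dy) < hard_body_radius:
            hits += 1
    return hits / n

# 300 m nominal miss, (200, 100) m position uncertainty, 20 m hard body:
pc = collision_probability((300.0, 0.0), (200.0, 100.0), 20.0)
```

Runs like this are embarrassingly parallel, which is why they map naturally onto the on-demand Compute Engine instances described above.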
Measurement needs guided by synthetic radar scans in high-resolution model output
NASA Astrophysics Data System (ADS)
Varble, A.; Nesbitt, S. W.; Borque, P.
2017-12-01
Microphysical and dynamical process interactions within deep convective clouds are not well understood, partly because measurement strategies often focus on statistics of cloud state rather than cloud processes. While processes cannot be directly measured, they can be inferred with sufficiently frequent and detailed scanning radar measurements focused on the life cycle of individual cloud regions. This is a primary goal of the 2018-19 DOE ARM Cloud, Aerosol, and Complex Terrain Interactions (CACTI) and NSF Remote sensing of Electrification, Lightning, And Mesoscale/microscale Processes with Adaptive Ground Observations (RELAMPAGO) field campaigns in central Argentina, where orographic deep convective initiation is frequent, with some high-impact systems growing into the tallest and largest in the world. An array of fixed and mobile scanning multi-wavelength dual-polarization radars will be coupled with surface observations, sounding systems, multi-wavelength vertical profilers, and aircraft in situ measurements to characterize convective cloud life cycles and their relationship with environmental conditions. While detailed cloud processes are an observational target, the radar scan patterns that are most ideal for observing them are unclear. They depend on the locations and scales of key microphysical and dynamical processes operating within the cloud. High-resolution simulations of clouds, while imperfect, can provide information on these locations and scales that guides radar measurement needs. Radar locations are set in the model domain based on planned experiment locations, and simulated orographic deep convective initiation and upscale growth are sampled using a number of different scans involving RHIs or PPIs with predefined elevation and azimuthal angles that approximately conform with radar range and beam width specifications.
Each full scan pattern is applied to output at single model time steps, with time step intervals that depend on the length of time required to complete each scan in the real world. The ability of different scans to detect key processes within the convective cloud life cycle is examined in connection with previous and subsequent dynamical and microphysical transitions. This work will guide the strategic scan patterns to be used during CACTI and RELAMPAGO.
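Sampling model output along RHI and PPI sweeps requires the scan-geometry bookkeeping sketched below: converting a gate's (range, azimuth, elevation) to Cartesian coordinates in the model domain. This simplified version neglects beam refraction and Earth curvature (operational codes typically use the 4/3-Earth model); all scan angles are illustrative:

```python
import math

def beam_sample(r_km, az_deg, el_deg):
    """Cartesian position (x east, y north, z up; km) of a radar gate at
    slant range r_km, azimuth az_deg (clockwise from north) and elevation
    el_deg, for a radar at the origin. Refraction and Earth curvature are
    neglected -- a short-range simplification."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    ground = r_km * math.cos(el)            # ground-range projection
    return (ground * math.sin(az), ground * math.cos(az), r_km * math.sin(el))

# A coarse RHI: fixed azimuth, sweep elevation angles
rhi = [beam_sample(30.0, 90.0, el) for el in range(0, 46, 5)]
# A coarse PPI ring: fixed elevation, sweep azimuth angles
ppi = [beam_sample(30.0, az, 2.0) for az in range(0, 360, 30)]
```

Interpolating model fields to these gate positions at successive output times yields the synthetic scans against which candidate strategies can be compared.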
Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan; Fisher, Ward; Yoksas, Tom
2015-04-01
Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high expectations from students who have grown up with smartphones and tablets. These changes are upending traditional approaches to accessing and using data and software. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable in the form of downloadable Unidata-in-a-box virtual images, allowing their use in researchers' own cloud-based computing environments.
In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our ongoing efforts to deploy a suite of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of the TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer (IDV) visualization tool.
Toward a web-based real-time radiation treatment planning system in a cloud computing environment.
Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei
2013-09-21
To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied to evaluate the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing.
The resultant plans from cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.
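The paper's TVR optimizer is distributed over the CCE; as a toy 1-D analogue only (not the authors' implementation), gradient descent on a smoothed total-variation objective shows how TVR drives a noisy profile toward the piecewise-constant form that makes fluence maps deliverable:

```python
import numpy as np

def tv_denoise(y, lam=0.3, step=0.1, iters=2000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    with d_i = x[i+1] - x[i]: a smoothed total-variation (TVR) objective
    that suppresses small ripples while preserving large jumps."""
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # smoothed subgradient of |d_i|
        g = x - y                      # data-fidelity gradient
        g[:-1] -= lam * w              # each d_i pulls on its two endpoints
        g[1:] += lam * w
        x -= step * g
    return x

noisy = np.array([1.0, 1.2, 0.9, 1.1, 3.0, 3.1, 2.9, 3.2])  # toy fluence profile
flat = tv_denoise(noisy)   # ripples flattened, the large step preserved
```

The result has lower total variation than the input while the big intensity step survives, which is the property that lets the optimized fluence maps be segmented into a small number of deliverable apertures.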
Toward a web-based real-time radiation treatment planning system in a cloud computing environment
NASA Astrophysics Data System (ADS)
Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei
2013-09-01
To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ-specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2), with an m2.xlarge master node (17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform), is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the Simple Storage Service (S3). Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening-filter-free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. The CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing.
The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.
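The TVR-based intensity modulation step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it is a toy subgradient-descent solver for a 1-D total-variation-regularized fluence problem, and the function name, step size, and iteration count are all hypothetical choices.

```python
import numpy as np

def tvr_fluence(D, d_target, lam=0.1, step=1e-3, iters=2000):
    """Minimize ||D w - d_target||^2 + lam * TV(w) by subgradient descent.

    D        : (n_voxels, n_beamlets) beamlet dose kernel matrix
    d_target : (n_voxels,) prescribed dose
    Returns a non-negative, approximately piecewise-constant fluence map w,
    mirroring the role of TVR in producing piecewise constant fluence maps.
    """
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ w - d_target)            # data-fidelity gradient
        # Subgradient of TV(w) = sum_i |w_i - w_{i-1}|:
        tv_sub = np.sign(np.diff(w, prepend=w[:1]))       # sign(w_i - w_{i-1})
        tv_sub -= np.sign(np.diff(w, append=w[-1:]))      # -sign(w_{i+1} - w_i)
        w -= step * (grad + lam * tv_sub)
        np.maximum(w, 0.0, out=w)                         # fluence is non-negative
    return w
```

In the paper's setting each beam direction's fluence map is optimized in parallel over the CCE; the sketch above corresponds to one such sub-problem.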
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Fan; Kollias, Pavlos; Shaw, Raymond A.
Cloud droplet size distributions (CDSDs), which are related to cloud albedo and lifetime, are usually broader in warm clouds than predicted from adiabatic parcel calculations. We investigate a mechanism for the CDSD broadening using a Lagrangian bin-microphysics cloud parcel model that considers the condensational growth of cloud droplets formed on polydisperse, sub-micrometer aerosols in an adiabatic cloud parcel that undergoes vertical oscillations, such as those due to cloud circulations or turbulence. Results show that the CDSD can be broadened during condensational growth as a result of Ostwald ripening amplified by droplet deactivation and reactivation, which is consistent with Korolev (1995). The relative roles of the solute effect, curvature effect, deactivation and reactivation on CDSD broadening are investigated. Deactivation of smaller cloud droplets, which is due to the combination of curvature and solute effects in the downdraft region, enhances the growth of larger cloud droplets and thus contributes particles to the larger size end of the CDSD. Droplet reactivation, which occurs in the updraft region, contributes particles to the smaller size end of the CDSD. In addition, we find that growth of the largest cloud droplets strongly depends on the residence time of the cloud droplets in the cloud rather than the magnitude of local variability in the supersaturation fluctuation. This is because the environmental saturation ratio is strongly buffered by smaller cloud droplets. Two necessary conditions for this CDSD broadening, which generally occur in the atmosphere, are: (1) droplets form on polydisperse aerosols of varying hygroscopicity and (2) the cloud parcel experiences upward and downward motions. Therefore we expect that this mechanism for CDSD broadening is possible in real clouds. Our results also suggest it is important to consider both curvature and solute effects before and after cloud droplet activation in a cloud model.
The importance of this mechanism for cloud properties, compared with other broadening mechanisms, should be investigated through in-situ measurements and 3-D dynamic models.
Yang, Fan; Kollias, Pavlos; Shaw, Raymond A.; ...
2017-12-06
NASA Astrophysics Data System (ADS)
Yang, Fan; Kollias, Pavlos; Shaw, Raymond A.; Vogelmann, Andrew M.
2018-05-01
Cloud droplet size distributions (CDSDs), which are related to cloud albedo and rain formation, are usually broader in warm clouds than predicted from adiabatic parcel calculations. We investigate a mechanism for the CDSD broadening using a moving-size-grid cloud parcel model that considers the condensational growth of cloud droplets formed on polydisperse, submicrometer aerosols in an adiabatic cloud parcel that undergoes vertical oscillations, such as those due to cloud circulations or turbulence. Results show that the CDSD can be broadened during condensational growth as a result of Ostwald ripening amplified by droplet deactivation and reactivation, which is consistent with early work. The relative roles of the solute effect, curvature effect, deactivation and reactivation on CDSD broadening are investigated. Deactivation of smaller cloud droplets, which is due to the combination of curvature and solute effects in the downdraft region, enhances the growth of larger cloud droplets and thus contributes particles to the larger size end of the CDSD. Droplet reactivation, which occurs in the updraft region, contributes particles to the smaller size end of the CDSD. In addition, we find that growth of the largest cloud droplets strongly depends on the residence time of the cloud droplets in the cloud rather than the magnitude of local variability in the supersaturation fluctuation. This is because the environmental saturation ratio is strongly buffered by numerous smaller cloud droplets. Two necessary conditions for this CDSD broadening, which generally occur in the atmosphere, are as follows: (1) droplets form on aerosols of different sizes, and (2) the cloud parcel experiences upward and downward motions. Therefore we expect that this mechanism for CDSD broadening is possible in real clouds. Our results also suggest it is important to consider both curvature and solute effects before and after cloud droplet activation in a cloud model.
The importance of this mechanism for cloud properties, compared with other broadening mechanisms, should be investigated through in situ measurements and 3-D dynamic models.
Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur
2012-01-01
One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1 and Q2 usually are in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems during their life cycle. The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, the regional-scale model with very high resolution, the NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency. The use of multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
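The DVFS trade-off described above (lower supply voltage cuts dynamic power at the cost of longer execution) can be sketched with a toy cost model. The operating points, function names, and the simple additive makespan proxy below are illustrative assumptions, not the paper's actual scheduler or fitness function.

```python
# Hypothetical DVFS operating points: (supply voltage V, relative frequency f).
LEVELS = [(1.2, 1.0), (1.0, 0.8), (0.8, 0.6)]

def task_cost(cycles, level, capacitance=1.0):
    """Energy and time for one task at a given DVFS level.

    Dynamic power ~ C * V^2 * f and time = cycles / f, so the dynamic
    energy per task is C * V^2 * cycles: lower voltage saves energy,
    lower frequency stretches the schedule.
    """
    v, f = LEVELS[level]
    time = cycles / f
    energy = capacitance * v ** 2 * cycles
    return energy, time

def schedule_cost(tasks, assignment):
    """Bi-objective cost of a level assignment: (total energy, total time).

    A multi-objective optimizer such as the paper's hybrid PSO would search
    over `assignment` vectors to trade these two objectives off.
    """
    energies, times = zip(*(task_cost(c, lvl) for c, lvl in zip(tasks, assignment)))
    return sum(energies), sum(times)
```

For example, running every task at the lowest voltage level reduces total energy but increases the time objective relative to the highest level, which is exactly the compromise the abstract refers to.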
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing has been growing fast and helps to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specification of physical machines and user requests in the cloud leads to problems such as an energy-performance trade-off and large power consumption, so that profits are decreased. To guarantee the quality of service of users' tasks and to improve energy efficiency, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines on overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, the closer the simulation conditions are to a real environment, the better our method performs: it saves as much as 14% more energy, and the number of migrations and the simulation time are significantly reduced compared with previous works.
Minoshima, Masafumi; Kikuchi, Kazuya
Fluorescent molecules are widely used as tools to directly visualize target biomolecules in vivo. Fluorescent probes have the advantage that a desired function can be built in through rational design. Fluorescent probes for in vivo bone imaging must be delivered to bone tissue upon administration. Recently, a fluorescent probe for detecting osteoclast activity was developed. This probe combines acid-sensitive fluorescence, specific delivery to bone tissue, and durability against laser irradiation, which enabled real-time intravital imaging of bone-resorbing osteoclasts over a long period of time.
ERIC Educational Resources Information Center
Bryson, Linda
2004-01-01
This article describes one fifth grade's participation in NASA's S'COOL (Students' Cloud Observations On-Line) Project, making cloud observations, reporting them online, exploring weather concepts, and gleaning some of the things involved in authentic scientific research. S'COOL is part of a real scientific study of the effect of clouds on…
Overshooting cloud top, variation of tropopause and severe storm formation
NASA Technical Reports Server (NTRS)
Hung, R. J.; Smith, R. E.
1984-01-01
The development of severe multicell thunderstorms leading to the touchdown of six tornadoes near Pampa, TX, on May 19-20, 1982, is characterized in detail on the basis of weather maps, rawinsonde data, and radar summaries, and the results are compared with GOES rapid-scan IR images. The multicell storm cloud is shown to have formed beginning at 1945 GMT at the point of highest horizontal moisture convergence and lowest tropopause height, to have penetrated the tropopause at 2130 GMT, reaching a maximum altitude and a cloud-top black-body temperature 9 C lower than the tropopause temperature at 2245 GMT, and to have collapsed about 20 min later, when the first tornado touched down. The value of the real-time vertical profiles provided by satellite images in predicting which severe storms will produce tornadoes or other violent phenomena is stressed.
A Real-Time Wireless Sweat Rate Measurement System for Physical Activity Monitoring.
Brueck, Andrew; Iftekhar, Tashfin; Stannard, Alicja B; Yelamarthi, Kumar; Kaya, Tolga
2018-02-10
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being the development of a real-time hydration monitor that utilizes sweat. The contents of sweat have been known for decades; sweat provides significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as sweat rate alters the concentration of the sweat constituents and ultimately affects the accuracy of hydration detection. Towards this goal, a calorimetry-based flow-rate detection system was built and tested to determine sweat rate in real time. The proposed sweat rate monitoring system has been validated through both controlled lab experiments (syringe pump) and human trials. An Internet of Things (IoT) platform was embedded with the sensor, using a Simblee board and a Raspberry Pi. The overall prototype is capable of sending sweat rate information in real time to either a smartphone or directly to the cloud. Based on a proven theoretical concept, our overall system implementation features a pioneering device that can truly measure the rate of sweat in real time, which was tested and validated on human subjects. Our realization of the real-time sweat rate watch is capable of detecting sweat rates as low as 0.15 µL/min/cm², with an average error of 18% compared to manual sweat rate readings.
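The calorimetric flow-sensing principle behind such a system can be sketched from the steady-state heat balance P = ρ·c·Q·ΔT: the heat a small heater deposits in the moving fluid fixes the relation between temperature rise and volumetric flow. The function below is a hypothetical illustration with assumed units and names, not the authors' sensor firmware or calibration.

```python
def sweat_flow_ul_per_min(heater_power_mw, delta_t_c, rho=1.0, c=4.186):
    """Estimate volumetric flow from a calorimetric sensor reading.

    Steady-state heat balance: P = rho * c * Q * dT  =>  Q = P / (rho * c * dT).
    heater_power_mw : heater power absorbed by the fluid, in mW (= mJ/s)
    delta_t_c       : temperature rise of the fluid, in degrees C
    rho             : fluid density in mg/uL (~1 for sweat)
    c               : specific heat in mJ/(mg*C) (~4.186 for water)
    Returns flow in uL/min.
    """
    q_ul_per_s = heater_power_mw / (rho * c * delta_t_c)  # uL/s
    return q_ul_per_s * 60.0
```

In practice the measured ΔT is small and noisy at the µL/min rates quoted above, so a real device would average over time and calibrate against a syringe pump, as the authors did.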
NASA Astrophysics Data System (ADS)
di Girolamo, P.; Summa, D.; Lin, R.-F.; Maestri, T.; Rizzi, R.; Masiello, G.
2009-11-01
Raman lidar measurements performed in Potenza by the Raman lidar system BASIL in the presence of cirrus clouds are discussed. Measurements were performed on 6 September 2004 in the frame of the Italian phase of the EAQUATE Experiment. The major feature of BASIL is represented by its capability to perform high-resolution and accurate measurements of atmospheric temperature and water vapour, and consequently relative humidity, both in daytime and night-time, based on the application of the rotational and vibrational Raman lidar techniques in the UV. BASIL is also capable of providing measurements of the particle backscatter and extinction coefficient, and consequently the lidar ratio (at the time of these measurements, only at one wavelength), which are fundamental to infer geometrical and microphysical properties of clouds. A case study is discussed in order to assess the capability of Raman lidars to measure humidity in the presence of cirrus clouds, both below and inside the cloud. While air inside the cloud layers is observed to be always under-saturated with respect to water, both ice super-saturation and under-saturation conditions are found inside these clouds. Upper tropospheric moistening is observed below the lower cloud layer. The synergic use of the data derived from the ground-based Raman lidar and of spectral radiances measured by the NAST-I Airborne Spectrometer allows the determination of the temporal evolution of the atmospheric cooling/heating rates due to the presence of the cirrus cloud. Lidar measurements beneath the cirrus cloud layer have been interpreted using a 1-D cirrus cloud model with explicit microphysics. The 1-D simulations indicate that sedimentation-moistening has contributed significantly to the moist anomaly, but other mechanisms are also contributing. This result supports the hypothesis that the observed mid-tropospheric humidification is a real feature which is strongly influenced by the sublimation of precipitating ice crystals.
Results illustrated in this study demonstrate that Raman lidars, like the one used in this study, can resolve the spatial and temporal scales required for the study of cirrus cloud microphysical processes and appear sensitive enough to reveal and quantify upper tropospheric humidification associated with cirrus cloud sublimation.
NASA Astrophysics Data System (ADS)
di Girolamo, P.; Summa, D.; Lin, R.-F.; Maestri, T.; Rizzi, R.; Masiello, G.
2009-07-01
Raman lidar measurements performed in Potenza by the Raman lidar system BASIL in the presence of cirrus clouds are discussed. Measurements were performed on 6 September 2004 in the frame of the Italian phase of the EAQUATE Experiment. The major feature of BASIL is represented by its capability to perform high-resolution and accurate measurements of atmospheric temperature and water vapour, and consequently relative humidity, both in daytime and night-time, based on the application of the rotational and vibrational Raman lidar techniques in the UV. BASIL is also capable of providing measurements of the particle backscatter and extinction coefficient, and consequently the lidar ratio (at the time of these measurements, only at one wavelength), which are fundamental to infer geometrical and microphysical properties of clouds. A case study is discussed in order to assess the capability of Raman lidars to measure humidity in the presence of cirrus clouds, both below and inside the cloud. While air inside the cloud layers is observed to be always under-saturated with respect to water, both ice super-saturation and under-saturation conditions are found inside these clouds. Upper tropospheric moistening is observed below the lower cloud layer. The synergic use of the data derived from the ground-based Raman lidar and of spectral radiances measured by the NAST-I Airborne Spectrometer allows the determination of the temporal evolution of the atmospheric cooling/heating rates due to the presence of the cirrus cloud anvil. Lidar measurements beneath the cirrus cloud layer have been interpreted using a 1-D cirrus cloud model with explicit microphysics. The 1-D simulations indicate that sedimentation-moistening has contributed significantly to the moist anomaly, but other mechanisms are also contributing. This result supports the hypothesis that the observed mid-tropospheric humidification is a real feature which is strongly influenced by the sublimation of precipitating ice crystals.
Results illustrated in this study demonstrate that Raman lidars, like the one used in this study, can resolve the spatial and temporal scales required for the study of cirrus cloud microphysical processes and appear sensitive enough to reveal and quantify upper tropospheric humidification associated with cirrus cloud sublimation.
CLaMS-Ice: Large-scale cirrus cloud simulations in comparison with observations
NASA Astrophysics Data System (ADS)
Costa, Anja; Rolf, Christian; Grooß, Jens-Uwe; Spichtinger, Peter; Afchine, Armin; Spelten, Nicole; Dreiling, Volker; Zöger, Martin; Krämer, Martina
2016-04-01
Cirrus clouds are an element of uncertainty in the climate system and have received increasing attention since the last IPCC reports. The interactions of the different freezing mechanisms, sedimentation rates, updraft velocity fluctuations and other factors that determine the formation and evolution of these clouds are still not fully understood. Thus, a reliable representation of cirrus clouds in models representing real atmospheric conditions is still a challenging task. At last year's EGU, Rolf et al. (2015) introduced the new large-scale microphysical cirrus cloud model CLaMS-Ice: based on trajectories calculated with CLaMS (McKenna et al., 2002 and Konopka et al., 2007), it simulates the development of cirrus clouds relying on the cirrus bulk model by Spichtinger and Gierens (2009). The qualitative agreement between CLaMS-Ice simulations and observations was demonstrated at that time. Now we present a detailed quantitative comparison between standard ECMWF products, CLaMS-Ice simulations, and in-situ measurements obtained during the ML-Cirrus campaign 2014. We discuss the agreement of the parameters temperature (observational data: BAHAMAS), relative humidity (SHARC), cloud occurrence, cloud particle concentration, ice water content and cloud particle radii (all NIXE-CAPS). Due to the precise trajectories based on ECMWF wind and temperature fields, CLaMS-Ice represents the cirrus cloud vertical and horizontal coverage more accurately than the ECMWF ice water content (IWC) fields. We demonstrate how CLaMS-Ice can be used to evaluate different input settings (e.g. amount of ice nuclei, freezing thresholds, sedimentation settings) that lead to cirrus clouds with the microphysical properties observed during ML-Cirrus (2014).
Context-aware distributed cloud computing using CloudScheduler
NASA Astrophysics Data System (ADS)
Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.
2017-10-01
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
THE INFLUENCE OF NONUNIFORM CLOUD COVER ON TRANSIT TRANSMISSION SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Line, Michael R.; Parmentier, Vivien, E-mail: mrline@ucsc.edu
2016-03-20
We model the impact of nonuniform cloud cover on transit transmission spectra. Patchy clouds exist in nearly every solar system atmosphere, in brown dwarfs, and in transiting exoplanets. Our major findings suggest that fractional cloud coverage can exactly mimic high mean molecular weight atmospheres and vice versa over certain wavelength regions, in particular over the Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) bandpass (1.1–1.7 μm). We also find that patchy cloud coverage exhibits a signature that is different from uniform global clouds. Furthermore, we explain analytically why the “patchy cloud-high mean molecular weight” degeneracy exists. We also explore the degeneracy of nonuniform cloud coverage in atmospheric retrievals on both synthetic and real planets. We find from retrievals on a synthetic solar composition hot Jupiter with patchy clouds and a cloud-free high mean molecular weight warm Neptune that both cloud-free high mean molecular weight atmospheres and partially cloudy atmospheres can explain the data equally well. Another key finding is that the HST WFC3 transit transmission spectra of two well-observed objects, the hot Jupiter HD 189733b and the warm Neptune HAT-P-11b, can be explained well by solar composition atmospheres with patchy clouds without the need to invoke high mean molecular weight or global clouds. The degeneracy between high molecular weight and solar composition partially cloudy atmospheres can be broken by observing the molecular Rayleigh scattering differences between the two. Furthermore, the signature of partially cloudy limbs also appears as a ∼100 ppm residual in the ingress and egress of the transit light curves, provided that the transit timing is known to seconds.
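The patchy-cloud degeneracy described above rests on a simple areal average: the observed transit depth is the cloud-fraction-weighted sum of the clear and cloudy contributions. The sketch below (function and variable names are illustrative, not from the paper) shows how a 50% cloud fraction halves the amplitude of spectral features, much as a higher mean molecular weight would damp them.

```python
import numpy as np

def patchy_spectrum(depth_clear, depth_cloudy, f_cloud):
    """Transit depth of a terminator that is partly cloudy.

    The observed spectrum is the areal average of the two limb states:
        D(lambda) = f * D_cloudy(lambda) + (1 - f) * D_clear(lambda).
    A gray cloud deck makes D_cloudy nearly flat, so increasing f damps
    the molecular features of D_clear without shifting their wavelengths.
    """
    return f_cloud * depth_cloudy + (1.0 - f_cloud) * depth_clear
```

Because the damping is achromatic for a gray deck, it can be confused with the smaller scale height of a high mean molecular weight atmosphere over a limited bandpass such as WFC3; the paper's point is that Rayleigh-scattering slopes and light-curve residuals can break that degeneracy.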
NASA Astrophysics Data System (ADS)
Lin, Qinhao; Zhang, Guohua; Peng, Long; Bi, Xinhui; Wang, Xinming; Brechtel, Fred J.; Li, Mei; Chen, Duohong; Peng, Ping'an; Sheng, Guoying; Zhou, Zhen
2017-07-01
To investigate how atmospheric aerosol particles interact with the chemical composition of cloud droplets, a ground-based counterflow virtual impactor (GCVI) coupled with a real-time single-particle aerosol mass spectrometer (SPAMS) was used to assess the chemical composition and mixing state of individual cloud residue particles in the Nanling Mountains (1690 m a.s.l.), southern China, in January 2016. The cloud residues were classified into nine particle types: aged elemental carbon (EC), potassium-rich (K-rich), amine, dust, Pb, Fe, organic carbon (OC), sodium-rich (Na-rich) and Other.
The largest fraction of the total cloud residues was the aged EC type (49.3 %), followed by the K-rich type (33.9 %). Abundant aged EC cloud residues that mixed internally with inorganic salts were found in air masses from northerly polluted areas. The number fraction (NF) of the K-rich cloud residues increased within southwesterly air masses from fire activities in Southeast Asia. When air masses changed from northerly polluted areas to southwesterly ocean and livestock areas, the amine particles increased from 0.2 to 15.1 % of the total cloud residues. The dust, Fe, Pb, Na-rich and OC particle types had a low contribution (0.5-4.1 %) to the total cloud residues. A higher fraction of nitrate (88-89 %) was found in the dust and Na-rich cloud residues, relative to sulfate (41-42 %) and ammonium (15-23 %). Nitrate intensity was higher in the cloud residues than in the ambient particles. Compared with non-activated particles, nitrate intensity decreased in all cloud residues except for the dust type. To our knowledge, this study is the first report of in situ observations of the chemical composition and mixing state of individual cloud residue particles in China.
EARLINET: potential operationality of a research network
NASA Astrophysics Data System (ADS)
Sicard, M.; D'Amico, G.; Comerón, A.; Mona, L.; Alados-Arboledas, L.; Amodeo, A.; Baars, H.; Baldasano, J. M.; Belegante, L.; Binietoglou, I.; Bravo-Aranda, J. A.; Fernández, A. J.; Fréville, P.; García-Vizcaíno, D.; Giunta, A.; Granados-Muñoz, M. J.; Guerrero-Rascado, J. L.; Hadjimitsis, D.; Haefele, A.; Hervo, M.; Iarlori, M.; Kokkalis, P.; Lange, D.; Mamouri, R. E.; Mattis, I.; Molero, F.; Montoux, N.; Muñoz, A.; Muñoz Porcar, C.; Navas-Guzmán, F.; Nicolae, D.; Nisantzi, A.; Papagiannopoulos, N.; Papayannis, A.; Pereira, S.; Preißler, J.; Pujadas, M.; Rizi, V.; Rocadenbosch, F.; Sellegri, K.; Simeonov, V.; Tsaknakis, G.; Wagner, F.; Pappalardo, G.
2015-11-01
In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June-17 July 2012), EARLINET organized and performed a controlled exercise of feasibility to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the single calculus chain (SCC) - the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products - was used. All stations sent measurements of 1 h duration in real time to the SCC server in a predefined netCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 and 79 % of the files sent to the SCC were successfully pre-processed and processed, respectively. Those percentages are quite large, taking into account that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention not only to the most critical parameters of the SCC product configuration and their possible optimal values, but also to the limitations inherent to the raw data. The continuous use of SCC direct and derived products in heterogeneous conditions is used to demonstrate two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to properly configure the SCC pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modeling, climate research and calibration/validation activities of spaceborne observations.
NASA Astrophysics Data System (ADS)
Tsuda, T.; Ito, N.; Takeda, Y.; Realini, E.; Shinbori, A.
2016-12-01
We employ the GNSS meteorology technique to measure precipitable water vapor (PWV) from the propagation delay of GNSS signals in the atmosphere. We installed a hyper-dense GNSS network using 15 receivers with a horizontal spacing of 1-2 km in Uji, Japan (Uji network). We also obtained precipitation with a rain gauge at a nearby operational weather station and rain cloud distributions with an X-band radar. We selected 40 days from April 2011 to March 2013 when considerable precipitation was detected. The difference in PWV within 10 km was 3-10 mm during heavy rain. We found that PWV increased 10-20 minutes before the passage of a rain cloud. The maximum value of PWV correlated well with the amount of precipitation on the ground. The variance of PWV between the GNSS sites was enhanced during heavy rain. For a future practical hyper-dense GNSS network system with many receivers, we consider using inexpensive single-frequency (SF) receivers. Because an SF receiver cannot eliminate the ionospheric delay by itself, we interpolate the delay by referring to the delay measured by nearby dual-frequency (DF) receivers. We investigated the ionospheric delay with the Uji network, taking advantage of the Quasi-Zenith Satellite System (QZSS), which gives signals at high elevation angles. During a travelling ionospheric disturbance (TID), a wavy structure with a horizontal scale of several tens of km was recognized. The ionospheric delay was compensated by linear and quadratic interpolation; the resulting error of PWV compared with the DF solution was about 1.50 mm in RMS. For real-time estimation of PWV, we used real-time satellite clock information corrected by GEONET. The difference in PWV between the real-time analysis and post-processing with the final orbit was 0.7 mm in RMS. We estimated that the overall error of PWV with a dense SF-receiver network on a real-time basis was 1.7 mm in RMS.
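The conversion from GNSS zenith wet delay (ZWD) to PWV that underlies such measurements can be sketched as follows. The constants are standard textbook values of the Askne and Nordius type; the abstract does not give the exact values or mean-temperature model the authors used, so this is an illustrative assumption.

```python
def pwv_from_zwd(zwd_mm, tm_kelvin=270.0):
    """Convert GNSS zenith wet delay to precipitable water vapor.

    PWV = Pi * ZWD, with the dimensionless conversion factor
        Pi = 1e6 / (rho_w * Rv * (k3 / Tm + k2')),
    where Tm is the weighted mean temperature of the atmosphere.
    Typical Pi is ~0.15, i.e. 100 mm of wet delay ~ 15 mm of PWV.
    """
    rho_w = 1000.0   # density of liquid water, kg/m^3
    rv = 461.5       # specific gas constant of water vapor, J/(kg K)
    k3 = 3.776e5     # refractivity constant, K^2/hPa
    k2p = 16.52      # refractivity constant k2', K/hPa
    # (k3/Tm + k2') is in K/hPa; multiply by 1e-2 to convert to K/Pa.
    pi = 1.0e6 / (rho_w * rv * (k3 / tm_kelvin + k2p) * 1.0e-2)
    return pi * zwd_mm
```

Because Pi varies only weakly with Tm, a few-kelvin error in the mean temperature changes PWV by well under a millimeter, which is consistent with the millimeter-level error budgets quoted above.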
Scavenging of black carbon in mixed phase clouds at the high alpine site Jungfraujoch
NASA Astrophysics Data System (ADS)
Cozic, J.; Verheggen, B.; Mertes, S.; Connolly, P.; Bower, K.; Petzold, A.; Baltensperger, U.; Weingartner, E.
2007-04-01
The scavenging of black carbon (BC) in liquid and mixed phase clouds was investigated during intensive experiments in winter 2004, summer 2004 and winter 2005 at the high alpine research station Jungfraujoch (3580 m a.s.l., Switzerland). Aerosol residuals were sampled behind two well characterized inlets: a total inlet, which collected cloud particles (droplets and ice particles) as well as interstitial (unactivated) aerosol particles, and an interstitial inlet, which collected only interstitial aerosol particles. BC concentrations were measured behind each of these inlets along with the submicrometer aerosol number size distribution, from which a volume concentration was derived. These measurements were complemented by in-situ measurements of cloud microphysical parameters. BC was found to be scavenged into the condensed phase to the same extent as the bulk aerosol, which suggests that BC was covered with soluble material through aging processes, rendering it more hygroscopic. The scavenged fraction of BC (FScav,BC), defined as the fraction of BC that is incorporated into cloud droplets and ice crystals, decreases with increasing cloud ice mass fraction (IMF) from FScav,BC=60% in liquid phase clouds to FScav,BC~5-10% in mixed-phase clouds with IMF>0.2. This can be explained by the evaporation of liquid droplets in the presence of ice crystals (Wegener-Bergeron-Findeisen process), releasing BC-containing cloud condensation nuclei back into the interstitial phase. In liquid clouds, the scavenged BC fraction is found to decrease with decreasing cloud liquid water content. The scavenged BC fraction is also found to decrease with increasing BC mass concentration, since there is increased competition for the available water vapour.
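The scavenged fraction follows directly from the two inlet measurements defined above; a minimal sketch, with illustrative (not measured) concentrations:

```python
def scavenged_fraction(bc_total, bc_interstitial):
    """Fraction of BC incorporated into cloud droplets/ice crystals.
    The total inlet samples activated + interstitial BC, while the
    interstitial inlet samples only the unactivated part, so the
    difference is the in-cloud (scavenged) BC."""
    return (bc_total - bc_interstitial) / bc_total

# Illustrative concentrations in ng m^-3 (assumed values, not campaign data)
print(scavenged_fraction(100.0, 40.0))  # 0.6, i.e. 60% scavenged
```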
NASA Astrophysics Data System (ADS)
Dietlicher, Remo; Neubauer, David; Lohmann, Ulrike
2018-04-01
A new scheme for stratiform cloud microphysics has been implemented in the ECHAM6-HAM2 general circulation model. It features a widely used description of cloud water with two categories for cloud droplets and raindrops. The unique aspect of the new scheme is the break with the traditional approach of describing cloud ice analogously. Here we parameterize cloud ice by a single category that predicts bulk particle properties (P3). This method has already been applied in a regional model and most recently also in the Community Atmosphere Model 5 (CAM5). A single cloud ice category does not rely on heuristic conversion rates from one category to another. It is therefore conceptually simpler and closer to first principles. This work shows that a single category is a viable approach to describing cloud ice in climate models. Prognostic representation of sedimentation is achieved by a nested approach for sub-stepping the cloud microphysics scheme. This yields good results in terms of accuracy and performance compared to simulations with high temporal resolution. Furthermore, the new scheme allows for competition between various cloud processes and is thus able to represent without bias the ice formation pathway from nucleation to growth by vapor deposition and collisions to sedimentation. Specific aspects of the P3 method are evaluated. We could not produce a purely stratiform cloud where rime growth dominates growth by vapor deposition, and conclude that the lack of appropriate conditions renders the prognostic parameters associated with the rime properties unnecessary. Limitations inherent in a single category are examined.
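The sub-stepping used to keep prognostic sedimentation stable can be illustrated with a first-order upwind toy scheme; this is a sketch of the general CFL sub-stepping technique, not the ECHAM6-HAM2 implementation, and the profile, fall speed, and grid spacing are assumed values:

```python
import numpy as np

def sediment(q, v, dz, dt):
    """Advance an ice mass mixing ratio profile q (index 0 = top) by
    sedimentation with constant fall speed v, sub-stepping so that each
    substep satisfies the CFL condition v*dt_sub/dz <= 1."""
    n_sub = max(1, int(np.ceil(v * dt / dz)))  # substeps needed for stability
    dt_sub = dt / n_sub
    q = q.copy()
    for _ in range(n_sub):
        flux = v * q                                   # downward flux per level
        q[1:] += dt_sub / dz * (flux[:-1] - flux[1:])  # gain from above, loss below
        q[0] -= dt_sub / dz * flux[0]
        # flux leaving the lowest level falls out as precipitation
    return q

q = np.array([0.0, 1.0, 0.0, 0.0])          # arbitrary initial pulse (kg/kg)
print(sediment(q, v=1.0, dz=100.0, dt=200.0))  # [0. 0. 0. 1.]
```

With dt = 200 s the single model time step would violate CFL (v*dt/dz = 2), so the scheme takes two substeps and the pulse correctly moves down two levels.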
Study on Diagnosing Three Dimensional Cloud Region
NASA Astrophysics Data System (ADS)
Cai, M., Jr.; Zhou, Y., Sr.
2017-12-01
Cloud mask and relative humidity (RH) data from CloudSat products from 2007 to 2008 are statistically analyzed to obtain the RH threshold between cloud and clear sky and its variation with height. A diagnosis method based on reanalysis data is proposed and applied to the three-dimensional cloud field diagnosis of a real case. The diagnosed cloud field was compared with satellite, radar, and other cloud and precipitation observations. The main results are as follows. 1. Cloud regions where the cloud mask exceeds 20 correspond well in space and time to regions of high relative humidity provided by the ECMWF-AUX product. Statistical analysis of the RH frequency distribution within and outside clouds indicates that the distribution of in-cloud RH at different heights is single-peaked, with the peak near an RH value of 100%. The local atmospheric environment affects the RH distribution outside clouds, which causes that distribution to vary between regions and heights. 2. The RH threshold and its vertical distribution used for cloud diagnosis were analyzed with the Threat Score method. The method was applied to a three-dimensional cloud diagnosis case study based on NCEP reanalysis data, and the diagnosed cloud field was compared with satellite, radar, and ground-based cloud and precipitation observations. It was found that RH gradients are very large around cloud regions, so the cloud area diagnosed by the RH threshold method is relatively stable. The diagnosed cloud area corresponds well to updraft regions. The diagnosed cloud and clear-sky distribution corresponds to the satellite TBB observations overall. The diagnosed cloud depth, i.e., the total number of cloud layers, is more consistent with the optical thickness and precipitation on the ground. The cloud vertical profile clearly reveals the relation between cloud vertical structure and the weather system. The diagnosed cloud distribution corresponds very well to ground-based cloud observations. 3. The method is improved by changing the vertical coordinate from altitude to temperature.
The results show that all five verification measures - the TS score for clear sky, the false-alarm rate, the missed-forecast rate, and especially the TS score for cloud regions and the overall accuracy - improve markedly. The RH threshold with a vertical distribution in temperature is therefore better than one in altitude. More tests and comparisons should be done to assess the diagnosis method.
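The RH-threshold diagnosis and its Threat Score verification can be sketched as follows; the RH values, thresholds, and observed mask below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

def diagnose_cloud(rh, rh_threshold):
    """Diagnose cloud where RH meets or exceeds a level-dependent threshold.
    rh: (nlev, ny, nx) field in percent; rh_threshold: (nlev,) profile."""
    return rh >= rh_threshold[:, None, None]

def threat_score(diagnosed, observed):
    """TS = hits / (hits + misses + false alarms) for boolean cloud masks."""
    hits = np.sum(diagnosed & observed)
    misses = np.sum(~diagnosed & observed)
    false_alarms = np.sum(diagnosed & ~observed)
    return hits / (hits + misses + false_alarms)

# Toy 2-level, 2x2 RH field (%) with hypothetical per-level thresholds
rh = np.array([[[99.0, 80.0], [101.0, 85.0]],
               [[90.0, 95.0], [70.0, 96.0]]])
thr = np.array([95.0, 94.0])
mask = diagnose_cloud(rh, thr)
obs = np.array([[[True, False], [True, False]],
                [[False, True], [False, False]]])  # "observed" cloud mask
print(threat_score(mask, obs))
```

Replacing the altitude index with a temperature coordinate, as in the improved method, only changes how the threshold profile `thr` is looked up, not the scoring.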
General purpose molecular dynamics simulations fully implemented on graphics processing units
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.
2008-05-01
Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides performance equivalent to that of a fast 30-processor-core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss the implications for the future.
Dynamical diffraction imaging (topography) with X-ray synchrotron radiation
NASA Technical Reports Server (NTRS)
Kuriyama, M.; Steiner, B. W.; Dobbyn, R. C.
1989-01-01
By contrast to electron microscopy, which yields information on the location of features in small regions of materials, X-ray diffraction imaging can portray minute deviations from perfect crystalline order over larger areas. Synchrotron radiation-based X-ray optics technology uses a highly parallel incident beam to eliminate ambiguities in the interpretation of image details; scattering phenomena previously unobserved are now readily detected. Synchrotron diffraction imaging renders high-resolution, real-time, in situ observations of materials under pertinent environmental conditions possible.
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate a real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus makes it possible to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
A Modeling Method of Fluttering Leaves Based on Point Cloud
NASA Astrophysics Data System (ADS)
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: the rotating fall, the rolling fall, and the screw-rolling fall. In addition, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Three dimensional Visualization of Jupiter's Equatorial Region
NASA Technical Reports Server (NTRS)
1997-01-01
Frames from a three dimensional visualization of Jupiter's equatorial region. The images used cover an area of 34,000 kilometers by 11,000 kilometers (about 21,100 by 6,800 miles) near an equatorial 'hotspot' similar to the site where the probe from NASA's Galileo spacecraft entered Jupiter's atmosphere on December 7th, 1995. These features are holes in the bright, reflective, equatorial cloud layer where warmer thermal emission from Jupiter's deep atmosphere can pass through. The circulation patterns observed here along with the composition measurements from the Galileo Probe suggest that dry air may be converging and sinking over these regions, maintaining their cloud-free appearance. The bright clouds to the right of the hotspot as well as the other bright features may be examples of upwelling of moist air and condensation.
This frame is a view from above and to the south of the visualized area, showing the entire model. The entire region is overlain by a thin, transparent haze. In places the haze is high and thick, especially to the east of (to the right of) the hotspot.

Galileo is the first spacecraft to image Jupiter in near-infrared light (which is invisible to the human eye) using three filters at 727, 756, and 889 nanometers (nm). Because light at these three wavelengths is absorbed at different altitudes by atmospheric methane, a comparison of the resulting images reveals information about the heights of clouds in Jupiter's atmosphere. This information can be visualized by rendering cloud surfaces with the appropriate height variations.

The visualization reduces Jupiter's true cloud structure to two layers. The height of a high haze layer is assumed to be proportional to the reflectivity of Jupiter at 889 nm. The height of a lower tropospheric cloud is assumed to be proportional to the reflectivity at 727 nm divided by that at 756 nm. This model is overly simplistic, but is based on more sophisticated studies of Jupiter's cloud structure. The upper and lower clouds are separated in the rendering by an arbitrary amount, and the height variations are exaggerated by a factor of 25.

The lower cloud is colored using the same false color scheme used in previously released image products, assigning red, green, and blue to the 756, 727, and 889 nanometer mosaics, respectively. Light bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the lower cloud with an overlying thin haze.

The images used cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees west. The smallest resolved features are tens of kilometers in size.
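The two-layer height model described above can be sketched numerically; the proportionality handling, the layer separation, and the reflectivity values below are illustrative assumptions, not the constants used for the released visualization:

```python
def two_layer_heights(r727, r756, r889, separation=50.0, exaggeration=25.0):
    """Relative heights for the simple two-layer model: the haze height is
    proportional to the 889 nm reflectivity, and the tropospheric cloud
    height is proportional to the 727/756 nm reflectivity ratio. The layer
    separation and the factor-of-25 exaggeration are arbitrary, as in the
    description above."""
    haze = exaggeration * r889 + separation  # upper haze layer
    cloud = exaggeration * (r727 / r756)     # lower tropospheric cloud
    return haze, cloud

# Hypothetical per-pixel reflectivities (not Galileo measurements)
haze_h, cloud_h = two_layer_heights(r727=0.30, r756=0.50, r889=0.20)
print(haze_h, cloud_h)  # 55.0 15.0
```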
These images were taken on December 17, 1996, at a range of 1.5 million kilometers (about 930,000 miles) by the Solid State Imaging (CCD) system on NASA's Galileo spacecraft.

The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of the California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://www.jpl.nasa.gov/galileo.

NASA Astrophysics Data System (ADS)
Kang, S.; Kim, K.; Park, G.; Ban, J.; Park, D.; Bae, M. S.; Shin, H. J.; Lee, M.; Seo, Y.; Choi, J.; Jung, D.; Seo, S.; Lee, T.; Kim, D. S.
2016-12-01
Aerosols have an important direct effect by scattering and absorbing solar energy and an indirect effect by acting as cloud condensation nuclei; among their other effects are reduced visibility, deterioration of human health, and deposition of pollutants to ecosystems. Various experimental results have shown that organic compounds make up an important fraction, from 10 to 70%, of the total aerosol mass. Organic carbon comprises water-soluble organic carbon (WSOC) and water-insoluble organic carbon. WSOC is involved in the largely unknown liquid-phase chemistry of wet aerosols and clouds, and it also acts as cloud condensation nuclei (CCN). The formation of secondary organic aerosol by chemical reactions of hydrocarbon compounds is a main source of WSOC compounds. Studying the pollution sources of WSOC is therefore important for understanding the formation process of secondary organic aerosol, which has not been fully studied. Analysis of WSOC requires a real-time measurement system to identify its chemical causes and sources. In this study, a particle-into-liquid sampler (PILS) coupled with a total organic carbon (TOC) analyser and ion chromatography (PILS-TOC-IC) was used for semi-continuous measurement of WSOC and ionic compounds in PM2.5 during April-June 2016 at the Baengnyeong Island Atmospheric Research Center, operated by the Korea National Institute of Environmental Research (NIER). PILS-TOC-IC provides chemical information about real-time changes in the composition and concentrations of WSOC and ionic compounds.
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative origin ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and it is investigated whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
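The OE point-cloud update can be illustrated with a small 1D toy. This is a sketch, not the paper's implementation: the acceptance probability uses a commonly quoted OE form that assumes uniform detector sensitivity, and the geometry (all LORs covering the same three bins) is a deliberate simplification:

```python
import numpy as np

def oe_update(assign, lor_bins, counts, rng):
    """One origin-ensembles sweep: each event's origin point is moved to a
    randomly proposed bin along its line of response (LOR) and accepted
    with probability min(1, (n_b + 1) / n_a), where n_a and n_b are the
    current point counts in the old and proposed bins."""
    for e in range(len(assign)):
        a = assign[e]
        b = rng.choice(lor_bins[e])          # proposed new origin bin
        if b == a:
            continue
        if rng.random() < min(1.0, (counts[b] + 1.0) / counts[a]):
            counts[a] -= 1.0                 # event leaves bin a ...
            counts[b] += 1.0                 # ... and joins bin b
            assign[e] = b
    return assign, counts

# Toy setup: 5 image bins; all 200 events have LORs covering bins {1, 2, 3}
rng = np.random.default_rng(0)
n_bins, n_events = 5, 200
lor_bins = [np.array([1, 2, 3])] * n_events
assign = np.array([rng.choice(lor_bins[e]) for e in range(n_events)])
counts = np.bincount(assign, minlength=n_bins).astype(float)
for _ in range(50):        # in EBE mode, new events could be appended between sweeps
    assign, counts = oe_update(assign, lor_bins, counts, rng)
print(counts.sum())        # events are conserved: 200.0
```

The key EBE-friendly property is visible here: adding a new event only appends one point to the cloud and increments one bin count, with no global image update required.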
G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS.
Hu, Rongdong; Liu, Guangming; Jiang, Jingfei; Wang, Lixin
2015-01-01
Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements; therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables an application to output its results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real sequence searching data from BLAST are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
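The abstract does not detail G2LC's actual provisioning policy; as a hedged illustration of the general goal (meet a performance target while avoiding over-provisioning), here is a generic threshold-style autoscaler sketch in which every name and limit is an assumption:

```python
def scale_decision(queue_len, per_vm_throughput, deadline_s,
                   min_vms=1, max_vms=32):
    """Keep just enough VMs to drain the task queue within the deadline
    (illustrative policy only, not the G2LC algorithm)."""
    needed = -(-queue_len // (per_vm_throughput * deadline_s))  # ceiling division
    return max(min_vms, min(max_vms, needed))

# 900 queued sequence-search tasks, 1 task/s per VM, 60 s deadline -> 15 VMs
print(scale_decision(queue_len=900, per_vm_throughput=1, deadline_s=60))
```

Scaling down when `needed` falls below the current VM count is what yields the resource savings the paper measures.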
NASA Astrophysics Data System (ADS)
Taylor, R.; Wünsch, R.; Palouš, J.
2018-05-01
Most detected neutral atomic hydrogen (HI) at low redshift is associated with optically bright galaxies. However, a handful of HI clouds are known which appear to be optically dark and have no nearby potential progenitor galaxies, making tidal debris an unlikely explanation. In particular, six clouds identified by the Arecibo Galaxy Environment Survey are interesting due to the combination of their small size, isolation, and especially their broad line widths, atypical of other such clouds. A recent suggestion is that these clouds exist in pressure equilibrium with the intracluster medium, with the line width arising from turbulent internal motions. Here we explore that possibility by using the FLASH code to perform a series of 3D hydro simulations. Our clouds are modelled using spherical Gaussian density profiles, embedded in a hot, low-density gas representing the intracluster medium. The simulations account for heating and cooling of the gas, and we vary the structure and strength of their internal motions. We create synthetic HI spectra, and find that none of our simulations reproduce the observed cloud parameters for longer than ~100 Myr: the clouds either collapse, disperse, or experience rapid heating which would cause ionisation and render them undetectable to HI surveys. While the turbulent motions required to explain the high line widths generate structures which appear to be inherently unstable, making this an unlikely explanation for the observed clouds, these simulations demonstrate the importance of including the intracluster medium in any model seeking to explain the existence of these objects.
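The synthetic-spectrum step can be sketched as a mass-weighted binning of particle line-of-sight velocities onto a channel grid; the particle data below are randomly generated stand-ins, not FLASH output, and the velocity dispersion and channel width are assumptions:

```python
import numpy as np

def synthetic_spectrum(v_los, mass, v_grid):
    """Mass-weighted histogram of line-of-sight velocities on a velocity
    grid: a crude synthetic HI spectrum (no thermal broadening)."""
    dv = v_grid[1] - v_grid[0]
    edges = np.append(v_grid - dv / 2.0, v_grid[-1] + dv / 2.0)
    spec, _ = np.histogram(v_los, bins=edges, weights=mass)
    return spec

def w50(spec, v_grid):
    """Crude W50 line width: velocity span of channels at >= 50% of peak."""
    above = v_grid[spec >= 0.5 * spec.max()]
    return above.max() - above.min()

rng = np.random.default_rng(1)
v_los = rng.normal(0.0, 20.0, 10_000)    # turbulent motions, sigma = 20 km/s
mass = np.full(10_000, 1.0e4)            # equal-mass gas tracers (Msun)
v_grid = np.arange(-100.0, 101.0, 5.0)   # 5 km/s channels
spec = synthetic_spectrum(v_los, mass, v_grid)
print(w50(spec, v_grid))                 # roughly the Gaussian FWHM, 2.355*sigma
```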
Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software
NASA Astrophysics Data System (ADS)
Costantino, D.; Angelini, M. G.; Settembrini, F.
2017-05-01
The paper presents software dedicated to the processing of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), allowing the viewing of point clouds of several tens of millions of points, even on systems without very high performance. Operations are carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), mathematics (BLAS, EIGEN), computational geometry (CGAL, the Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, the Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, for which a previous ground-based laser scanner survey by the same authors was already available. For the aerophotogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with an overlap of not less than 80%, at a planned speed of about 90 knots.
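The strategy of operating on the full cloud while rendering only part of it can be sketched by subsampling for display; random subsampling here is an assumption for illustration, not necessarily ICV's actual decimation algorithm:

```python
import numpy as np

def display_subset(points, target=2_000_000, rng=None):
    """Return a random subset of at most `target` points for rendering,
    while the full array remains available for processing (cropping,
    registration, distance computation, etc.)."""
    rng = rng or np.random.default_rng(0)
    if len(points) <= target:
        return points
    idx = rng.choice(len(points), size=target, replace=False)
    return points[idx]

# Synthetic cloud of 500k points (x, y, z); render only 50k of them
cloud = np.random.default_rng(0).random((500_000, 3))
print(display_subset(cloud, target=50_000).shape)  # (50000, 3)
```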