Real-time photorealistic stereoscopic rendering of fire
NASA Astrophysics Data System (ADS)
Rose, Benjamin M.; McAllister, David F.
2007-02-01
We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in firefighting, military training, and entertainment. Rendering fire in real time is challenging because of the transparency and non-static, fluid-like behavior of fire. It is well known that methods effective for monoscopic rendering do not necessarily extend easily to stereo rendering, because monoscopic methods often lack the depth information needed to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate existing techniques for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. These include precomputed textures, dynamically generated textures, and ray-traced renderings of models derived from approximate solutions of the fluid dynamics equations. We have found that a method based on billboarding attains real-time frame rates: slicing is used to simulate depth, 2D texture images are mapped onto polygons, and alpha blending is used to treat transparency. Video recordings or prerendered high-quality images of fire can be used as textures to attain photorealistic stereo.
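A minimal sketch of the slice-based compositing the abstract describes: billboard slices carrying RGBA fire textures are blended back to front with the standard "over" operator. The random slices below stand in for video-derived or prerendered fire textures.

```python
import numpy as np

def composite_slices(slices_rgba):
    """Back-to-front 'over' compositing of RGBA billboard slices.
    slices_rgba: list of (H, W, 4) arrays ordered far to near."""
    out = np.zeros(slices_rgba[0].shape[:2] + (3,))
    for s in slices_rgba:                 # far to near
        rgb, a = s[..., :3], s[..., 3:4]
        out = rgb * a + out * (1.0 - a)   # standard alpha blending
    return out

# Random slices stand in for video-derived or prerendered fire textures.
rng = np.random.default_rng(0)
slices = [rng.random((64, 64, 4)) for _ in range(3)]
print(composite_slices(slices).shape)
```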
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering of outdoor Augmented Reality (AR) has been an attractive topic for the last two decades, judging by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and both virtual and real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of them restricted to non-real-time rendering, so the problem remains, especially in outdoor rendering. This paper proposes a technique to achieve realistic real-time outdoor rendering that accounts for the interaction between sky colours and objects in AR systems, with respect to shadows, at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. Second, the shadow generation algorithm Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps) is applied. Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering. PMID:25268480
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
High-fidelity real-time maritime scene rendering
NASA Astrophysics Data System (ADS)
Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin
2011-06-01
The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
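A toy illustration of the asymmetric idea, not the paper's actual scheme: the encoder transforms and quantizes off-line, while the decoder folds dequantization into a pre-scaled inverse basis so that reconstruction is a single precomputable matrix product. A 1D block is used for brevity; the paper operates on 3D bricks.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

n, q = 8, 0.05                      # block size and quantization step (illustrative)
D = dct_matrix(n)
block = np.random.default_rng(1).random(n)

# Off-line encoder: transform, then quantize to small integers.
coeffs = np.round(D @ block / q).astype(np.int16)

# Decoder: dequantization folded into a pre-scaled inverse basis,
# so reconstruction is a single matrix product per block.
D_inv_scaled = D.T * q
print(np.abs(D_inv_scaled @ coeffs - block).max())   # ~quantization error
```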
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing light paths into an image plane; it can simulate complicated optical phenomena such as refraction, depth of field and motion blur. Compared with rasterization, ray tracing achieves more realistic rendering results, but at much greater computational cost: even a simple scene can take a long time to render. With the improvement of GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly on the fragment shader, mainly comprising surface intersection, importance sampling and progressive rendering. With the help of the GPU's powerful throughput, it achieves real-time rendering of simple scenes.
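A rough CPU-side sketch of the progressive loop the abstract outlines, with a hypothetical one-sphere scene: each "frame" traces a jittered ray and folds the sample into a running average, as a fragment shader would do per pixel.

```python
import numpy as np

def hit_sphere(o, d, centre, radius):
    """Nearest positive ray/sphere intersection distance, or inf."""
    oc = o - centre
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return np.inf
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else np.inf

rng = np.random.default_rng(0)
centre = np.array([0.0, 0.0, -3.0])      # hypothetical one-sphere scene
accum, n = 0.0, 0
for frame in range(64):                  # progressive refinement loop
    d = np.array([0.0, 0.0, -1.0]) + 1e-2 * rng.standard_normal(3)
    d /= np.linalg.norm(d)               # jittered sample ray
    sample = 1.0 if np.isfinite(hit_sphere(np.zeros(3), d, centre, 1.0)) else 0.0
    n += 1
    accum += (sample - accum) / n        # running average over frames
print(accum)
```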
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their time-consuming rendering or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. The texture slices of the brick are then mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
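A sketch of the per-brick similarity test under assumed details; the abstract does not specify the metric, so the mean absolute difference and tolerance below are illustrative. Only bricks whose content changed would need to be re-defined as 3D textures.

```python
import numpy as np

def changed_bricks(prev, curr, brick=16, tol=1e-3):
    """Return indices of bricks whose content changed enough to re-upload.
    prev, curr: (D, H, W) volumes from consecutive time steps.
    The metric (mean absolute difference) and tolerance are assumptions."""
    changed = []
    d, h, w = curr.shape
    for z in range(0, d, brick):
        for y in range(0, h, brick):
            for x in range(0, w, brick):
                a = prev[z:z+brick, y:y+brick, x:x+brick]
                b = curr[z:z+brick, y:y+brick, x:x+brick]
                if np.mean(np.abs(a - b)) > tol:   # similarity test
                    changed.append((z, y, x))
    return changed

rng = np.random.default_rng(2)
v0 = rng.random((32, 32, 32)).astype(np.float32)
v1 = v0.copy()
v1[:16, :16, :16] += 0.1        # only one region deforms between frames
print(len(changed_bricks(v0, v1)))
```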
Elasticity-based three dimensional ultrasound real-time volume rendering
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.
2009-02-01
Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use has been hindered by the lack of real-time visualization methods capable of producing high-quality 3D renderings of the target or surface of interest. Volume rendering is a well-known visualization method that can display clear surfaces out of acquired volumetric data, and it has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target or surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
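One plausible reading of the strain-driven opacity idea; the abstract does not give the exact mapping, so the functional shapes and constants below are illustrative. A strain-based term is mixed with a conventional gradient-magnitude term.

```python
import numpy as np

def voxel_opacity(strain, grad_mag, w=0.5, k=8.0):
    """Mix a strain-driven opacity with a conventional gradient term.
    The exponential shape and all constants are illustrative assumptions."""
    a_strain = 1.0 - np.exp(-k * strain)      # strain contrast -> opacity
    a_grad = np.clip(grad_mag, 0.0, 1.0)      # classic gradient opacity
    return np.clip(w * a_strain + (1.0 - w) * a_grad, 0.0, 1.0)

strain = np.array([0.01, 0.10, 0.40])         # rapidly computed strain values
grad = np.array([0.20, 0.05, 0.60])           # B-mode gradient magnitudes
print(voxel_opacity(strain, grad))
```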
An improved method of continuous LOD based on fractal theory in terrain rendering
NASA Astrophysics Data System (ADS)
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, real-time 3D terrain rendering has become a hot topic in visualization. To resolve the conflict between rendering speed and rendering realism, this paper gives an improved terrain rendering method that refines the traditional continuous level-of-detail technique using fractal theory. Instead of repeatedly operating on memory to obtain terrain models at different resolutions, the program obtains the fractal characteristic parameters of different regions according to the movement of the viewpoint. Experimental results show that the method guarantees the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to a lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment, where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real time throughout the cardiac cycle, and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
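A small sketch of the haptic coupling step under stated assumptions: the displacement field computed by the motion model is sampled trilinearly at the device position, and the position is translated into the reference CT space by subtracting the displacement. The field layout and sign convention are assumptions.

```python
import numpy as np

def trilinear(field, p):
    """Sample a (D, H, W, 3) displacement field at continuous voxel p=(z, y, x)."""
    p0 = np.floor(p).astype(int)
    f = p - p0
    acc = np.zeros(3)
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                wgt = ((f[0] if dz else 1 - f[0]) *
                       (f[1] if dy else 1 - f[1]) *
                       (f[2] if dx else 1 - f[2]))
                acc += wgt * field[p0[0] + dz, p0[1] + dy, p0[2] + dx]
    return acc

# Hypothetical motion-model output: a constant 1.5-voxel shift along x.
field = np.zeros((8, 8, 8, 3))
field[..., 2] = 1.5

device_pos = np.array([3.2, 4.5, 2.7])               # haptic probe position
ref_pos = device_pos - trilinear(field, device_pos)  # into reference CT space
print(ref_pos)
```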
YaQ: an architecture for real-time navigation and rendering of varied crowds.
Maïm, Jonathan; Yersin, Barbara; Thalmann, Daniel
2009-01-01
The YaQ software platform is a complete system dedicated to real-time crowd simulation and rendering. Fitting multiple application domains, such as video games and VR, YaQ aims to provide efficient algorithms to generate crowds comprising up to thousands of varied virtual humans navigating in large-scale, global environments.
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, the depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
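The depth-to-range conversion the paper alludes to can be written down for a standard OpenGL perspective projection; the near/far planes below are hypothetical.

```python
import numpy as np

def depth_to_range(d, near, far):
    """Convert normalized OpenGL depth-buffer values d in [0, 1]
    to eye-space range by inverting the perspective depth mapping."""
    z_ndc = 2.0 * d - 1.0                    # window depth -> NDC depth
    return 2.0 * near * far / (far + near - z_ndc * (far - near))

# Hypothetical seeker scenario: 1 m near plane, 10 km far plane.
d = np.array([0.0, 0.5, 0.9, 1.0])
print(depth_to_range(d, near=1.0, far=10_000.0))   # near ... far, in metres
```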
Feasibility study: real-time 3-D ultrasound imaging of the brain.
Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D
2004-10-01
We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.
Real-time volume rendering of digital medical images on an iOS device
NASA Astrophysics Data System (ADS)
Noon, Christian; Holub, Joseph; Winer, Eliot
2013-03-01
Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still quite a difficult task. This is especially true for 3D volume rendering of digital medical images. Achieving it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs), such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
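A sketch of one way dynamic render-resolution modification might work; the controller below is an assumption, not the paper's algorithm. The render-target scale is adjusted against a frame-time budget.

```python
def adapt_resolution(scale, frame_ms, target_ms=33.0, step=0.1):
    """Shrink the render-target scale when a frame overruns its budget,
    grow it back when there is headroom (clamped to [0.25, 1.0]).
    Thresholds and step size are illustrative."""
    if frame_ms > target_ms * 1.2:
        scale = max(0.25, scale - step)
    elif frame_ms < target_ms * 0.8:
        scale = min(1.0, scale + step)
    return scale

scale = 1.0
for frame_ms in (40.0, 45.0, 30.0, 20.0, 18.0):   # measured frame times (ms)
    scale = adapt_resolution(scale, frame_ms)
    print(f"{frame_ms:5.1f} ms -> render at {scale:.0%} resolution")
```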
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop testing has traditionally been a difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, each of which works in conjunction with the others to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly enhances the number of rendered features and, hence, the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
Synthesis of Virtual Environments for Aircraft Community Noise Impact Studies
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Sullivan, Brenda M.
2005-01-01
A new capability has been developed for the creation of virtual environments for the study of aircraft community noise. It is applicable for use with both recorded and synthesized aircraft noise. When using synthesized noise, a three-stage process is adopted involving non-real-time prediction and synthesis stages followed by a real-time rendering stage. Included in the prediction-based source noise synthesis are temporal variations associated with changes in operational state, and low frequency fluctuations that are present under all operating conditions. Included in the rendering stage are the effects of spreading loss, absolute delay, atmospheric absorption, ground reflections, and binaural filtering. Results of prediction, synthesis and rendering stages are presented.
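A compact sketch of the rendering-stage terms named above: spherical spreading loss, absolute delay, and atmospheric absorption, here collapsed to a single broadband absorption coefficient rather than the per-band values a real renderer would use.

```python
import numpy as np

def propagation_terms(r, alpha_db_per_m=0.005, c=343.0, r_ref=1.0):
    """Per-source gain and delay for propagation distance r (metres).
    alpha is a single broadband stand-in for per-band absorption."""
    spreading = r_ref / np.maximum(r, r_ref)          # 1/r amplitude law
    absorption = 10 ** (-alpha_db_per_m * r / 20.0)   # dB loss -> amplitude
    delay_s = r / c                                   # absolute delay
    return spreading * absorption, delay_s

gain, delay = propagation_terms(np.array([10.0, 100.0, 1000.0]))
print(gain, delay)
```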
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Momose, T.; Oku, S.
It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make only one 3D image. Therefore it has not been practical to make the brain surface images in arbitrary directions on a real-time basis using ordinary work stations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of resulting images is comparable to that of VRT and computation time to SRT. In VSRT the process of volume rendering is done only once to the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses a similar algorithm to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 sec. in VSRT, while that is more than 15 sec. in the conventional VRT. The difference of resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR image is very useful and practical in the functional and anatomical correlation study.
Volumetric ambient occlusion for real-time rendering and games.
Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M
2010-01-01
This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.
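A Monte Carlo illustration of the paper's interpretation: occlusion as the visible fraction of the tangent sphere's volume. The analytic sphere occluders and rejection sampling below are stand-ins for the screen-space depth queries a GPU implementation would use.

```python
import numpy as np

def tangent_sphere_ao(p, normal, occluders, r=0.5, n_samples=4096):
    """Estimate occlusion as the free fraction of the volume of the sphere
    of radius r tangent to the surface at p (centre p + r*normal).
    occluders: list of (centre, radius) spheres standing in for geometry."""
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, (n_samples * 3, 3))
    pts = pts[np.einsum('ij,ij->i', pts, pts) <= 1.0][:n_samples]  # in unit ball
    pts = pts * r + (p + r * normal)
    free = np.ones(len(pts), dtype=bool)
    for c, cr in occluders:
        free &= np.einsum('ij,ij->i', pts - c, pts - c) > cr * cr
    return free.mean()     # 1.0 = fully open, smaller = more occluded

p = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
print(tangent_sphere_ao(p, normal, [(np.array([0.3, 0.0, 0.4]), 0.3)]))
```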
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several digital filtering algorithms have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering techniques have been examined with special reference to user-friendly features for foreseeable applications and to reconstruction speed.
A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model
NASA Astrophysics Data System (ADS)
Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge
2016-12-01
A real-time photo-realistic rendering algorithm for ocean color is introduced in this paper, which considers the impact of an ocean bio-optical model. The bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particles, etc., which make different contributions to the absorption and scattering of light. We decompose the emergent light of the ocean surface into light reflected from the sun and the sky, and subsurface scattering light. We establish an ocean surface transmission model based on the ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law; this model's outputs are the incident light parameters for subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the emergent radiance of scattered light in different directions. Then, we blend the reflections of sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we use the radiance reflectance calculated by the Hydrolight radiative transfer model and by our algorithm to validate the physical reality of our method, and the results show that our algorithm can achieve real-time, highly realistic ocean color scenes.
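A much-simplified sketch of the blending step: Fresnel reflectance (the Schlick approximation is used here as an assumption; the paper uses the Fresnel law) weights reflected sky/sun light against a subsurface term that stands in for the bio-optical model's output.

```python
import numpy as np

def schlick_fresnel(cos_theta, n_water=1.33):
    """Schlick approximation to the Fresnel reflectance of water."""
    f0 = ((n_water - 1.0) / (n_water + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ocean_pixel(cos_view, sky_rgb, sun_rgb, subsurface_rgb, sun_frac=0.3):
    """Blend reflected sky/sun light with subsurface scattering.
    subsurface_rgb stands in for the bio-optical model's output
    (absorption/scattering by phytoplankton, CDOM, particles)."""
    f = schlick_fresnel(cos_view)
    reflected = (1.0 - sun_frac) * sky_rgb + sun_frac * sun_rgb
    return f * reflected + (1.0 - f) * subsurface_rgb

sky = np.array([0.4, 0.6, 0.9])
sun = np.array([1.0, 0.9, 0.7])
water = np.array([0.0, 0.15, 0.2])       # greenish: high-chlorophyll water
print(ocean_pixel(0.7, sky, sun, water))
```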
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual reality systems can construct virtual environments that provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach that uses real-world or synthesized images to compose a virtual environment. The images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.
NASA Astrophysics Data System (ADS)
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
2006-03-01
Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest from volume renderings of PET/CT. This works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume renderings, which can potentially aid interpretation and diagnosis.
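A sketch of the two ideas under assumed functional forms: an "alpha-spike" opacity that isolates one intensity band (the Gaussian shape is illustrative, not the paper's definition) and a per-voxel fusion-ratio blend of independently rendered PET and CT samples.

```python
import numpy as np

def alpha_spike(v, centre, width, a_max=0.8):
    """Non-linear 'spike' opacity: voxels near `centre` become visible,
    everything else fades out quickly (the shape is illustrative)."""
    return a_max * np.exp(-((v - centre) / width) ** 2)

def fuse(pet_rgb, ct_rgb, ratio):
    """Per-voxel blend of independently rendered PET and CT samples."""
    return ratio * pet_rgb + (1.0 - ratio) * ct_rgb

v = np.linspace(0.0, 1.0, 5)              # normalized voxel intensities
print(alpha_spike(v, centre=0.7, width=0.1))
print(fuse(np.array([1.0, 0.2, 0.1]), np.array([0.5, 0.5, 0.5]), 0.6))
```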
A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-05-01
We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
Real-time generation of infrared ocean scene based on GPU
NASA Astrophysics Data System (ADS)
Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu
2007-12-01
Infrared (IR) image synthesis for ocean scenes has become more and more important, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways water interacts with its environment, and the detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time use. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous work in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
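A minimal sketch of multiple importance sampling with the balance heuristic, assuming equal sample counts per strategy; the paper's stratified and importance strategies are replaced by two toy strategies on a known integrand.

```python
import numpy as np

def balance_weight(p_this, p_other):
    """Balance-heuristic MIS weight for a sample drawn from one strategy
    when another strategy could also have generated it (equal counts)."""
    return p_this / (p_this + p_other)

def mis_combine(samples_a, samples_b):
    """Each sample is (value, pdf_under_A, pdf_under_B)."""
    est_a = np.mean([balance_weight(pa, pb) * v / pa for v, pa, pb in samples_a])
    est_b = np.mean([balance_weight(pb, pa) * v / pb for v, pa, pb in samples_b])
    return est_a + est_b

# Toy integrand f(x) = x on [0, 1]: strategy A samples uniformly (pdf 1),
# strategy B importance-samples with pdf 2x (inverse-CDF sampling).
rng = np.random.default_rng(6)
xa = rng.random(1000)
xb = np.sqrt(rng.random(1000))
sa = [(x, 1.0, 2.0 * x) for x in xa]
sb = [(x, 1.0, 2.0 * x) for x in xb]
print(mis_combine(sa, sb))               # should be near the true value 0.5
```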
A Cache Design Method for Spatial Information Visualization in 3D Real-Time Rendering Engine
NASA Astrophysics Data System (ADS)
Dai, X.; Xiong, H.; Zheng, X.
2012-07-01
A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more obvious as the amount of visualization data grows. The cache is what allows the engine to browse smoothly through data that resides out of core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data being rendered by the engine; the data dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; and the data eliminated from the previous caches is stored in the elimination cache before being written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches its length limit (128 MB in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from disk. In this way, only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: to load data into the rendering cache as soon as possible for rendering, to load data into the pre-rendering cache for the next few frames, and to load data into the elimination cache when it is not immediately needed. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The other thread moves the data in the memory and disk caches according to the adding and deleting lists, creates download requests when data indexed in the adding list can be found in neither the memory cache nor the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described proved reliable and efficient in our experiment: data loading time and file I/O time decreased sharply, especially as the rendering data grew larger.
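A scaled-down sketch of the disk-cache policy described above: a capped "large file" is open for writing, earlier files are read-only, and the oldest file is deleted when too many accumulate. File sizes are shrunk for the demo, and the in-memory index is an assumed detail.

```python
import os
import tempfile

class LargeFileDiskCache:
    """Append-only disk cache split into capped 'large files': the newest
    file is open for writing, older ones are read-only, and the oldest is
    deleted once too many accumulate (sizes shrunk for this demo)."""

    def __init__(self, root, max_file_bytes=1024, max_files=3):
        self.root, self.cap, self.max_files = root, max_file_bytes, max_files
        self.index = {}                  # key -> (filename, offset, length)
        self.files, self._serial = [], 0
        self._open_new()

    def _open_new(self):
        name = os.path.join(self.root, f"cache_{self._serial:04d}.bin")
        self._serial += 1
        self.files.append(name)
        self.fh = open(name, "wb")
        if len(self.files) > self.max_files:          # evict the oldest file
            dead = self.files.pop(0)
            self.index = {k: v for k, v in self.index.items() if v[0] != dead}
            os.remove(dead)

    def put(self, key, data: bytes):
        if self.fh.tell() + len(data) > self.cap:     # file full: roll over
            self.fh.close()
            self._open_new()
        self.index[key] = (self.fh.name, self.fh.tell(), len(data))
        self.fh.write(data)
        self.fh.flush()

    def get(self, key):
        name, off, ln = self.index[key]
        with open(name, "rb") as f:                   # read-only access
            f.seek(off)
            return f.read(ln)

cache = LargeFileDiskCache(tempfile.mkdtemp())
for i in range(8):
    cache.put(i, bytes([i]) * 300)
print(cache.get(7)[:4], len(cache.files))
```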
Andrievskaia, Olga; Tangorra, Erin
2014-12-01
Contamination of rendered animal by-products with central nervous system tissues (CNST) from animals with bovine spongiform encephalopathy is considered one of the vehicles of disease transmission. Removal from the animal feed chain of CNST originating from cattle of a specified age category, species-labeling of rendered meat products, and testing of rendered products for bovine CNST are tasks associated with the epidemiological control of bovine spongiform encephalopathy. A single-step TaqMan real-time reverse transcriptase (RRT) PCR assay was developed and evaluated for specific detection of bovine glial fibrillary acidic protein (GFAP) mRNA, a biomarker of bovine CNST, in rendered animal by-products. An internal amplification control, mammalian β-actin mRNA, was coamplified in the duplex RRT-PCR assay to monitor amplification efficiency, normalize amplification signals, and avoid false-negative results. The functionality of the GFAP mRNA RRT-PCR was assessed through analysis of laboratory-generated binary mixtures of bovine central nervous system (CNS) and muscle tissues treated under various thermal settings imitating industrial conditions. The assay was able to detect as little as 0.05% (wt/wt) bovine brain tissue in binary mixtures heat treated at 110 to 130°C for 20 to 60 min. Further evaluation of the GFAP mRNA RRT-PCR assay involved samples of industrial rendered products of various species origins and compositions obtained from commercial sources and rendering plants. Low amounts of bovine GFAP mRNA were detected in several bovine-rendered products, in agreement with the declared species composition. An accurate estimation of CNS tissue content in industrial rendered products was complicated by the wide range of temperature and time settings in rendering protocols. Nevertheless, the GFAP mRNA RRT-PCR assay may be considered for bovine CNS tissue detection in rendered products in combination with other available tools (for example, animal age verification) in inspection programs.
DspaceOgreTerrain 3D Terrain Visualization Tool
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.
2012-01-01
DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, taking advantage of recent advances in GPU computing techniques. Recent evolutions mainly concern the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures and, lastly, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Virtual Acoustics: Evaluation of Psychoacoustic Parameters
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context encountered in day-to-day hearing. Several research efforts have produced results suggesting that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real time. However, the computational resources for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data obtained in real rather than virtual environments. The talk will consider the importance of individualized versus generalized pinna cues (the "Head-Related Transfer Function"); the use of head movement cues; threshold data for early reflections and late reverberation; and the accuracy necessary for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, the analysis-synthesis of reverberation within "everyday spaces" (offices, conference rooms) will be contrasted with the commonly used paradigm of concert hall spaces.
High-power graphic computers for visual simulation: a real-time rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) that provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model
NASA Astrophysics Data System (ADS)
Wang, Yifan; Li, Weiran; Zhu, Qing
2018-04-01
This paper presents a real-time rendering method based on the GPU programmable pipeline for rendering 3D scenes in ink wash painting style. The method is divided into three main parts: First, the ink properties of the 3D model are rendered by calculating its vertex curvature. Then, the ink properties are cached into a paper structure, and an ink dispersion model defined with reference to the theory of porous media simulates the dispersion of ink. Finally, the ink properties are converted to pixel color information and rendered to the screen. This method performs better than previous methods in visual quality.
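A loose stand-in for the dispersion phase (the paper's porous-media model is more elaborate): ink diffuses over a paper grid whose per-cell absorbency modulates an explicit diffusion step.

```python
import numpy as np

def disperse(ink, fibre_capacity, rate=0.2, steps=10):
    """Explicit diffusion of ink over a paper grid; the per-cell capacity
    loosely stands in for the porous-media fibre structure."""
    for _ in range(steps):
        # 4-neighbour Laplacian with zero-flux borders via edge padding.
        p = np.pad(ink, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * ink)
        ink = ink + rate * fibre_capacity * lap
    return ink

rng = np.random.default_rng(3)
paper = 0.5 + 0.5 * rng.random((32, 32))   # uneven fibre absorbency
drop = np.zeros((32, 32))
drop[16, 16] = 1.0                         # a single ink deposit
print(disperse(drop, paper).max())
```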
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.
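For reference, a minimal 2D version of the classic Demons update (Thirion's passive force) with a crude box-filter regularizer; the paper's coupling to acquisition and rendering, multi-scale handling, and resampling of the moving image through the current field are all omitted.

```python
import numpy as np

def demons_step(fixed, moving, u):
    """One Demons iteration on 2D images:
    v = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2),
    followed by cheap neighbourhood smoothing of the accumulated field u."""
    gy, gx = np.gradient(fixed)
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0.0] = 1.0
    v = diff[..., None] * np.stack([gy, gx], axis=-1) / denom[..., None]
    u = u + v
    # Box-filter regularization (a Gaussian in real implementations).
    p = np.pad(u, ((1, 1), (1, 1), (0, 0)), mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + u) / 5.0

rng = np.random.default_rng(4)
fixed = rng.random((16, 16))
moving = np.roll(fixed, 1, axis=0)          # a 1-pixel "tissue shift"
u = np.zeros((16, 16, 2))
for _ in range(5):
    u = demons_step(fixed, moving, u)
print(np.abs(u).mean())
```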
Is There Computer Graphics after Multimedia?
ERIC Educational Resources Information Center
Booth, Kellogg S.
Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…
ProteinShader: illustrative rendering of macromolecules
Weber, Joseph R
2009-01-01
Background: Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space-filling or ball-and-stick models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results: The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free, platform-independent, open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion: By programming the graphics processing unit, ProteinShader is able to produce high-quality images and illustrative rendering effects in real time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
High-quality slab-based intermixing method for fusion rendering of multiple medical objects.
Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil
2016-01-01
The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to heterogeneity in data representation or configuration, it is difficult to render multiple medical objects efficiently and in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects that preserves real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality, compared to conventional approaches. The proposed intermixing scheme provides high-quality results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies show that the proposed method has the outstanding advantages of rendering independency and reusability.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor for achieving high-performance haptic rendering, and it imposes a stringent real-time requirement on the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computation cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system using the Delta haptic device.
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR), from computed tomography volume data and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications, which range from alternative volume-driven rendering approaches, namely so-called wobbled splatting, to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT.
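For orientation, a minimal DRR computed as orthographic line integrals with Beer-Lambert attenuation; the paper's methods (wobbled splatting and specialized perspective raycasting) are far more involved, and the attenuation volume below is synthetic.

```python
import numpy as np

def drr_orthographic(ct, axis=0):
    """Minimal DRR: line integrals of attenuation along parallel rays.
    Real 2D/3D registration uses perspective raycasting (or the paper's
    wobbled splatting); an orthographic sum shows the core idea."""
    return np.exp(-ct.sum(axis=axis))       # Beer-Lambert attenuation

rng = np.random.default_rng(5)
ct = 0.01 * rng.random((64, 64, 64))        # hypothetical attenuation volume
drr = drr_orthographic(ct)
print(drr.shape, drr.min(), drr.max())
```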
Space-time light field rendering.
Wang, Huamin; Sun, Mingxuan; Yang, Ruigang
2007-01-01
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrate that our method can generate visually credible AR scenes with consistent shadows rendered from the recovered illumination.
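To make the Sun-position step concrete, here is a minimal sketch using the textbook declination and hour-angle formulas (Cooper's approximation, azimuth measured clockwise from north). The function name and the use of local solar time are illustrative assumptions; the paper additionally fuses gyroscope, compass and accelerometer readings for the relative rotations.

```python
import math

def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation/azimuth (degrees) for a latitude,
    day of year (1-365) and local solar time (hours)."""
    lat = math.radians(lat_deg)
    # Cooper's approximation of the solar declination.
    decl = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0)))
    h = math.radians(15.0 * (solar_hour - 12.0))            # hour angle
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(h))
    az = math.atan2(-math.sin(h),
                    math.tan(decl) * math.cos(lat) - math.sin(lat) * math.cos(h))
    return math.degrees(elev), math.degrees(az) % 360.0

print(sun_elevation_azimuth(41.0, 172, 15.5))   # mid-June afternoon, ~41 N
```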
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real-time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I
NASA Astrophysics Data System (ADS)
Gonthier, David L.; Veron, Harry
1998-04-01
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in the stealth mode or as a player which includes battlefield simulations, such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare, paired with the Righteous 3D graphics board from Orchid Technologies, delivers an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are represented more by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real-time. The conversion from a single 2D image to a 3D surface was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces using a haptic device (Falcon) only; no visual cue was provided. The results indicate that our system renders discernibly different tactile sensations for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real-time for the purpose of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
Efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods: following the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
Realtime Compositing of Procedural Facade Textures on the GPU
NASA Astrophysics Data System (ADS)
Krecklau, L.; Kobbelt, L.
2011-09-01
The real-time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set fully resides in the memory of the graphics card, but this typically results in a low-quality virtual city model. Alternatively, a streaming algorithm can load the high-quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization scheme that produces a large variety of facades while the grammar description itself remains small. By using a ray casting approach, we are able to trace through transparent windows, revealing procedurally generated rooms, which further contributes to the realism of the rendering. The presented method enables real-time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.
Efficient visibility encoding for dynamic illumination in direct volume rendering.
Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas
2012-03-01
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
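The core encoding step, projecting a spherical visibility function into SH coefficients, can be sketched with a Monte-Carlo estimator as below. The complex `scipy.special.sph_harm` basis and the uniform sphere sampling are illustrative choices (production pipelines typically use real-valued SH); the paper's multiresolution grid and piecewise global integration are not reproduced here.

```python
import numpy as np
from scipy.special import sph_harm

def project_visibility(vis_fn, bands=3, n_samples=4096, seed=0):
    """Monte-Carlo projection of a visibility function onto SH bands 0..bands-1."""
    rng = np.random.default_rng(seed)
    u, v = rng.random(n_samples), rng.random(n_samples)
    theta = np.arccos(1.0 - 2.0 * u)          # polar angle, uniform on sphere
    phi = 2.0 * np.pi * v                     # azimuth
    vis = vis_fn(theta, phi)                  # 0..1 visibility per direction
    coeffs = []
    for l in range(bands):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, phi, theta)    # scipy order: (m, l, azimuth, polar)
            coeffs.append(4.0 * np.pi / n_samples * np.sum(vis * np.conj(y)))
    return np.array(coeffs)

# Example: everything below the horizon is occluded.
c = project_visibility(lambda th, ph: (th < np.pi / 2).astype(float))
```

With 3 bands (9 coefficients), this matches the low-frequency angular approximation the abstract mentions: high-frequency occluders blur, but the storage per grid cell stays tiny.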
A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight
NASA Astrophysics Data System (ADS)
Yao, C.; Peng, G.; Song, Y.; Duan, M.
2017-09-01
The increase in urban 3D model precision and data quantity puts forward higher requirements for the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a quadtree construction and scheduling rendering method for urban 3D models based on weight. Urban 3D models are divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these weights. An algorithm for extracting bounding boxes based on model drawing primitives is also proposed to generate LOD models automatically. A 3D urban planning and management software package was developed using the proposed algorithm; practice has shown the algorithm to be efficient and feasible, with the rendering frame rates of both large and small scenes stable at around 25 frames per second.
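Since the abstract does not spell out the weighting rule, the sketch below shows one plausible reading: tiles subdivide only while the accumulated rendering weight of the models inside exceeds a budget, so heavy regions get finer quadtree cells to schedule. The `budget` and `min_size` parameters are invented for illustration and are not from the paper.

```python
# Hedged sketch of a weight-driven quadtree under the assumptions stated above.
class QuadNode:
    def __init__(self, x, y, size, models):
        self.bounds = (x, y, size)
        self.models = models          # (x, y, weight) tuples inside this tile
        self.children = []

def build(node, budget=8.0, min_size=64.0):
    x, y, s = node.bounds
    total = sum(w for _, _, w in node.models)
    if total <= budget or s <= min_size:
        return node                   # leaf: schedule these models together
    half = s / 2.0
    for dx in (0.0, half):
        for dy in (0.0, half):
            sub = [m for m in node.models
                   if x + dx <= m[0] < x + dx + half and y + dy <= m[1] < y + dy + half]
            if sub:
                node.children.append(build(QuadNode(x + dx, y + dy, half, sub),
                                           budget, min_size))
    node.models = []                  # interior nodes hold no geometry
    return node

root = build(QuadNode(0, 0, 1024, [(100, 200, 5.0), (800, 120, 3.0), (90, 220, 4.0)]))
```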
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds
NASA Astrophysics Data System (ADS)
Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol
The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focused on performance, and a geometric steering model that obtains the best realism. Mixing these approaches makes it possible to simulate thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce real-time rendering visualization for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use a particle paradigm to interpolate the sensor data; particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function efficiently, we use a client-server paradigm: the server computes the data and the client displays it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated in order to determine the best solution for the proposed task. The benchmark uses the computational cost of our algorithm, which is based on locating particles relative to sensors and on updating particle values, and was run on a personal computer using CPU, multi-core, GPU and hybrid GPU/CPU programming. GPU programming is a growing method in this research field; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) cluster, a benchmark used to refine the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
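As a stand-in for the Delaunay/Voronoi partition described above, the sketch below interpolates the sparse sensor readings onto room-filling particles with inverse-distance weighting; the room dimensions and the weighting exponent are assumptions for illustration, not the paper's method.

```python
import numpy as np

def idw(sensor_xyz, sensor_temp, particle_xyz, power=2.0, eps=1e-9):
    """Inverse-distance-weighted temperature per particle."""
    # Pairwise distances: (n_particles, n_sensors).
    d = np.linalg.norm(particle_xyz[:, None, :] - sensor_xyz[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ sensor_temp) / w.sum(axis=1)

sensors = np.array([[0, 0, 1], [5, 0, 2], [0, 5, 3]], dtype=float)
temps = np.array([21.0, 24.5, 22.3])
particles = np.random.rand(10000, 3) * [5, 5, 3]   # fill a 5x5x3 m room
field = idw(sensors, temps, particles)             # per-particle temperature
```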
Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
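The "small JSON object" idea is easy to picture: only the renderer state, not the voxel data, travels between clients. The field names below are invented for illustration and do not match the actual XTK renderer state.

```python
import json

# Hypothetical renderer-state snapshot; every client already holds the volume.
state = {
    "volume": "subject01_T1.nii",
    "cameraPosition": [120.0, 80.0, 200.0],
    "cameraUp": [0.0, 0.0, 1.0],
    "sliceIndices": {"sagittal": 128, "coronal": 110, "axial": 90},
    "windowLevel": {"window": 400, "level": 40},
}

payload = json.dumps(state)      # small blob pushed to the shared data model
print(len(payload), "bytes")     # tiny compared with the image data itself
```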
Augmented Reality Comes to Physics
ERIC Educational Resources Information Center
Buesing, Mark; Cook, Michael
2013-01-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…
3D Model Visualization Enhancements in Real-Time Game Engines
NASA Astrophysics Data System (ADS)
Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.
2013-02-01
This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with a low poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques, recently implemented in many entertainment applications, are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content. With the release of Unity 4.0, new rendering features have been added, including DirectX 11 support. Real-time tessellation is a technique that can be applied by using such technology. Since the displacement and the resulting geometry are calculated by the GPU, the time-based execution cost of this technique is very low.
Design and implementation of a 3D ocean virtual reality and visualization engine
NASA Astrophysics Data System (ADS)
Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing
2012-12-01
In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules, i.e. a memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of an oil spill from the sea bottom to the surface. Environment factors such as ocean currents and wind fields have been considered in this simulation. On this platform the oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video not only its color is given, but also, e.g., its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
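A minimal instance of the parallactic relation the framework formalizes: for a viewer with eye separation e at distance D from the screen, a point at viewer distance z produces screen disparity e(z - D)/z, zero at the screen plane and approaching e at infinity. The symbols and default values below are assumptions for illustration, not the paper's full derivation.

```python
def screen_disparity(z, eye_sep=0.065, view_dist=0.7):
    """On-screen disparity (metres) for a point at viewer distance z (metres);
    negative values are crossed disparity (in front of the screen)."""
    return eye_sep * (z - view_dist) / z

for z in (0.35, 0.7, 2.0, 1e9):
    print(f"z = {z:>12.2f} m -> disparity = {screen_disparity(z) * 1000:6.1f} mm")
```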
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2016-01-01
In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
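The interpolation-model step can be sketched with an off-the-shelf radial basis fit, as below. The three-dimensional input (penetration depth plus a 2D proxy position) and the synthetic force responses are toy assumptions; the paper's actual input variable set and palpation-collected data are richer.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy training set: rows are (penetration depth, proxy x, proxy y),
# values are force magnitudes from palpation (synthetic here).
rng = np.random.default_rng(0)
inputs = rng.random((200, 3))
forces = 5.0 * inputs[:, 0] + 0.5 * np.sin(6.0 * inputs[:, 1])

model = RBFInterpolator(inputs, forces, kernel="thin_plate_spline")

# Rendering loop: query the model with the current interaction input.
print(model(np.array([[0.3, 0.5, 0.2]])))   # interpolated force response
```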
A Review on Real-Time 3D Ultrasound Imaging Technology
Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have claimed to achieve real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review on how to design an interactive system with appropriate processing algorithms has been missing, resulting in the lack of a systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
A Review on Real-Time 3D Ultrasound Imaging Technology.
Huang, Qinghua; Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have claimed to achieve real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review on how to design an interactive system with appropriate processing algorithms has been missing, resulting in the lack of a systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
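A CPU-side sketch of the instant-radiosity gather: virtual point lights (VPLs) are sampled from a reflective-shadow-map-like buffer carrying position, normal and flux, and their one-bounce diffuse contribution is accumulated at a shade point. The buffer layout, clamping constant and sample count are assumptions; the paper's multi-resolution sampling and CPU/GPU splatting pipeline are not reproduced.

```python
import numpy as np

def gather_indirect(rsm_pos, rsm_nrm, rsm_flux, x, n, n_vpl=32, seed=0):
    """Accumulate one-bounce diffuse light at point x with normal n."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(rsm_pos), size=n_vpl, replace=False)   # pick VPLs
    total = np.zeros(3)
    for i in idx:
        d = x - rsm_pos[i]
        r2 = max(d @ d, 1e-4)                 # clamp the near singularity
        w = d / np.sqrt(r2)
        # Emitter cosine * receiver cosine / squared distance.
        g = max(rsm_nrm[i] @ w, 0.0) * max(n @ -w, 0.0) / r2
        total += rsm_flux[i] * g
    return total * (len(rsm_pos) / n_vpl)     # Monte-Carlo scale-up

P = np.random.rand(1024, 3)                   # toy RSM: positions,
N = np.tile([0.0, 1.0, 0.0], (1024, 1))       # normals,
F = np.full((1024, 3), 0.01)                  # and flux per texel
print(gather_indirect(P, N, F, np.array([0.5, 0.5, 0.5]), np.array([0.0, -1.0, 0.0])))
```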
Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
Real-time synthetic vision cockpit display for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.
1999-07-01
Low-cost, high-performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload the virtual image can be fully correlated to the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m 95% positioning, sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low-cost, high-performance guidance and situational awareness in all phases of flight.
Abercrombie, Robert K; Sheldon, Frederick T; Ferragut, Erik M
2014-06-24
A system evaluates reliability, performance and/or safety by automatically assessing the targeted system's requirements. A cost metric quantifies the impact of failures as a function of failure cost per unit of time. The metrics or measurements may render real-time (or near real-time) outcomes by initiating active response against one or more high ranked threats. The system may support or may be executed in many domains including physical domains, cyber security domains, cyber-physical domains, infrastructure domains, etc. or any other domains that are subject to a threat or a loss.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders running on the programmable GPU are used. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon, and it is a step toward computer-aided surgery, a field that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
Matching rendered and real world images by digital image processing
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality of each. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras and carry some amount of image degradation from factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
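The degradation step reduces to convolving the ideal CGI frame with a kernel matched to the measured PSF; with the Gaussian approximation the paper uses, that is nearly a one-liner, sketched below. The sigma value is an assumed placeholder for whatever the slanted-edge measurement yields.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_to_camera(rendered, sigma_px=1.6):
    """Blur an ideal rendered frame so its sharpness matches the real camera;
    sigma_px would be derived from the measured system PSF/MTF."""
    return gaussian_filter(rendered, sigma=(sigma_px, sigma_px, 0))  # spatial only

cgi = np.random.rand(480, 640, 3)        # stand-in rendered frame
matched = match_to_camera(cgi)           # ready to composite with the photo
```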
Real-Time High-Dynamic Range Texture Mapping
2001-01-01
the renderings produced by radiosity and global illumination algorithms. As a particular example, Greg Ward's RADIANCE synthetic imaging system [32...in software only. [26] presented a technique for performing Ward's tone reproduction algorithm interactively to visualize radiosity solutions
Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon
2017-09-07
Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible.
Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon
2017-01-01
Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible. PMID:28880220
Real-time stereographic display of volumetric datasets in radiology
NASA Astrophysics Data System (ADS)
Wang, Xiao Hui; Maitz, Glenn S.; Leader, J. K.; Good, Walter F.
2006-02-01
A workstation for testing the efficacy of stereographic displays for applications in radiology has been developed, and is currently being tested on lung CT exams acquired for lung cancer screening. The system exploits pre-staged rendering to achieve real-time dynamic display of slabs, where slab thickness, axial position, rendering method, brightness and contrast are interactively controlled by viewers. Stereo presentation is achieved by use of either frame-swapping images or cross-polarizing images. The system enables viewers to toggle between alternative renderings such as one using distance-weighted ray casting by maximum-intensity-projection, which is optimal for detection of small features in many cases, and ray casting by distance-weighted averaging, for characterizing features once detected. A reporting mechanism is provided which allows viewers to use a stereo cursor to measure and mark the 3D locations of specific features of interest, after which a pop-up dialog box appears for entering findings. The system's impact on performance is being tested on chest CT exams for lung cancer screening. Radiologists' subjective assessments have been solicited for other kinds of 3D exams (e.g., breast MRI) and their responses have been positive. Objective estimates of changes in performance and efficiency, however, must await the conclusion of our study.
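The two renderings viewers toggle between can be sketched as below: maximum-intensity projection over the slab for detection, and a distance-weighted average for characterization. The linear weight falloff is an assumed form; the workstation's actual distance weighting is not specified in the abstract.

```python
import numpy as np

def render_slab(volume, z0, thickness, mode="mip"):
    slab = volume[z0:z0 + thickness]                 # axial sub-stack
    if mode == "mip":
        return slab.max(axis=0)                      # detection-oriented view
    w = np.linspace(1.0, 0.25, slab.shape[0])        # nearer slices weigh more
    return np.tensordot(w, slab, axes=1) / w.sum()   # characterization view

ct = np.random.rand(300, 512, 512).astype(np.float32)
detect = render_slab(ct, 120, 16, "mip")
characterize = render_slab(ct, 120, 16, "avg")
```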
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos
2016-04-01
This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.
A spatially augmented reality sketching interface for architectural daylighting design.
Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara
2011-01-01
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE. Published by the IEEE Computer Society.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
A Nationwide Experimental Multi-Gigabit Network
2003-03-01
television and cinema, and to real-time interactive teleconferencing. There is another variable which affects this happy growth in network bandwidth and...render large scientific data sets with interactive frame rates on the desktop or in an immersive virtual reality (VR) environment. In our design, we
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions the perception of the sensors is crucial, so the sensors themselves also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is presented.
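A minimal stand-in for such a statistical sensor model is a first-order autoregressive error process added to ground truth, which captures the frame-to-frame autocorrelation the abstract mentions; the AR(1) form and its parameters are assumptions, not the paper's identified state-space model.

```python
import numpy as np

def simulate_radar_track(truth_xy, rho=0.9, sigma=0.15, seed=0):
    """Ground-truth positions plus AR(1)-correlated measurement error."""
    rng = np.random.default_rng(seed)
    err = np.zeros_like(truth_xy)
    for k in range(1, len(truth_xy)):
        # Error decays toward zero but stays correlated frame to frame.
        err[k] = rho * err[k - 1] + rng.normal(0.0, sigma, size=2)
    return truth_xy + err

truth = np.column_stack([np.linspace(0, 100, 500), np.full(500, 3.5)])
measured = simulate_radar_track(truth)   # feed to the simulated ADAS function
```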
In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors
NASA Technical Reports Server (NTRS)
Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)
2015-01-01
A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120° intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet, then applies beam theory to calculate curvature, bending angle and torsion of the fiber bundle, and from there determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve-fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.
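The beam-theory step at a single triplet can be sketched as follows: the mean of the three strains removes the common axial/thermal component, and projecting the residuals onto cosine/sine of the core angles yields the bending phasor, whose magnitude divided by the core radius gives curvature. Core angles, radius and sign conventions below are assumptions for illustration.

```python
import math

def curvature_from_strains(e1, e2, e3, r=70e-6):
    """Curvature (1/m) and bend direction (rad) from three core strains,
    assuming cores at 0/120/240 degrees at radius r from the neutral axis."""
    mean = (e1 + e2 + e3) / 3.0                     # common axial/thermal strain
    angles = (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)
    diffs = (e1 - mean, e2 - mean, e3 - mean)
    # Project the residuals onto cos/sin to recover the bending phasor.
    a = (2.0 / 3.0) * sum(d * math.cos(t) for d, t in zip(diffs, angles))
    b = (2.0 / 3.0) * sum(d * math.sin(t) for d, t in zip(diffs, angles))
    kappa = math.hypot(a, b) / r                    # curvature magnitude
    theta = math.atan2(b, a)                        # bend direction
    return kappa, theta

print(curvature_from_strains(120e-6, -40e-6, -80e-6))   # microstrain inputs
```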
Pacanowski, Romain; Salazar Celis, Oliver; Schlick, Christophe; Granier, Xavier; Poulin, Pierre; Cuyt, Annie
2012-11-01
Over the last two decades, much effort has been devoted to accurately measuring Bidirectional Reflectance Distribution Functions (BRDFs) of real-world materials and to using the resulting data efficiently for rendering. Because of their large size, it is difficult to use measured BRDFs directly in real-time applications, and fitting the most sophisticated analytical BRDF models is still a complex task. In this paper, we introduce Rational BRDF, a general-purpose and efficient representation for arbitrary BRDFs, based on Rational Functions (RFs). Using an adapted parametrization, we demonstrate how Rational BRDFs offer 1) a more compact and efficient representation using low-degree RFs, 2) an accurate fitting of measured materials with guaranteed control of the residual error, and 3) efficient importance sampling by applying the same fitting process to determine the inverse of the Cumulative Distribution Function (CDF) generated from the BRDF for use in Monte-Carlo rendering.
Technique for analyzing human respiratory process
NASA Technical Reports Server (NTRS)
Liu, F. F.
1970-01-01
An electronic system (MIRACLE 2) places the frequency and gas flow rate of the respiratory process within a common frame of reference to render them comparable and compatible with "real clock time." Numerous measurements are accomplished accurately on a strict one-minute, half-minute, breath-by-breath, or other period basis.
Cybertherapy 2005: A Decade of VR
2005-07-01
headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. In the third condition, they were asked to...navigate in a soundscape in the absence of vision (A). The sounds were produced through tracked binaural rendering (HRTF) and were dependent upon the
Data-Driven Geospatial Visual Analytics for Real-Time Urban Flooding Decision Support
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D.; Rodriguez, A.; Marini, L.; Kooper, R.; Myers, J.; Wu, X.; Minsker, B. S.
2009-12-01
Urban flooding is responsible for the loss of life and property as well as the release of pathogens and other pollutants into the environment. Previous studies have shown that the spatial distribution of intense rainfall significantly impacts the triggering and behavior of urban flooding. However, no general-purpose tools yet exist for deriving rainfall data and rendering them in real-time at the resolution of the hydrologic units used for analyzing urban flooding. This paper presents a new visual analytics system that derives and renders rainfall data from the NEXRAD weather radar system at the sewershed (i.e. urban hydrologic unit) scale in real-time for a Chicago stormwater management project. We introduce a lightweight Web 2.0 approach which takes advantage of scientific workflow management and publishing capabilities developed at NCSA (National Center for Supercomputing Applications), a streaming-data-aware semantic content management repository, web-based Google Earth/Map and time-aware KML (Keyhole Markup Language). A collection of polygon-based virtual sensors is created from the NEXRAD Level II data using spatial, temporal and thematic transformations at the sewershed level in order to produce persistent virtual rainfall data sources for the animation. The animated, color-coded rainfall map of the sewersheds can be played in real-time as a movie using time-aware KML inside the web-browser-based Google Earth for visually analyzing the spatiotemporal patterns of rainfall intensity in the sewersheds. Such a system provides valuable information for situational awareness and improved decision support during extreme storm events in an urban area. Our further work includes incorporating additional data (such as basement flooding event data) or physics-based predictive models that can be used for more integrated data-driven decision support.
NASA Astrophysics Data System (ADS)
Zhang, Kang
2011-12-01
In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering was developed. Several GPU-based algorithms such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feedback control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to errors of >100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation and let the user clearly monitor the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
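The per-A-scan reconstruction that the GPU pipeline accelerates can be sketched in a few lines: resample the spectral interferogram to uniform wavenumber (the job the NUFFT does properly), multiply by a quadratic phase for numerical dispersion compensation, and Fourier transform. The dispersion coefficient and source band below are illustrative assumptions.

```python
import numpy as np

def reconstruct_ascan(spectrum, lam, a2=2e-11):
    """FD-OCT A-scan: linear-k resampling, dispersion compensation, FFT."""
    k = 2.0 * np.pi / lam                                 # nonuniform wavenumbers
    k_lin = np.linspace(k.min(), k.max(), k.size)
    resampled = np.interp(k_lin, k[::-1], spectrum[::-1]) # k falls as lambda rises
    phase = np.exp(-1j * a2 * (k_lin - k_lin.mean()) ** 2)
    return np.abs(np.fft.fft(resampled * phase))[: k.size // 2]

lam = np.linspace(800e-9, 880e-9, 2048)                   # spectrometer band
spec = 1.0 + 0.5 * np.cos(2.0 * 150e-6 * (2.0 * np.pi / lam))  # mirror at 150 um
ascan = reconstruct_ascan(spec, lam)                      # single depth peak
```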
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
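A minimal sketch of the SVC idea follows, assuming the two DIBR warps to the middle viewpoint have already been produced by some external step: the mutual SSIM of the two synthesized views serves as the no-reference quality proxy. It uses scikit-image's structural_similarity; the toy inputs are random images, not real DIBR output.

```python
# Minimal sketch of the SVC idea: warp the left and right views to the same
# middle viewpoint (DIBR step not shown), then score their mutual SSIM.
# A low score suggests synthesis artifacts; the inputs here are synthetic.
import numpy as np
from skimage.metrics import structural_similarity

def svc_score(mid_from_left, mid_from_right):
    """Both inputs: HxW grayscale float arrays rendered from opposite cameras."""
    score, ssim_map = structural_similarity(
        mid_from_left, mid_from_right, data_range=1.0, full=True)
    return score, ssim_map          # global score plus per-pixel quality map

rng = np.random.default_rng(0)
view = rng.random((120, 160))
degraded = np.clip(view + 0.05 * rng.standard_normal(view.shape), 0, 1)
score, _ = svc_score(view, degraded)
print(f"SVC (no-reference) quality proxy: {score:.3f}")
```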
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world, with which researchers can interact. Both purely VR and purely AR applications have limitations in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images, using image processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time; however, such rendered video frames lack 3D visual information, i.e. depth. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and the visual information from the real world, while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The accompanying figure shows the virtual environment with an overlay of the real-world stereoscopic video presented into it in real time; note the preservation of the object's shape, shadows, and depth information. The distortions visible in the image are due to rendering the stereoscopic data to a 2D image for the purpose of taking screenshots.
Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy
Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca
2014-01-01
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
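To make the inverse-compositional structure concrete, here is a minimal sketch of inverse-compositional Lucas-Kanade for a translation-only warp: the template gradients, Jacobian, and Hessian are computed once outside the loop, which is exactly what makes the approach fast. The paper optimizes a full camera pose against endoluminal renderings; this toy version only aligns two 2D images.

```python
# Minimal sketch of inverse-compositional Lucas-Kanade for a translation-only
# warp. Template-side quantities are precomputed once; each iteration only
# warps the image and applies a fixed linear solve.
import numpy as np
from scipy import ndimage

def ic_lk_translation(template, image, iters=30):
    gy, gx = np.gradient(template)                 # template gradients (fixed)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1) # Jacobian for (dx, dy)
    H_inv = np.linalg.inv(J.T @ J)                 # Hessian, inverted once
    yy, xx = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    p = np.zeros(2)                                # current (dx, dy)
    for _ in range(iters):
        warped = ndimage.map_coordinates(image, [yy + p[1], xx + p[0]], order=1)
        err = (warped - template).ravel()
        dp = H_inv @ (J.T @ err)                   # inverse-compositional step
        p -= dp                                    # compose inverse increment
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

img = np.zeros((64, 64)); img[20:40, 25:45] = 1.0
img = ndimage.gaussian_filter(img, 2)
tmpl = ndimage.shift(img, (1.5, -2.0))             # shift by (rows, cols)
print(ic_lk_translation(tmpl, img))                # (dx, dy) ~ [2.0, -1.5]
```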
Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering
Chin, Seongah
2014-01-01
We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721
NASA Astrophysics Data System (ADS)
Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.
2017-10-01
The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time series of satellite-derived CO2 flux data. Location-aware, globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multiple geospatial datasets, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time series of satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm that renders particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA is designed for real-time rendering, to obtain smooth effects in both the spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is used to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field imaging has experienced a surge of popularity, mainly due to recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment of light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner while simultaneously tracking user behaviour. The framework is successfully used to perform a subjective assessment of two coding solutions. Moreover, statistical analysis of the results shows an interesting correlation between subjective scores and average interaction time.
New automatic mode of visualizing the colon via Cine CT
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.
2001-05-01
Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, several problems still need satisfactory solutions. Among these, we address three in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultrafast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time, after which en-face renditions are created in real time (15 frames/s) on a 1 GHz Pentium PC under the Linux operating system.
Villard, P F; Vidal, F P; Hunt, C; Bello, F; John, N W; Johnson, S; Gould, D A
2009-11-01
We present here a simulator for interventional radiology focusing on percutaneous transhepatic cholangiography (PTC). This procedure consists of inserting a needle into the biliary tree using fluoroscopy for guidance. The requirements of the simulator have been driven by a task analysis, from which three main components were identified: respiration, the real-time X-ray display (fluoroscopy), and haptic rendering (the sense of touch). The framework for modelling the respiratory motion is based on kinematic laws and on the ChainMail algorithm. The fluoroscopic simulation is performed on the graphics card and uses the Beer-Lambert law to compute the X-ray attenuation. Finally, the haptic rendering is integrated into the virtual environment and takes into account the soft-tissue reaction force feedback and the maintenance of the initial direction of the needle during insertion. Five training scenarios have been created using patient-specific data. Each provides the user with variable breathing behaviour, a fluoroscopic display tuneable to any device parameters, and needle force feedback. A detailed task analysis has been used to design and build the PTC simulator described in this paper. The simulator includes real-time respiratory motion with two independent parameters (rib kinematics and diaphragm action), on-line fluoroscopy implemented on the graphics processing unit, and haptic feedback to convey the soft-tissue behaviour of the organs during needle insertion.
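The fluoroscopic part of the simulator rests on the Beer-Lambert law, which the following sketch applies to a single ray: transmitted intensity falls off exponentially with the sum of attenuation coefficient times path length over the tissue segments crossed. The coefficients below are illustrative placeholders, not clinical values, and the real system evaluates this per fragment on the GPU.

```python
# Minimal sketch of the Beer-Lambert attenuation behind the fluoroscopy
# simulation: I = I0 * exp(-sum(mu_i * d_i)) over the tissue segments a ray
# crosses. Attenuation coefficients are illustrative, not clinical.
import math

MU = {"soft_tissue": 0.021, "bone": 0.057, "contrast_agent": 0.35}  # mm^-1

def transmitted_intensity(i0, segments):
    """segments: list of (tissue_name, path_length_mm) along one ray."""
    optical_depth = sum(MU[tissue] * d for tissue, d in segments)
    return i0 * math.exp(-optical_depth)

ray = [("soft_tissue", 120.0), ("bone", 8.0), ("contrast_agent", 2.0)]
print(f"I/I0 = {transmitted_intensity(1.0, ray):.4f}")
```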
Application of cellular automata approach for cloud simulation and rendering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher Immanuel, W.; Paul Mary Deborrah, S.; Samuel Selvaraj, R.
Current techniques for creating clouds in games and other real-time applications produce static, homogeneous clouds. These clouds, while viable for real-time applications, do not exhibit the organic feel of clouds in nature. Using a cellular automata approach, clouds viewed over a period of time can deform their initial shape and move in a more organic, dynamic way, and this cloud-shape technology should extend in the future to creating even more cloud shapes in real time under additional forces. Clouds are an essential part of any computer model of a landscape or animation of an outdoor scene, and realistic cloud animation is also important for flight simulators, movies, games, and other applications. Our goal was to create a realistic animation of clouds.
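A minimal sketch of a boolean cloud cellular automaton in the spirit of the classic Nagel/Dobashi rules (which cloud CA work of this kind typically builds on) is given below: humidity, phase-transition activation, and cloud bits evolve on a grid, with small random supply and extinction probabilities keeping the shapes deforming over time. The probabilities and grid size are illustrative; the paper's exact rule set is not reproduced.

```python
# Minimal boolean cellular-automaton cloud sketch: hum (humidity), act (phase
# transition) and cld (cloud) bits evolve per cell. Real systems add wind
# advection and smoothing before rendering; all probabilities are illustrative.
import numpy as np

def neighbor_or(a):
    """True where any 4-neighbor is active (non-periodic edges)."""
    out = np.zeros_like(a)
    out[1:, :] |= a[:-1, :]; out[:-1, :] |= a[1:, :]
    out[:, 1:] |= a[:, :-1]; out[:, :-1] |= a[:, 1:]
    return out

def step(hum, act, cld, rng, p_hum=0.05, p_act=0.02, p_ext=0.01):
    new_act = (~act) & hum & neighbor_or(act)      # activation spreads
    new_hum = hum & ~act                           # humidity consumed by act
    new_cld = cld | act                            # cloud forms where active
    # stochastic supply, activation and extinction keep the clouds evolving
    new_hum |= rng.random(hum.shape) < p_hum
    new_act |= rng.random(act.shape) < p_act
    new_cld &= ~(rng.random(cld.shape) < p_ext)
    return new_hum, new_act, new_cld

rng = np.random.default_rng(1)
shape = (64, 64)
hum = rng.random(shape) < 0.3
act = rng.random(shape) < 0.01
cld = np.zeros(shape, dtype=bool)
for _ in range(50):
    hum, act, cld = step(hum, act, cld, rng)
print(f"cloud coverage: {cld.mean():.1%}")
```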
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine sensing, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on techniques such as GPU rendering, CUDA parallel computing and a fast grid-oriented strategy, a series of efficient, high-quality visualization methods that can deal with large-scale, multi-dimensional marine data under different environmental circumstances is proposed in this paper. Firstly, a high-quality seawater simulation is realized with an FFT-based wave algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological data are visualized with 3D interaction techniques and volume rendering. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce convincing simulations of the marine environment but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine and integrated with the marine visualization methods above; it dynamically and simultaneously shows the movement, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil droplets, hydrate particles, gas bubbles, etc.) in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.
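The FFT-based seawater simulation mentioned first is a well-known construction (Tessendorf-style ocean synthesis); the sketch below shows its skeleton under that assumption: a Phillips-like spectrum is phase-advanced with the deep-water dispersion relation omega = sqrt(g*k) and inverse-FFT'd into a height field each frame. Constants are illustrative, and taking the real part of the inverse FFT is a shortcut that implicitly Hermitian-symmetrizes the spectrum.

```python
# Minimal sketch of FFT-based ocean heightfield animation (Tessendorf-style).
# Spectrum shape, wind, and grid parameters are illustrative placeholders.
import numpy as np

N, L, g = 64, 100.0, 9.81                         # grid size, patch size, gravity
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # wavenumbers per axis
kx, ky = np.meshgrid(k, k)
kk = np.hypot(kx, ky)
kk[0, 0] = 1e-6                                   # avoid division by zero

wind = np.array([1.0, 0.0])
phillips = np.exp(-1.0 / (kk * 20.0) ** 2) / kk**4 \
    * ((kx * wind[0] + ky * wind[1]) / kk) ** 2   # more energy along the wind
rng = np.random.default_rng(7)
h0 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
    * np.sqrt(phillips / 2)                       # random initial spectrum

def height_field(t):
    omega = np.sqrt(g * kk)                       # deep-water dispersion
    # real part implicitly Hermitian-symmetrizes the spectrum -> real heights
    return np.real(np.fft.ifft2(h0 * np.exp(1j * omega * t)))

print(height_field(0.0).std(), height_field(1.0).std())  # heights evolve in t
```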
NASA Astrophysics Data System (ADS)
Fehm, Thomas Felix; Deán-Ben, Xosé Luís; Razansky, Daniel
2014-10-01
Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehm, Thomas Felix; Razansky, Daniel
2014-10-27
Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by the user during interactive volume rendering. Manipulating this visibility improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering viewpoint. The construction of visibility histograms (VHs), which represent the distribution of visibility over all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of the VH, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images, which have large intensity ranges and volume dimensions and therefore require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
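The following is a minimal CPU stand-in for the adaptive-binning step: k-means clusters a sample of voxel intensities so that a handful of bins tracks the intensity distribution, after which every voxel maps to a bin index. It uses scikit-learn rather than the paper's GPU/MRT pipeline, and the synthetic gamma-distributed volume merely mimics a CT-like intensity histogram.

```python
# Minimal CPU sketch of the adaptive-binning idea behind the AB-VH: cluster
# voxel intensities with k-means so a handful of bins preserves the intensity
# distribution. Sample size, k and the synthetic volume are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def adaptive_bins(volume, k=16, sample=50_000, seed=0):
    rng = np.random.default_rng(seed)
    voxels = volume.ravel()
    subset = rng.choice(voxels, size=min(sample, voxels.size), replace=False)
    km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(subset[:, None])
    centers = np.sort(km.cluster_centers_.ravel())
    return (centers[:-1] + centers[1:]) / 2         # bin edges between centers

volume = np.random.default_rng(1).gamma(2.0, 300.0, size=(64, 64, 64))
edges = adaptive_bins(volume, k=16)
bin_of_voxel = np.digitize(volume, edges)           # per-voxel bin index, 0..15
print(np.bincount(bin_of_voxel.ravel(), minlength=16))
```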
GPU-based real-time soft tissue deformation with cutting and haptic feedback.
Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane
2010-12-01
This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for the interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions.
Operations Concepts for Deep-Space Missions: Challenges and Opportunities
NASA Technical Reports Server (NTRS)
McCann, Robert S.
2010-01-01
Historically, manned spacecraft missions have relied heavily on real-time communication links between crewmembers and ground control for generating crew activity schedules and working time-critical off-nominal situations. On crewed missions beyond the Earth-Moon system, speed-of-light limitations will render this ground-centered concept of operations obsolete. A new, more distributed concept of operations will have to be developed in which the crew takes on more responsibility for real-time anomaly diagnosis and resolution, activity planning and replanning, and flight operations. I will discuss the innovative information technologies, human-machine interfaces, and simulation capabilities that are required to develop, test, and validate deep-space mission operations.
Wood, Andrea
2013-12-01
This work explores disability in the cultural context of contemporary Japanese comics. In contrast to Western comics, Japanese manga have permeated the social fabric of Japan to the extent that vast numbers of people read manga on a daily basis. It has, in fact, become such a popular medium for visual communication that the Japanese government and education systems utilize manga as a social acculturation and teaching tool. This multibillion-dollar industry is incredibly diverse, and one particularly popular genre is sports manga. However, Inoue Takehiko's award-winning manga series REAL departs from more conventional sports manga, which typically focus on able-bodied characters with sometimes exaggerated, superhuman physical abilities, by adopting a more realistic approach to the world of wheelchair basketball and the people who play it. At the same time, REAL explores attitudes toward disability in Japanese culture, where disability is at times rendered "invisible", whether through accessibility problems or through lingering associations of disability with shame. It is therefore extremely significant that manga, a visual medium, is rendering disability visible: the ultimate movement from margin to center. REAL devotes considerable attention to realistically illustrating the lived experiences of its characters both on and off the court. Consequently, the series not only educates readers about wheelchair basketball but also provides compelling insight into Japanese cultural notions of masculinity, family, responsibility, and identity. The basketball players, at first marginalized by their disability, join together in the unity of a sport typically characterized by its "abledness."
Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study.
Dunn, B E; Almagro, U A; Choi, H; Sheth, N K; Arnold, J S; Recla, D L; Krupinski, E A; Graham, A R; Weinstein, R S
1997-01-01
In this retrospective study, we assess the accuracy, confidence levels, and viewing times of two generalist pathologists using both dynamic-robotic telepathology (TP) and conventional light microscopy (LM) to render diagnoses on a test set of 100 consecutive routine surgical pathology cases. The objective is to determine whether telepathology will allow a pathology group practice at a diagnostic hub to provide routine diagnostic services to a remote hospital without an on-site pathologist. For TP, glass slides were placed on the motorized stage of the robotic microscope of a telepathology system by a senior laboratory technologist in Iron Mountain, MI. Real-time control of the motorized microscope was then transferred to a pathologist in Milwaukee, WI, who viewed images of the glass slides on a video monitor. The telepathologists deferred rendering a diagnosis in 1.5% of cases. Clinically important concordance between the individual diagnoses rendered by telepathology and the "truth" diagnoses established by re-review of the glass slides was 98.5%. In the telepathology mode, there were five incorrect diagnoses out of a total of 197. In four of the cases in which the telepathology diagnosis was incorrect, the pathologist's diagnosis by LM was identical to that rendered by telepathology; these represent errors of interpretation and cannot be ascribed to telepathology. The certainty of the pathologists with respect to their diagnoses was evaluated over time. Results for the first 50 cases served as baseline data. For the second 50 cases, confidence in rendering a diagnosis in the telepathology mode was essentially identical to that in the LM viewing mode. Viewing times in the telepathology mode also improved with experience using the system. These results support the concept that an off-site pathologist using dynamic-robotic telepathology can substitute for an on-site pathologist as a service provider.
[Remote Slit Lamp Microscope Consultation System Based on Web].
Chen, Junfa; Zhuo, Yong; Liu, Zuguo; Chen, Yanping
2015-11-01
To realize remote operation of the slit lamp microscope for ophthalmology consultations, and to visually display the real-time status of the remote instrument, a remote slit lamp microscope consultation system based on a browser/server (B/S) structure is designed and implemented. By integrating the slit lamp microscope into the web system, real-time acquisition and transmission of remote-control commands and image data are realized. A three-dimensional model of the slit lamp microscope is built and rendered in the browser using WebGL. Practical application demonstrates the real-time interactivity of the remote consultation system.
NASA Astrophysics Data System (ADS)
Anstey, Josephine; Pape, Dave
2013-03-01
In this paper we discuss Mrs. Squandertime, a real-time, persistent simulation of a virtual character, her living room, and the view from her window, designed to be a wall-size, projected art installation. Through her large picture window, the eponymous Mrs. Squandertime watches the sea: boats, clouds, gulls, the tide going in and out, people on the sea wall. The hundreds of images that compose the view are drawn from historical printed sources. The program that assembles and animates these images is driven by weather, time, and tide data constantly updated from a real physical location. The character herself is rendered photographically in a series of slowly dissolving stills which correspond to the character's current behavior.
Real-time simulation of thermal shadows with EMIT
NASA Astrophysics Data System (ADS)
Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul
2016-05-01
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating-point precision using state-of-the-art graphics cards and advanced shader programs. An important capability of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. For real-time simulations, however, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry over time, reaching up to several hours into the past). In this paper we present the novel real-time thermal simulation of EMIT, which is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude with the practical use of EMIT in a missile HWIL simulation.
Manga Vectorization and Manipulation with Procedural Simple Screentone.
Yao, Chih-Yuan; Hung, Shih-Hsuan; Li, Guo-Wei; Chen, I-Yu; Adhitya, Reza; Lai, Yu-Chi
2017-02-01
Manga are a popular artistic form around the world, and artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization helps to digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes the input into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify the detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later rendering resolution-independently with our procedural shaders. The system treats the others as complex screentone areas and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with the curve-based Gaussian refiner. A user can lay down simple scribbles to intuitively cluster Manga elements into semantic components, and our system vectorizes these components into shading meshes along with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system renders the shading regions in real time and resolution-independently with our procedural shaders, and draws borders with the curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also deformed easily to generate interesting results.
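As a sketch of the adaptive-binarization front end, the snippet below uses OpenCV's local mean threshold to separate dark strokes and tone from unevenly lit paper. This is a generic stand-in for the paper's own binarizer, with block size and offset chosen arbitrarily.

```python
# Minimal sketch of adaptive binarization for scanned line art: a local mean
# threshold tolerates uneven illumination that a single global threshold
# would not. Block size (31) and offset C (10) are illustrative.
import cv2
import numpy as np

def binarize_manga(gray):
    """gray: uint8 scanned page. Returns a binary image with ink as 255."""
    smoothed = cv2.GaussianBlur(gray, (3, 3), 0)          # suppress scan noise
    return cv2.adaptiveThreshold(
        smoothed, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, 31, 10)                    # local mean threshold

page = np.full((256, 256), 220, np.uint8)                  # synthetic "paper"
cv2.line(page, (20, 20), (230, 230), 40, 2)                # a dark stroke
cv2.circle(page, (180, 60), 25, 120, -1)                   # a mid-gray "tone"
ink = binarize_manga(page)
print(ink.max(), round(float(ink.mean()), 1))
```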
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering
NASA Astrophysics Data System (ADS)
Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi
2015-04-01
This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design
NASA Technical Reports Server (NTRS)
Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Architectures and Interfaces: The implications of real-time interaction for software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. A brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget both to the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
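A minimal sketch of the time-budget idea described above: each technique carries an estimated cost and a benefit, a greedy pass packs the highest benefit-per-cost work into the frame budget, and measured run times feed back into the cost estimates. The class layout, numbers, and exponential-smoothing update are all illustrative, not the talk's actual design.

```python
# Minimal sketch of time-critical scheduling: fit the highest benefit-per-cost
# techniques into a per-frame time budget, then refine cost estimates from
# measured times. All names and numbers are illustrative.
import time

class Technique:
    def __init__(self, name, est_cost_ms, benefit, run):
        self.name, self.est_cost_ms = name, est_cost_ms
        self.benefit, self.run = benefit, run

def run_frame(techniques, budget_ms=33.0, alpha=0.2):
    chosen, remaining = [], budget_ms
    ranked = sorted(techniques, key=lambda t: t.benefit / t.est_cost_ms,
                    reverse=True)                      # best value first
    for t in ranked:
        if t.est_cost_ms <= remaining:
            chosen.append(t); remaining -= t.est_cost_ms
    for t in chosen:
        start = time.perf_counter()
        t.run()
        measured = (time.perf_counter() - start) * 1000
        t.est_cost_ms += alpha * (measured - t.est_cost_ms)  # smooth estimate
    return [t.name for t in chosen]

work = lambda ms: (lambda: time.sleep(ms / 1000))
ts = [Technique("streamlines", 12, 9, work(12)),
      Technique("isosurface", 18, 7, work(18)),
      Technique("cutting plane", 6, 5, work(6))]
print(run_frame(ts))   # -> ['cutting plane', 'streamlines']; isosurface waits
```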
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI), has shifted medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets and smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrate that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
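For reference, the per-ray loop at the heart of such a raycaster looks like the sketch below: march front to back, map each sample through a transfer function, composite with the "over" operator, and terminate early once the ray is nearly opaque. The iPad implementation runs this per pixel on the GPU; here it is one ray in NumPy with an invented transfer function.

```python
# Minimal sketch of front-to-back volume raycasting with early termination.
# The transfer function and sampling parameters are illustrative.
import numpy as np

def transfer_function(v):
    """Map a normalized sample to (rgb, alpha); a simple illustrative ramp."""
    alpha = float(np.clip((v - 0.3) * 2.0, 0.0, 0.6))
    rgb = np.array([v, v * 0.5, 1.0 - v])
    return rgb, alpha

def cast_ray(volume, origin, direction, step=0.5, max_steps=512):
    color, alpha = np.zeros(3), 0.0
    pos = np.asarray(origin, float)
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, volume.shape)):
            break                                  # left the volume
        rgb, a = transfer_function(volume[idx])
        a *= step                                  # scale opacity by step size
        color += (1.0 - alpha) * a * rgb           # front-to-back "over"
        alpha += (1.0 - alpha) * a
        if alpha > 0.98:
            break                                  # early ray termination
        pos += step * np.asarray(direction, float)
    return color, alpha

vol = np.random.default_rng(2).random((32, 32, 32))
print(cast_ray(vol, origin=(0, 16, 16), direction=(1, 0, 0)))
```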
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With growing computing capability and display sizes, mobile devices are increasingly used to help clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with large data sizes from a picture archiving and communication system (PACS) to a mobile client is difficult and time-consuming, since wireless networks are unstable and bandwidth-limited. Moreover, limited computing capability, memory and battery life make it hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices on different platforms to access the post-processing services, the protocol is described in the Extensible Markup Language (XML) and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel-value queries) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it allows a mobile device to access the post-processing services on the render server via a client application or a web page.
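The paper does not publish its schema here, so the following is a purely hypothetical sketch of what one message in such an XML-described protocol might look like, built and parsed with Python's standard library: a client requesting a window-leveled view of a series. Every element, attribute, and identifier below is invented for illustration.

```python
# Hypothetical sketch of one message in an XML-described remote-rendering
# protocol: the client asks the render server for a window-leveled slice.
# Element/attribute names and the series UID are invented, not the paper's.
import xml.etree.ElementTree as ET

def build_request(session, series_uid, center, width):
    req = ET.Element("Request", type="2DPostProcessing", session=session)
    ET.SubElement(req, "Series", uid=series_uid)
    ET.SubElement(req, "WindowLevel", center=str(center), width=str(width))
    return ET.tostring(req, encoding="unicode")

msg = build_request("abc123", "1.2.3.4.5.6", center=40, width=400)
print(msg)
parsed = ET.fromstring(msg)                       # server-side parse
print(parsed.find("WindowLevel").get("center"))   # -> "40"
```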
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm that produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
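A minimal sketch of the greedy top-down refinement described above: triangles sit in a priority queue keyed by screen-space error, and the worst one is split until the triangle budget is spent. The real algorithm adds forced splits for crack-free meshes and a merge queue for frame-to-frame coherence; the binary-tree ids and error function below are stand-ins.

```python
# Minimal sketch of greedy top-down triangle refinement driven by a priority
# queue of screen-space errors. Split semantics and the error function are
# illustrative stand-ins for the full crack-free algorithm.
import heapq
import itertools

def split(tri):
    """Bisect a binary-triangle-tree node; integer child ids are illustrative."""
    return (tri * 2, tri * 2 + 1)

def refine(root_tris, screen_error, budget):
    """root_tris: iterable of triangle ids; returns the leaf triangulation."""
    counter = itertools.count()                    # tie-breaker for the heap
    heap = [(-screen_error(t), next(counter), t) for t in root_tris]
    heapq.heapify(heap)
    while len(heap) < budget:                      # each split adds one leaf
        _, _, tri = heapq.heappop(heap)            # worst-error leaf
        for child in split(tri):
            heapq.heappush(heap, (-screen_error(child), next(counter), child))
    return [t for _, _, t in heap]

# Toy error: deeper nodes (larger ids) project to smaller screen-space error.
print(sorted(refine([1], screen_error=lambda t: 1.0 / t, budget=8)))
```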
Analyzing Visibility Configurations.
Dachsbacher, C
2011-04-01
Many algorithms, such as level of detail rendering and occlusion culling methods, make decisions based on the degree of visibility of an object, but do not analyze the distribution, or structure, of the visible and occluded regions across surfaces. We present an efficient method to classify different visibility configurations and show how this can be used on top of existing methods based on visibility determination. We adapt co-occurrence matrices for visibility analysis and generalize them to operate on clusters of triangular surfaces instead of pixels. We employ machine learning techniques to reliably classify the thus extracted feature vectors. Our method allows perceptually motivated level of detail methods for real-time rendering applications by detecting configurations with expected visual masking. We exemplify the versatility of our method with an analysis of area light visibility configurations in ray tracing and an area-to-area visibility analysis suitable for hierarchical radiosity refinement. Initial results demonstrate the robustness, simplicity, and performance of our method in synthetic scenes, as well as real applications.
PRISM: An open source framework for the interactive design of GPU volume rendering shaders.
Drouin, Simon; Collins, D Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
PRISM: An open source framework for the interactive design of GPU volume rendering shaders
Collins, D. Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069
A Real-Time Interactive System for Facial Makeup of Peking Opera
NASA Astrophysics Data System (ADS)
Cai, Feilong; Yu, Jinhui
In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose and mouth. Next, we pick SVG patterns from the pattern bank and compose them into a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit the patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and education about Peking Opera.
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology, the one most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. To construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen and degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-plus-Depth (T+D) format is chosen for the display content, and a modified depth-image-based rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global, software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
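The cache policy is the interesting part: on a miss, peers are queried before the local disk, since the paper measures network fetches as roughly four times faster than local disk reads. Below is a minimal sketch of that remote-first lookup with LRU eviction; the peer and disk accessors are stand-ins, not the system's actual interfaces.

```python
# Minimal sketch of a remote-first brick cache: on a miss, ask peer nodes for
# the volume brick before touching the (slower) local disk. Peer/Disk classes
# are illustrative stand-ins for the cluster's real transport layer.
from collections import OrderedDict

class BrickCache:
    def __init__(self, capacity, peers, disk):
        self.lru = OrderedDict()                      # brick_id -> data, LRU order
        self.capacity, self.peers, self.disk = capacity, peers, disk

    def get(self, brick_id):
        if brick_id in self.lru:                      # local cache hit
            self.lru.move_to_end(brick_id)
            return self.lru[brick_id]
        data = None
        for peer in self.peers:                       # network before disk
            data = peer.lookup(brick_id)
            if data is not None:
                break
        if data is None:
            data = self.disk.read(brick_id)           # last resort: local I/O
        self.lru[brick_id] = data
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)              # evict least recent
        return data

class Peer:
    def __init__(self, store): self.store = store
    def lookup(self, b): return self.store.get(b)

class Disk:
    def read(self, b): return f"brick-{b}-from-disk"

cache = BrickCache(capacity=2, peers=[Peer({7: "brick-7-from-peer"})], disk=Disk())
print(cache.get(7), cache.get(3), cache.get(7))       # peer, disk, then LRU hit
```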
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating the 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, mixing it into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
LivePhantom: Retrieving Virtual World Light Data to Real Environments
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating the 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, mixing it into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663
Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.
Friston, Sebastian; Steed, Anthony; Tilbury, Simon; Gaydadjiev, Georgi
2016-04-01
Latency, the delay between a user's action and the response to this action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay that is constant in time and space, but this characterisation is incomplete: latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2, and contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with its lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
Augmented Reality Comes to Physics
NASA Astrophysics Data System (ADS)
Buesing, Mark; Cook, Michael
2013-04-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS, as do mobile apps such as Tagwhat and Star Chart (a must for astronomy class). The yellow line marking first downs in a televised football game [2] and the enhanced puck that makes televised hockey easier to follow [3] both use augmented reality to do the job.
Real Time Urban Acoustics Using Commerical Technologies
2011-08-01
delays, and rendering for binaural or surround sound display [2]. VibeStudio does not include propagation effects of reflections, diffusion, or...available for rendering both binaural headphone displays as well as standard and arbitrary surround sound formats. For this reason, minimal detail is...provided in this paper and the reader is referred to [2]. An image illustrating a binaural display scenario and a typical surround sound setup are
Transgenic Arabidopsis Gene Expression System
NASA Technical Reports Server (NTRS)
Ferl, Robert; Paul, Anna-Lisa
2009-01-01
The Transgenic Arabidopsis Gene Expression System (TAGES) investigation is one in a pair of investigations that use the Advanced Biological Research System (ABRS) facility. TAGES uses Arabidopsis thaliana, thale cress, with sensor promoter-reporter gene constructs that render the plants as biomonitors (an organism used to determine the quality of the surrounding environment) of their environment using real-time nondestructive Green Fluorescent Protein (GFP) imagery and traditional postflight analyses.
Color rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-02-01
Human perception of material colors depends heavily on the nature of the light sources used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. Based on state-of-the-art colorimetric methods we present a modern approach for calculating color rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: firstly, we use the CIELAB color space, secondly, we apply a Bradford transformation for chromatic adaptation, and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda have shown for the cube model that interreflections can influence the CRI of an illuminant. We analyze how color rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB based renderer for reasons of accuracy of CRI calculations.
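As one concrete piece of that pipeline, a sketch of Bradford chromatic adaptation is given below. The matrix is the standard published Bradford matrix; the surrounding CRI steps (CIELAB conversion and CIEDE2000) are omitted, and the sample color and white points are illustrative values:

```python
import numpy as np

# Standard Bradford cone-response matrix.
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from a source white point to a destination white."""
    rgb_src = M_BFD @ np.asarray(white_src)
    rgb_dst = M_BFD @ np.asarray(white_dst)
    scale = np.diag(rgb_dst / rgb_src)  # von Kries scaling in cone space
    return np.linalg.inv(M_BFD) @ scale @ M_BFD @ np.asarray(xyz)

# Example: adapt a color from illuminant A to D65 (white points in XYZ, Y = 1).
print(bradford_adapt([0.3, 0.4, 0.2], [1.0985, 1.0, 0.3558], [0.9504, 1.0, 1.0888]))
```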
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes a tenth of the memory of a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
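As a toy illustration of the synthesis side, the sketch below excites a bank of damped sinusoids, the basic building block of physically based impact sounds; the modal frequencies, dampings, and amplitudes are invented rather than derived from any object model:

```python
import numpy as np

# A toy modal synthesizer: an impact excites a bank of damped sinusoids
# (one per vibration mode). A real system would derive the mode table
# from the object's geometry and material; these values are made up.
def modal_impact(freqs_hz, dampings, amps, dur_s=0.5, fs=44100):
    t = np.arange(int(dur_s * fs)) / fs
    out = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))  # normalized mono clip

clip = modal_impact([523.0, 1307.0, 2510.0], [8.0, 15.0, 30.0], [1.0, 0.5, 0.25])
```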
A GPU-based mipmapping method for water surface visualization
NASA Astrophysics Data System (ADS)
Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan
2018-03-01
Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide expanse of water surface with good image quality both near and far from the viewpoint. The method models the water surface with a uniform mesh and fractal Perlin noise. Mipmapping is applied to the surface textures, adjusting their resolution with respect to the distance from the viewpoint and reducing the computing cost. Lighting effects are computed using shadow mapping, Snell's law, and the Fresnel term. The rendering pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.
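A minimal sketch of the distance-based level selection that mipmapping performs (constants are illustrative; real GPUs derive the level from texture-coordinate derivatives rather than raw distance):

```python
import math

# Choose a mip level from viewer distance: each doubling of distance
# drops one texture resolution level. base_distance and max_level are
# assumed values, not parameters from the paper.
def mip_level(distance, base_distance=10.0, max_level=8):
    if distance <= base_distance:
        return 0  # full-resolution texture near the viewpoint
    return min(max_level, int(math.log2(distance / base_distance)))

for d in (5.0, 20.0, 80.0, 5000.0):
    print(d, mip_level(d))
```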
An Augmented Reality Nanomanipulator for Learning Nanophysics: The "NanoLearner" Platform
NASA Astrophysics Data System (ADS)
Marchi, Florence; Marliere, Sylvain; Florens, Jean Loup; Luciani, Annie; Chevrier, Joel
The work focuses on the description and evaluation of an augmented reality nanomanipulator, called the "NanoLearner" platform, used as an educational tool in practical nanophysics classes. Through virtual reality associated with multisensory renderings, students are immersed in the nanoworld, where they can interact in real time with a sample surface or an object using hearing, sight and touch. The role of each sensorial rendering in the understanding and control of the "approach-retract" interaction has been determined thanks to statistical studies obtained during the practical works. Finally, we present two extensions of this innovative tool: investigating nano effects in living organisms, and allowing the general public to gain a natural understanding of nanophenomena.
Architecture for high performance stereoscopic game rendering on Android
NASA Astrophysics Data System (ADS)
Flack, Julien; Sanderson, Hugh; Shetty, Sampath
2014-03-01
Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-image-based rendering, both in terms of frame rates and impact on battery consumption.
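For background on the view-separation idea, the standard screen-disparity relation for a point at depth z is sketched below (our illustration; the eye separation and convergence distance are assumptions, not the driver's actual parameters):

```python
# Screen disparity for a point at depth z, given eye separation e and a
# convergence (zero-parallax) plane at distance c: d = e * (1 - c / z).
# Points on the convergence plane have zero disparity; nearer points pop
# out of the screen, farther points recede behind it.
def disparity(z, eye_sep=0.065, convergence=2.0):
    return eye_sep * (1.0 - convergence / z)

for z in (1.0, 2.0, 4.0, 100.0):
    print(z, disparity(z))
```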
State of the "art": a taxonomy of artistic stylization techniques for images and video.
Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias
2013-05-01
This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
Web-based virtual tours have become a desirable and in-demand application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process and high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and their virtual scenes can be generated directly from photos, skipping the modeling process. But these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. Our web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space from a few snapshots of conventional photos.
Tangible display systems: bringing virtual surfaces into the real world
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2012-03-01
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
Real-Time View Correction for Mobile Devices.
Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc
2017-11-01
We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
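The core of any such view correction is the back-project/transform/reproject chain for RGB-D pixels; a minimal sketch follows (the intrinsics and camera motion below are made-up values, and the paper's inpainting stage is not shown):

```python
import numpy as np

# Reproject one depth pixel into a novel camera view. K is an assumed
# pinhole intrinsic matrix; (R, t) is the rigid motion from the source
# camera to the target camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def reproject(u, v, depth, R, t):
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project
    p_new = R @ p_cam + t                                     # move to target frame
    uvw = K @ p_new                                           # project
    return uvw[:2] / uvw[2]                                   # new pixel coordinates

print(reproject(320, 240, 1.5, np.eye(3), np.array([0.02, 0.0, 0.0])))
```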
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
Because of the huge computational cost of 3D medical data visualization, interactively exploring the interior of the data has long been a problem to be solved. In this paper, we present a novel approach to explore 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of 3D texture support in modern graphics cards, a virtual scanning probe is used to explore oblique clipping planes of medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. The system should be a valuable tool for anatomy education and for understanding medical images in medical research.
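Below is a CPU stand-in for the oblique-plane sampling that the paper performs with 3D texture hardware (the volume and plane axes are synthetic; a real probe would derive them from the widget pose):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sample an oblique clipping plane from a volume. The plane is defined
# by an origin and two in-plane axis vectors, all in voxel coordinates.
def oblique_slice(volume, origin, u_axis, v_axis, size=(256, 256)):
    us, vs = np.meshgrid(np.arange(size[0]), np.arange(size[1]), indexing="ij")
    pts = (origin[:, None, None]
           + us * u_axis[:, None, None] + vs * v_axis[:, None, None])
    return map_coordinates(volume, pts, order=1)  # trilinear interpolation

vol = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for CT/MR data
img = oblique_slice(vol, np.array([32.0, 0.0, 0.0]),
                    np.array([0.0, 0.25, 0.0]), np.array([0.1, 0.0, 0.25]))
```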
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. First, the method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous works.
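The smoothing pass reduces to a box filter over the binary light-visibility map; a minimal sketch follows (SciPy stands in for the paper's GPU convolution, and the kernel size is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Soften hard shadow boundaries: visibility holds 0 (shadowed) / 1 (lit);
# a box filter turns the hard edge into a graded soft-shadow factor.
def soften_shadows(visibility, kernel=7):
    return uniform_filter(visibility.astype(np.float32), size=kernel)

hard = np.zeros((128, 128), dtype=np.uint8)
hard[:, 64:] = 1                 # a hard shadow edge down the middle
soft = soften_shadows(hard)      # values in [0, 1] near the boundary
```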
Fast DRR generation for 2D to 3D registration on GPUs.
Tornai, Gábor János; Cserey, György; Pappas, Ion
2012-08-01
The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or depending on the application and hardware in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
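To show the integral at the heart of DRR generation, here is a deliberately simplified orthographic version (the paper casts perspective rays on the GPU; the volume and attenuation values below are synthetic):

```python
import numpy as np

# Orthographic DRR: integrate the attenuation coefficient along one axis
# of the CT volume and apply the Beer-Lambert law. This shows only the
# core line integral; a practical renderer casts perspective rays.
def drr_orthographic(mu, voxel_mm=1.0, axis=2):
    path_integral = mu.sum(axis=axis) * voxel_mm  # line integral of attenuation
    return np.exp(-path_integral)                 # transmitted intensity image

ct_mu = np.random.rand(128, 128, 72).astype(np.float32) * 0.02  # per-mm coefficients
image = drr_orthographic(ct_mu)
```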
Real-time dual-band haptic music player for mobile devices.
Hwang, Inwook; Lee, Hyeseon; Choi, Seungmoon
2013-01-01
We introduce a novel dual-band haptic music player for real-time simultaneous vibrotactile playback with music in mobile devices. Our haptic music player features a new miniature dual-mode actuator that can produce vibrations consisting of two principal frequencies and a real-time vibration generation algorithm that can extract vibration commands from a music file for dual-band playback (bass and treble). The algorithm uses a "haptic equalizer" and provides plausible sound-to-touch modality conversion based on human perceptual data. In addition, we present a user study carried out to evaluate the subjective performance (precision, harmony, fun, and preference) of the haptic music player, in comparison with the current practice of bass-band-only vibrotactile playback via a single-frequency voice-coil actuator. The evaluation results indicated that the new dual-band playback outperforms the bass-only rendering, also providing several insights for further improvements. The developed system and experimental findings have implications for improving the multimedia experience with mobile devices.
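A minimal sketch of the band-splitting step such a system needs (our illustration: a fourth-order Butterworth crossover at an assumed 250 Hz, not the authors' haptic equalizer):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Split a mono signal into bass and treble command bands for dual-band
# vibrotactile playback. The crossover frequency is an assumption.
def dual_band(signal, fs=44100, crossover_hz=250.0):
    lo = sosfilt(butter(4, crossover_hz, "low", fs=fs, output="sos"), signal)
    hi = sosfilt(butter(4, crossover_hz, "high", fs=fs, output="sos"), signal)
    return lo, hi

t = np.arange(44100) / 44100.0
music = np.sin(2*np.pi*80*t) + 0.3*np.sin(2*np.pi*2000*t)  # bass + treble test tone
bass_cmd, treble_cmd = dual_band(music)
```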
NAIMA as a solution for future GMO diagnostics challenges.
Dobnik, David; Morisset, Dany; Gruden, Kristina
2010-03-01
In the field of genetically modified organism (GMO) diagnostics, real-time PCR has been the method of choice for target detection and quantification in most laboratories. Despite its numerous advantages, however, the lack of a true multiplexing option may render real-time PCR less practical in the face of future GMO detection challenges such as the multiplicity and increasing complexity of new transgenic events, as well as the repeated occurrence of unauthorized GMOs on the market. In this context, we recently reported the development of a novel multiplex quantitative DNA-based target amplification method, named NASBA implemented microarray analysis (NAIMA), which is suitable for sensitive, specific and quantitative detection of GMOs on a microarray. In this article, the performance of NAIMA is compared with that of real-time PCR, the focus being their performances in view of the upcoming challenge to detect/quantify an increasing number of possible GMOs at a sustainable cost and affordable staff effort. Finally, we present our conclusions concerning the applicability of NAIMA for future use in GMO diagnostics.
Intelligent Multi-Media Presentation Using Rhetorical Structure Theory
2015-01-01
information repeatedly, on demand, and without imposing an additional manning burden. Virtual Advisers can be delivered in several ways: as a...up text which identifies what content is to be said in addition to how that content is to be emotionally expressed. </say> <say> Using real-time...development of new rendering engines. These toolkits provide additional common underlying functionality such as: pluggable audio (via OpenAL/JOAL
Development and Evaluation of Stereographic Display for Lung Cancer Screening
2008-12-01
burden. Application of GPUs – With the evolution of commodity graphics processing units (GPUs) for accelerating games on personal computers, over the...units, which are designed for rendering computer games, are readily available and can be programmed to perform the kinds of real-time calculations...575-581, 1994. 12. Anderson CM, Saloner D, Tsuruda JS, Shapeero LG, Lee RE. "Artifacts in maximum-intensity-projection display of MR angiograms
Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2
NASA Astrophysics Data System (ADS)
Makar, Robert J.; O'Toole, Brian E.
1998-07-01
An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics, Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data in the form of a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.
Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry
2012-12-01
Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
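The geometric relation underlying such stereoscopic reconstruction is depth from disparity, Z = f·B/d; a one-line sketch with illustrative camera parameters (not those of the miniature camera described in the paper):

```python
# Depth from stereo disparity: Z = focal_length * baseline / disparity.
# focal_px and baseline_mm below are assumed example values.
def depth_mm(disparity_px, focal_px=700.0, baseline_mm=4.0):
    return focal_px * baseline_mm / disparity_px

print(depth_mm(35.0))  # 80 mm working distance for a 35-pixel disparity
```

The relation also shows why the reported 1.5 mm accuracy is plausible at laparoscopic working distances: small baselines still yield measurable disparities when the scene is close to the camera.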
NASA Astrophysics Data System (ADS)
Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.
2009-02-01
We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, meanwhile the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
Color-rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-10-01
Human perception of material colors depends heavily on the nature of the light sources that are used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. On the basis of state-of-the-art colorimetric methods, we present a modern approach for the calculation of color-rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: first, we use the CIELAB color space; second, we apply a linearized Bradford transformation for chromatic adaptation; and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda [Proc. CGIV'06 Conf., Leeds, UK, pp. 283-286 (2006)] have shown for the cube model that diffuse interreflections can influence the CRI of a light source. We analyze how color-rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB-based renderer for reasons of accuracy of CRI calculations.
A client–server framework for 3D remote visualization of radiotherapy treatment space
Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.
2013-01-01
Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
NASA Astrophysics Data System (ADS)
Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.
2005-03-01
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
Nordahl, Rolf; Turchet, Luca; Serafin, Stefania
2011-09-01
We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
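A crude stand-in for the GRF estimation step: rectify the footstep recording and low-pass it to obtain a force-like envelope (the cutoff frequency and filter order are assumptions, not the authors' estimator):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Estimate a ground-reaction-force-like envelope from a footstep
# recording: full-wave rectification followed by a low-pass filter.
def grf_envelope(audio, fs=44100, cutoff_hz=50.0):
    rectified = np.abs(audio)
    sos = butter(2, cutoff_hz, "low", fs=fs, output="sos")
    return sosfilt(sos, rectified)
```

The envelope can then drive the excitation of a physical sound model, which is the control path the abstract describes.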
An application of the MPP to the interactive manipulation of stereo images of digital terrain models
NASA Technical Reports Server (NTRS)
Pol, Sanjay; Mcallister, David; Davis, Edward
1987-01-01
Massively Parallel Processor algorithms were developed for the interactive manipulation of flat shaded digital terrain models defined over grids. The emphasis is on real time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations followed by shading and a perspective projection to produce the right eye image. The surface is then rendered using a simple painter's algorithm for hidden surface removal. The left eye image is produced by rotating the surface 6 degrees about the viewer's y axis followed by a perspective projection and rendering of the image as described above. The left and right eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
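The left-eye construction described above, rotate about the viewer's y axis and reproject, is easy to sketch (a minimal pinhole projection; the point grid and projection distance are illustrative):

```python
import numpy as np

def project(points, d=5.0):
    """Perspective projection onto a plane at distance d from the eye."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([d * x / z, d * y / z], axis=1)

def stereo_pair(points, angle_deg=6.0):
    a = np.radians(angle_deg)
    rot_y = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
    right = project(points)           # right eye: unrotated view
    left = project(points @ rot_y.T)  # left eye: surface rotated 6 degrees
    return left, right

grid = np.random.rand(128 * 128, 3) + np.array([0, 0, 10.0])  # terrain in front of eye
left_img_pts, right_img_pts = stereo_pair(grid)
```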
A new approach to subjectively assess quality of plenoptic content
NASA Astrophysics Data System (ADS)
Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj
2016-09-01
Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. With image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of content consumption enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and consequently visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are not only suitable for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and the finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system, it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that strikes a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention on a virtual laparoscopic nephrectomy application.
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
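A toy cost model of the partitioning trade-off the paper studies (all constants are invented; the point is only the shape of the startup-versus-utilization balance, not the Paragon measurements):

```python
# g groups of P/g processors render n volumes. Smaller groups have less
# parallelization overhead and overlap more I/O, but more groups mean a
# longer pipeline to fill. All costs are illustrative time units.
def total_time(P, g, n, io=4.0, work=60.0, alpha=0.02):
    render = (work / (P / g)) * (1.0 + alpha * (P / g))  # overhead grows with group size
    cycle = io + render                                   # one group's per-volume time
    startup = cycle                                       # filling the first stage
    return startup + (n / g) * cycle                      # groups overlap across volumes

P, n = 64, 100
for g in (1, 2, 4, 8, 16, 32, 64):
    print(g, round(total_time(P, g, n), 1))
```

Running this shows the total first falling and then rising as g grows, which mirrors the paper's finding that an optimal partitioning exists.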
Real-time simulation of the nonlinear visco-elastic deformations of soft tissues.
Basafa, Ehsan; Farahmand, Farzam
2011-05-01
Mass-spring-damper (MSD) models are often used for real-time surgery simulation due to their fast response and fairly realistic deformation replication. An improved real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was developed and tested. The mechanical realism of conventional MSD models was improved using nonlinear springs and nodal dampers, while their high computational efficiency was maintained using an adapted implicit integration algorithm. New practical algorithms for model parameter tuning, collision detection, and simulation were incorporated. The model was able to replicate complex biological soft tissue mechanical properties under large deformations, i.e., the nonlinear and viscoelastic behaviors. The simulated response of the model, after tuning of its parameters to experimental data from a deer liver sample, closely tracked the reference data with high correlation and maximum relative differences of less than 5 and 10% for the tuning and testing data sets, respectively. Finally, implementation of the proposed model and algorithms in a graphical environment resulted in a real-time simulation with update rates of 150 Hz for interactive deformation and haptic manipulation, and 30 Hz for visual rendering. The proposed real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was efficient, realistic, and accurate in ex vivo testing. This model is a suitable candidate for testing in vivo during laparoscopic surgery.
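A one-node sketch of this model class: a cubic-stiffening spring with nodal damping, stepped with semi-implicit Euler as a simpler stand-in for the authors' adapted implicit integrator (all constants are assumptions):

```python
# One nonlinear spring-damper node: force f = -k1*x - k3*x**3 - c*v.
# Semi-implicit Euler updates velocity first, then position with the new
# velocity, which keeps stiff springs stable at moderate time steps.
def step(x, v, dt=1/150, m=0.01, k1=50.0, k3=5e4, c=0.2):
    f = -k1 * x - k3 * x**3 - c * v
    v = v + dt * f / m
    x = x + dt * v
    return x, v

x, v = 0.01, 0.0            # 1 cm initial indentation
for _ in range(150):        # one second at the paper's 150 Hz haptic rate
    x, v = step(x, v)
```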
Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery
NASA Astrophysics Data System (ADS)
Li, Xiqi; Wei, Ling; Dong, Xuechuan; Huang, Ping; Zhang, Chun; He, Yi; Shi, Guohua; Zhang, Yudong
2015-07-01
Most glaucoma surgeries involve creating new aqueous outflow pathways with the use of a small surgical instrument. This article reports a microscope-integrated, real-time, high-speed, swept-source optical coherence tomography (SS-OCT) system with a 1310-nm light source for glaucoma surgery. A special mechanism was designed to produce an adjustable system suitable for use in surgery. A two-graphics-processing-unit architecture was used to speed up the data processing and real-time volumetric rendering. The position of the surgical instrument can be monitored and measured using the microscope and a grid-inserted image of the SS-OCT. Finally, experiments were simulated to assess the effectiveness of this integrated system. Experimental results show that this system is a suitable positioning tool for glaucoma surgery.
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software package, called Celeris. Celeris is open source and requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition techniques can be considered as a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with some benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Texturing of continuous LOD meshes with the hierarchical texture atlas
NASA Astrophysics Data System (ADS)
Birkholz, Hermann
2006-02-01
For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame-rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle-meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real-time. This enables rendering with a constant number of triangles and thus with constant frame-rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property-mapping.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
3D image display of fetal ultrasonic images by thin shell
NASA Astrophysics Data System (ADS)
Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen
1999-05-01
Due to its convenience and non-invasiveness, ultrasound has become an essential tool for diagnosing fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real-time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering speed, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.
Image formation simulation for computer-aided inspection planning of machine vision systems
NASA Astrophysics Data System (ADS)
Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz
2017-06-01
In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real time graphics and high quality off-line-rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.
Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang
2015-04-01
Integrating the Visualization Toolkit with the interaction, bidirectional communication and graphics rendering capabilities provided by HTML5, we explored and experimented on the feasibility of remote medical image reconstruction and interaction purely on the Web. We propose a server-centric method which does not require downloading large medical datasets to local machines and avoids depending on network transmission pressure and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction into the Web seamlessly, and is applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.
Realistic soft tissue deformation strategies for real time surgery simulation.
Shen, Yunhe; Zhou, Xiangmin; Zhang, Nan; Tamma, Kumar; Sweet, Robert
2008-01-01
A volume-preserving deformation method (VPDM) is developed to complement the mass-spring method (MSM) and improve the deformation quality of the MSM for modeling soft tissue in surgical simulation. This method can also be implemented as a stand-alone model. The proposed VPDM satisfies Newton's laws of motion by obtaining the resultant vectors from an equilibrium condition. The proposed method has been tested in virtual surgery systems with haptic rendering demands.
2017-03-01
It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that
Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.
Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J
2011-11-01
To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.
Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko
2010-12-01
In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays rendered musculoskeletal human model on top of a live video image of the subject. The subject therefore has an impression that he/she sees the muscles with tension information through the cloth and skin. The main technical challenge lies in real-time estimation of muscle tension. Since existing algorithms using mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we develop a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system is applied to assisting sport training, and the user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation. Copyright © 2010 Elsevier Ltd. All rights reserved.
Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian
2016-11-01
Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs) are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m² cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
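A hedged Python sketch of the sparsity-promoting step is given below: LASSO regression on hypothetical PIV training data, where the surviving nonzero coefficients indicate candidate ADV placements. The data shapes and regularization weight are illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical training set: each row is one PIV snapshot flattened into
# candidate point-velocity measurements; y is the turbine quantity of
# interest (e.g., power coefficient) recorded for that snapshot.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 300))        # 500 snapshots, 300 candidate points
y = X[:, [12, 87, 205]] @ np.array([0.8, -0.3, 0.5]) + 0.05 * rng.normal(size=500)

model = Lasso(alpha=0.05).fit(X, y)
sensor_locations = np.nonzero(model.coef_)[0]   # nonzero weights = ADV sites
print(sensor_locations)
```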
Minoshima, Masafumi; Kikuchi, Kazuya
Fluorescent molecules are widely used as tools to directly visualize target biomolecules in vivo. Fluorescent probes have the advantage that a desired function can be conferred through rational design. Fluorescent probes for in vivo bone imaging must be delivered to bone tissue upon administration. Recently, a fluorescent probe for detecting osteoclast activity was developed. This probe combines acid-sensitive fluorescence, specific delivery to bone tissue, and durability against laser irradiation, which enabled real-time intravital imaging of bone-resorbing osteoclasts over long periods.
General purpose molecular dynamics simulations fully implemented on graphics processing units
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.
2008-05-01
Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides performance equivalent to that of a fast 30-processor-core distributed-memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss the implications for the future.
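As an illustration of the kind of pairwise force evaluation such codes parallelize, here is a minimal CPU-side Python sketch of all-pairs Lennard-Jones forces; a GPU version would evaluate each particle's row concurrently. This is a generic sketch, not the paper's implementation, which uses neighbor lists and runs entirely on the GPU.

```python
import numpy as np

def lj_forces(x, epsilon=1.0, sigma=1.0):
    """All-pairs Lennard-Jones forces for positions x of shape (n, 3).
    A GPU implementation evaluates each particle's row in parallel."""
    d = x[:, None, :] - x[None, :, :]            # pairwise displacements
    r2 = np.sum(d * d, axis=-1)
    np.fill_diagonal(r2, np.inf)                 # no self-interaction
    inv_r2 = sigma**2 / r2
    inv_r6 = inv_r2**3
    # F_i = sum_j 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) * (x_i - x_j) / r^2
    coeff = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2
    return np.sum(coeff[:, :, None] * d, axis=1)
```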
Dynamical diffraction imaging (topography) with X-ray synchrotron radiation
NASA Technical Reports Server (NTRS)
Kuriyama, M.; Steiner, B. W.; Dobbyn, R. C.
1989-01-01
In contrast to electron microscopy, which yields information on the location of features in small regions of materials, X-ray diffraction imaging can portray minute deviations from perfect crystalline order over larger areas. Synchrotron radiation-based X-ray optics technology uses a highly parallel incident beam to eliminate ambiguities in the interpretation of image details; scattering phenomena previously unobserved are now readily detected. Synchrotron diffraction imaging makes possible high-resolution, real-time, in situ observations of materials under pertinent environmental conditions.
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which substantially reduces the number of simulation degrees of freedom and thus makes it possible to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has high potential to significantly advance the realism of surgery simulators.
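To illustrate component (3), a minimal geometric multigrid V-cycle for a 1D Poisson model problem is sketched below in Python; the paper's solver operates on 3D elasticity systems. Grid sizes of the form 2^k + 1 and zero Dirichlet boundaries are assumed.

```python
import numpy as np

def v_cycle(u, f, n_smooth=3):
    """One multigrid V-cycle for the 1D Poisson problem -u'' = f on a grid
    of 2^k + 1 points with spacing h = 1/(len(u)-1) and zero boundaries."""
    h2 = (1.0 / (len(u) - 1)) ** 2

    def smooth(u, f, iters):
        for _ in range(iters):                   # weighted Jacobi smoothing
            u[1:-1] += 0.6 * (0.5 * (u[:-2] + u[2:] + h2 * f[1:-1]) - u[1:-1])
        return u

    u = smooth(u, f, n_smooth)
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)                          # residual of -u'' = f
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2
    r_c = r[::2].copy()                           # restrict by injection
    e_c = v_cycle(np.zeros_like(r_c), r_c, n_smooth)
    e = np.zeros_like(u)                          # prolongate the correction
    e[::2] = e_c
    e[1:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    return smooth(u + e, f, n_smooth)
```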
Research on Visualization of Ground Laser Radar Data Based on Osg
NASA Astrophysics Data System (ADS)
Huang, H.; Hu, C.; Zhang, F.; Xue, H.
2018-04-01
Three-dimensional (3D) laser scanning is an advanced technology integrating optics, mechanics, electronics, and computer technologies. It can scan the whole shape and form of spatial objects with high precision. With this technology, the point cloud data of a ground object can be collected directly and its structure created for rendering. A capable 3D rendering engine is needed to optimize and display the 3D model in order to meet the demands of real-time realistic rendering and scene complexity. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and it is therefore widely used in virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulated mesh data in .obj format, the platform implements display of 3D laser point clouds and triangulated meshes. Experiments show that the platform is of strong practical value, as it is easy to operate and provides good interaction.
Development of a virtual speaking simulator using Image Based Rendering.
Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I
2002-01-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness that they are unrealistic and cannot be controlled individually. To overcome these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and chroma-keying simultaneously. IBR enables the creation of realistic virtual environments in which photos taken with a digital camera are stitched panoramically. The use of chroma-keying puts virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
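As an illustration of the panoramic stitching step that IBR relies on, the following Python sketch uses OpenCV's high-level stitcher on a hypothetical set of overlapping photographs; the file names are placeholders, and the paper's actual pipeline is not reproduced.

```python
import cv2

# Hypothetical set of overlapping photos taken from one viewpoint.
images = [cv2.imread(f"audience_{i}.jpg") for i in range(8)]

# Feature matching, warping, and blending are handled internally.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```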
Gesture Interaction Browser-Based 3D Molecular Viewer.
Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela
2016-01-01
The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling, and panning the rendered model. The novelty of this approach is that the entire application is browser-based and does not require the installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, such as medicine or chemistry. For rendering various molecular geometries, our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with the Leap Motion controller, which allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better understanding of various translational bioinformatics problems in both biomedical research and education.
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
Physically Based Rendering in the Nightshade NG Visualization Platform
NASA Astrophysics Data System (ADS)
Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur
2015-01-01
This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011. Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together. Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public. Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
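A minimal Python/OpenCV sketch of the frame-to-frame stage is shown below: corner features are detected once and then propagated with pyramidal Lucas-Kanade optical flow. The video source is a placeholder, and the paper's invariant-feature thread and multithreaded synchronization are omitted.

```python
import cv2

# Hypothetical video source; any camera index or file path works here.
cap = cv2.VideoCapture("tabletop.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Detect distinctive corner features to track frame-to-frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow propagates features to this frame.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```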
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Tarun C.; Hayee, Fariah; Baldi, Andrea
Many energy storage materials undergo large volume changes during charging and discharging. The resulting stresses often lead to defect formation in the bulk, but less so in nanosized systems. Here, we capture in real time the mechanism of one such transformation—the hydrogenation of single-crystalline palladium nanocubes from 15 to 80 nm—to better understand the reason for this durability. First, using environmental scanning transmission electron microscopy, we monitor the hydrogen absorption process in real time with 3 nm resolution. Then, using dark-field imaging, we structurally examine the reaction intermediates with 1 nm resolution. The reaction proceeds through nucleation and growth of the new phase in corners of the nanocubes. As the hydrogenated phase propagates across the particles, portions of the lattice misorient by 1.5%, diminishing crystal quality. Once transformed, all the particles explored return to a pristine state. As a result, the nanoparticles' ability to remove crystallographic imperfections renders them more durable than their bulk counterparts.
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multivariate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces
NASA Astrophysics Data System (ADS)
Civet, François; Le Mouélic, Stéphane
2015-04-01
During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express, and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX, and HiRISE instruments allowed the computation of Digital Elevation Models with resolutions from hundreds of meters down to 1 meter per pixel, and corresponding orthoimages with resolutions from a few hundred meters down to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The current rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.
High-Performance 3D Articulated Robot Display
NASA Technical Reports Server (NTRS)
Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy
2011-01-01
In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography
Bradu, Adrian; Podoleanu, Adrian Gh.
2014-01-01
Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut through the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that can be used to produce en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generation of A-scans. PMID:24761303
Real-time improvement of continuous glucose monitoring accuracy: the smart sensor concept.
Facchinetti, Andrea; Sparacino, Giovanni; Guerra, Stefania; Luijf, Yoeri M; DeVries, J Hans; Mader, Julia K; Ellmerer, Martin; Benesch, Carsten; Heinemann, Lutz; Bruttomesso, Daniela; Avogaro, Angelo; Cobelli, Claudio
2013-04-01
Reliability of continuous glucose monitoring (CGM) sensors is key in several applications. In this work we demonstrate that real-time algorithms can render CGM sensors smarter by reducing their uncertainty and inaccuracy and improving their ability to alert for hypo- and hyperglycemic events. The smart CGM (sCGM) sensor concept consists of a commercial CGM sensor whose output enters three software modules, able to work in real time, for denoising, enhancement, and prediction. These three software modules were recently presented in the CGM literature, and here we apply them to the Dexcom SEVEN Plus continuous glucose monitor. We assessed the performance of the sCGM on data collected in two trials, each containing 12 patients with type 1 diabetes. The denoising module improves the smoothness of the CGM time series by an average of ∼57%; the enhancement module reduces the mean absolute relative difference from 15.1% to 10.3% and increases the proportion of value pairs falling in the A-zone of the Clarke error grid by 12.6%; and finally, the prediction module forecasts hypo- and hyperglycemic events an average of 14 min ahead of time. We have introduced and implemented the sCGM sensor concept. Analysis of data from 24 patients demonstrates that incorporation of suitable real-time signal processing algorithms for denoising, enhancement, and prediction can significantly improve the performance of CGM applications. This can be of great clinical impact for hypo- and hyperglycemic alert generation as well as in artificial pancreas devices.
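The denoising module itself is described in the cited CGM literature; as a hedged stand-in, the following Python sketch applies a scalar Kalman filter under a random-walk signal model, one common approach to CGM denoising. The noise variances are illustrative assumptions.

```python
import numpy as np

def kalman_denoise(y, q=0.1, r=4.0):
    """Scalar Kalman filter for a random-walk signal model.
    y: raw CGM readings; q: process variance; r: measurement variance.
    A minimal stand-in for the denoising module described in the paper."""
    x, p = y[0], r
    out = np.empty(len(y), dtype=float)
    for k, yk in enumerate(y):
        p = p + q                      # predict
        g = p / (p + r)                # Kalman gain
        x = x + g * (yk - x)           # update with the new reading
        p = (1 - g) * p
        out[k] = x
    return out
```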
Lightwave: An interactive estimation of indirect illumination using waves of light
NASA Astrophysics Data System (ADS)
Robertson, Michael
With the growth of computers and technology, so too has grown the desire to accurately recreate our world using computer graphics. However, our world is very complex and in many ways beyond our comprehension. Therefore, in order to perform this task, we must draw on multiple disciplines and areas of research, including physics, mathematics, optics, geology, and many more, to at least approximate the world around us. The applications of being able to do this are plentiful as well, including the use of graphics in entertainment such as movies and games, in science such as weather forecasting and simulation, in medicine with body scans, and in architecture, design, and many other areas. In order to recreate the world around us, an important task is to accurately recreate the way light travels and affects the objects we see. Rendering lighting has been a heavily researched area since the 1970s and has grown more sophisticated over the years. Until recent developments in technology, realistic lighting of scenes was only achievable offline, taking seconds to hours or more to create a single image; due to advances in graphics technology, however, realistic lighting can now be done in real time. An important aspect of realistic lighting is the inclusion of indirect illumination. To achieve real-time rendering with indirect illumination, however, we must make trade-offs between scientific accuracy and performance; as will be discussed later, full scientific accuracy may not be necessary after all.
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
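To make the TLED time-stepping concrete, here is a minimal Python sketch of the explicit central-difference update with a lumped mass matrix; the internal force computation, which TLED formulates in the total Lagrangian setting, is assumed to be supplied. As the abstract notes, stability constraints on the time step are the price of explicitness.

```python
import numpy as np

def central_difference_step(u, u_prev, f_ext, f_int, m, dt):
    """Explicit central-difference update used in TLED-style solvers.
    u, u_prev: current and previous nodal displacements (n, 3),
    f_ext, f_int: external and internal nodal forces (n, 3),
    m: lumped nodal masses (n,). Returns displacements at t + dt."""
    a = (f_ext - f_int) / m[:, None]     # diagonal mass: no linear solve
    return 2.0 * u - u_prev + dt * dt * a
```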
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
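A minimal Python sketch of the gate arithmetic follows, assuming independent basic events; the probabilities are hypothetical, and a real analysis would condition them on a common shaking intensity before combining facilities.

```python
def and_gate(probs):
    """Probability that all independent basic events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Probability that at least one independent basic event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical per-event failure probabilities at one shaking level:
# the facility fails if power fails OR both cooling pumps fail.
p_facility = or_gate([0.02, and_gate([0.10, 0.15])])
# Simultaneous loss of a primary and a backup facility in one earthquake,
# assuming conditional independence given the shaking.
p_both = and_gate([p_facility, 0.03])
print(p_facility, p_both)
```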
Lighting design for globally illuminated volume rendering.
Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects which are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
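The paper's automatic tone mapping operator is not specified in this abstract; as a hedged illustration of the general idea, here is a standard Reinhard-style global operator in Python, which compresses overexposed areas while retaining dark-area contrast. This is not the authors' operator.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Standard Reinhard global tone mapping (not the paper's operator).
    hdr: float array of linear radiance values, (H, W) or (H, W, 3)."""
    lum = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # scene's log-average
    scaled = key / log_avg * hdr
    return scaled / (1.0 + scaled)                 # maps [0, inf) to [0, 1)
```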
Real-time CT-video registration for continuous endoscopic guidance
NASA Astrophysics Data System (ADS)
Merritt, Scott A.; Rai, Lav; Higgins, William E.
2006-03-01
Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at more than 15 frames per second with minimal user intervention.
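A hedged Python sketch of the warm-started, per-frame optimization loop follows. The renderer `render_endoluminal` is a hypothetical user-supplied function, normalized cross-correlation stands in for the paper's similarity metric, and a general-purpose derivative-free optimizer replaces the precomputed-gradient scheme that gives the method its speed.

```python
import numpy as np
from scipy.optimize import minimize

def similarity(pose, video_frame, render_endoluminal):
    """Negative similarity between the video frame and the CT-based
    endoluminal view rendered at `pose` (6-DOF viewpoint parameters)."""
    rendered = render_endoluminal(pose)            # hypothetical renderer
    a = video_frame - video_frame.mean()
    b = rendered - rendered.mean()
    ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return -ncc

def track(frames, pose0, render_endoluminal):
    """Per-frame registration warm-started from the previous result."""
    pose = np.asarray(pose0, dtype=float)
    for frame in frames:
        res = minimize(similarity, pose, args=(frame, render_endoluminal),
                       method="Powell")            # local refinement per frame
        pose = res.x
        yield pose
```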
Objective assessment of MPEG-2 video quality
NASA Astrophysics Data System (ADS)
Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano
2002-07-01
The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption about the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. The neural-network approach therefore turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
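As a hedged stand-in for the circular back-propagation network, the following Python sketch trains a plain feedforward regressor to map objective stream features to subjective scores; the feature set and training data are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: rows are objective features extracted from
# the compressed stream (e.g., bit rate, motion-vector statistics, DCT
# energies); targets are mean opinion scores from subjective tests.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))
y = np.clip(3.0 + X @ rng.normal(size=8) * 0.3, 1.0, 5.0)

# A plain feedforward regressor stands in for the paper's circular
# back-propagation network; both learn the feature-to-score mapping.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(net.predict(X[:5]))
```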
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
An accurate percutaneous puncture is essential for the disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target might be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to the surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT using GPU processing. Volume hole-filling was performed at different processing time intervals by a trilinear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lies in the path of the PPT and might compromise surgical success. To enhance the volume visualization of the reconstructed structures, different rendering transfer functions were used. Results: Real-time ultrasound volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. When using the whole reconstructed volume, 8-15 frames/s were achieved, and 3 frames/s were reached when segmentation and detection of structures intersecting the PPT were introduced. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
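For the hole-filling step, trilinear interpolation can be sketched in Python with scipy, where order-1 sampling of a volume at fractional voxel coordinates is exactly trilinear; the volume and hole coordinates below are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trilinear_sample(volume, points):
    """Trilinearly interpolate `volume` at fractional voxel coordinates.
    volume: (Z, Y, X) array; points: (n, 3) array of (z, y, x) positions.
    order=1 makes map_coordinates perform trilinear interpolation."""
    return map_coordinates(volume, points.T, order=1, mode="nearest")

# Example: estimate values of empty voxels from the filled neighbourhood.
vol = np.random.rand(64, 64, 64)
holes = np.array([[10.5, 20.25, 30.75], [40.0, 41.5, 42.5]])
print(trilinear_sample(vol, holes))
```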
Optoacoustic imaging in five dimensions
NASA Astrophysics Data System (ADS)
Deán-Ben, X. L.; Gottschalk, Sven; Fehm, Thomas F.; Razansky, Daniel
2015-03-01
We report on an optoacoustic imaging system capable of acquiring volumetric multispectral optoacoustic data in real time. The system is based on simultaneous acquisition of optoacoustic signals from 256 different tomographic projections by means of a spherical matrix array. Thereby, volumetric reconstructions can be done at a high frame rate, limited only by the pulse repetition rate of the laser. The developed tomographic approach presents important advantages over previously reported systems that use scanning to attain volumetric optoacoustic data. First, dynamic processes, such as the biodistribution of optical biomarkers, can be monitored in the entire volume of interest. Second, out-of-plane and motion artifacts that could degrade the image quality when imaging living specimens can be avoided. Finally, real-time 3D performance can save the time required for experimental and clinical observations. The feasibility of optoacoustic imaging in five dimensions, i.e. real-time acquisition of volumetric datasets at multiple wavelengths, is reported. In this way, volumetric images of spectrally resolved chromophores are rendered in real time, thus offering unparalleled imaging performance among current bio-imaging modalities. This performance is subsequently showcased by video-rate visualization of in vivo hemodynamic changes in mouse brain and handheld visualization of blood oxygenation in deep human vessels. The newly discovered capacities open new prospects for translating optoacoustic technology into a highly performing imaging modality for biomedical research and clinical practice, with multiple applications envisioned, from cardiovascular and cancer diagnostics to neuroimaging and ophthalmology.
3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.
Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S
2015-10-20
Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Price, Jennifer; Harris, Philip; Hochstetler, Bruce; Guerra, Mark; Mendez, Israel; Healy, Matthew; Khan, Ahmed
2013-01-01
International Space Station Live! (ISSLive!) is a Web application that uses a proprietary commercial technology called Lightstreamer to push data across the Internet using the standard http port (port 80). ISSLive! uses the push technology to display real-time telemetry and mission timeline data from the space station in any common Web browser or Internet-enabled mobile device. ISSLive! is designed to fill a unique niche in the education and outreach areas by providing access to real-time space station data without a physical presence in the mission control center. The technology conforms to Internet standards, supports the throughput needed for real-time space station data, and is flexible enough to work on a large number of Internet-enabled devices. ISSLive! consists of two custom components: (1) a series of data adapters that reside server-side in the mission control center at Johnson Space Center, and (2) a set of public HTML pages that render the data pushed from the data adapters. A third component, the Lightstreamer server, is commercially available from a third party and acts as an intermediary between custom components (1) and (2). Lightstreamer also provides proprietary software libraries that are required to use the custom components. At the time of this reporting, this is the first use of Web-based push streaming technology in the aerospace industry.
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed that applies coupled estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation.
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-01-01
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed that applies coupled estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation. PMID:26205276
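The key coupling idea, that orbit error projected onto the line of sight is indistinguishable from clock bias, can be sketched in Python as follows; the sign convention and units are assumptions, and the actual estimator operates within the CORS network adjustment.

```python
import numpy as np

def effective_clock_bias(clock_bias, orbit_error, sat_pos, rcv_pos):
    """Project each satellite's orbit error onto its line of sight and
    fold it into the clock bias, as the coupled estimation exploits.
    clock_bias: (n,) seconds; orbit_error: (n, 3) metres;
    sat_pos, rcv_pos: ECEF positions in metres."""
    c = 299792458.0
    los = sat_pos - rcv_pos                            # receiver-to-satellite
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    range_error = np.sum(unit * orbit_error, axis=1)   # metres along the LOS
    return clock_bias + range_error / c                # absorbed orbit error
```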
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control-integrated statistical models are presented. In the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, these statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
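A minimal Python sketch of the first design is given below: a standard curve fit by simple linear regression, an amplification efficiency derived from its slope, and a two-group t-test comparing control and putative events. All numbers are hypothetical, and the final copy ratio assumes near-ideal amplification efficiency.

```python
import numpy as np
from scipy import stats

# Standard curve: Ct versus log10(template amount) for the transgene.
log_amount = np.log10([1e1, 1e2, 1e3, 1e4, 1e5])
ct_std = np.array([31.2, 27.9, 24.5, 21.1, 17.8])
slope, intercept, r, _, _ = stats.linregress(log_amount, ct_std)
efficiency = 10 ** (-1.0 / slope) - 1.0        # amplification efficiency

# Compare replicate Ct values of a single-copy control event with a
# putative event; the t-test guards against over-interpreting small shifts.
ct_control = np.array([24.6, 24.5, 24.7, 24.4])
ct_sample = np.array([23.5, 23.6, 23.4, 23.7])
t, p = stats.ttest_ind(ct_control, ct_sample)
copy_ratio = 2 ** (ct_control.mean() - ct_sample.mean())  # assumes E ~ 1
print(efficiency, copy_ratio, p)
```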
Perceptually relevant parameters for virtual listening simulation of small room acoustics
Zahorik, Pavel
2009-01-01
Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
The new generation of OpenGL support in ROOT
NASA Astrophysics Data System (ADS)
Tadel, M.
2008-07-01
OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.
A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.
Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio
2017-01-01
A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact/no-contact capability, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace. The device was designed to render constant-to-low-frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. The stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment, with 10 subjects, evaluated interaction forces in a virtual lift-and-hold task. Although performance differed between the two manipulation experiments, overall results show that participants controlled interaction forces better when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.
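The differential inverse kinematics is not spelled out in this abstract; as a hedged illustration, here is one generic damped-least-squares step in Python, where fk and jac are assumed user-supplied forward kinematics and Jacobian functions for the 3-RSR mechanism, not the authors' specific formulation.

```python
import numpy as np

def dls_ik_step(q, x_target, fk, jac, lam=0.05, step=1.0):
    """One damped-least-squares step of differential inverse kinematics.
    q: joint coordinates; fk(q): end-effector position; jac(q): Jacobian."""
    e = x_target - fk(q)                 # task-space error
    J = jac(q)
    # Damping (lam) keeps the step well-conditioned near singularities.
    dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(J.shape[0]), e)
    return q + step * dq
```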
Rapid update of discrete Fourier transform for real-time signal processing
NASA Astrophysics Data System (ADS)
Sherlock, Barry G.; Kakad, Yogendra P.
2001-10-01
In many identification and target recognition applications, the incoming signal has properties that render it amenable to analysis or processing in the Fourier domain. In such applications, however, it is usually essential that the identification or target recognition be performed in real time. An important constraint upon real-time processing in the Fourier domain is the time taken to perform the Discrete Fourier Transform (DFT). Ideally, a new Fourier transform should be obtained after the arrival of every new data point. However, the Fast Fourier Transform (FFT) algorithm requires on the order of N log2 N operations, where N is the length of the transform, and this usually makes calculating the transform for every new data point computationally prohibitive. In this paper, we develop an algorithm to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way reduces the computational order by a factor of log2 N, to order N per new sample. The algorithm can be modified to work in the presence of data window functions. This is a considerable advantage, because windowing is often necessary to reduce edge effects that occur because the implicit periodicity of the Fourier transform is not exhibited by the real-world signal. Versions are developed in this paper for the boxcar window and for the split triangular, Hanning, Hamming, and Blackman windows. Generalization of these results to 2D is also presented.
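For the boxcar-window case, the update reduces to the standard sliding-DFT recurrence, sketched below in Python and checked against a direct FFT; the windowed variants developed in the paper modify this recurrence.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new):
    """Update the length-N DFT when the oldest sample x_old leaves the
    window and x_new arrives: O(N) instead of O(N log N) per sample."""
    N = len(X)
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

# Check against a direct FFT of the shifted window.
rng = np.random.default_rng(3)
x = rng.normal(size=128)
x_new = rng.normal()
X = np.fft.fft(x)
X_updated = sliding_dft_update(X, x[0], x_new)
assert np.allclose(X_updated, np.fft.fft(np.append(x[1:], x_new)))
```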
Plugin free remote visualization in the browser
NASA Astrophysics Data System (ADS)
Tamm, Georg; Slusallek, Philipp
2015-01-01
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets, or smartphones. But the web is evolving beyond media delivery: interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state of the art of technologies which enable plugin-free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
ISS Radiation Shielding and Acoustic Simulation Using an Immersive Environment
NASA Technical Reports Server (NTRS)
Verhage, Joshua E.; Sandridge, Chris A.; Qualls, Garry D.; Rizzi, Stephen A.
2002-01-01
The International Space Station Environment Simulator (ISSES) is a virtual reality application that uses high-performance computing, graphics, and audio rendering to simulate the radiation and acoustic environments of the International Space Station (ISS). This CAVE application allows the user to maneuver to different locations inside or outside of the ISS and interactively compute and display the radiation dose at a point. The directional dose data is displayed as a color-mapped sphere that indicates the relative levels of radiation from all directions about the center of the sphere. The noise environment is rendered in real time over headphones or speakers and includes non-spatial background noise, such as air-handling equipment, and spatial sounds associated with specific equipment racks, such as compressors or fans. Changes can be made to equipment rack locations that produce changes in both the radiation shielding and system noise. The ISSES application allows for interactive investigation and collaborative trade studies between radiation shielding and noise for crew safety and comfort.
An Incremental Weighted Least Squares Approach to Surface Lights Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
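A minimal Python sketch of the per-node weighted least squares fit follows: each node center fits a low-degree polynomial to nearby samples with distance-based weights. The Gaussian weight, degree-1 basis, and bandwidth are illustrative assumptions; the incremental variant would update these fits as new images arrive rather than refitting in batch, and the paper's renderer blends the node fits nonlinearly on the GPU.

```python
import numpy as np

def wls_fit(centers, samples, values, bandwidth=0.3):
    """Fit a low-degree polynomial at each node center with weights that
    fall off with distance, as in a WLS representation of radiance.
    samples: (m, d) sample coordinates; values: (m,) observed radiance."""
    coeffs = []
    for c in centers:
        w = np.exp(-np.sum((samples - c) ** 2, axis=1) / bandwidth**2)
        A = np.column_stack([np.ones(len(samples)), samples - c])  # degree 1
        sw = np.sqrt(w)
        # Weighted least squares via square-root weighting of rows.
        beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * values, rcond=None)
        coeffs.append(beta)
    return np.array(coeffs)
```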
Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas
2017-03-01
Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview of PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
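As a rough illustration of beat-by-beat processing of this kind, the sketch below segments a PPG trace into beats at local minima (pulse onsets) and flags implausible beats. The minima-based segmentation and the duration/amplitude thresholds are illustrative assumptions, not the authors' contour-analysis rules.

    import numpy as np

    def segment_ppg(ppg: np.ndarray):
        """Split a PPG trace into beats at local minima (pulse onsets)."""
        mid = ppg[1:-1]
        onsets = np.where((mid < ppg[:-2]) & (mid <= ppg[2:]))[0] + 1
        return list(zip(onsets[:-1], onsets[1:]))

    def is_artifact(ppg, beat, fs, min_dur=0.33, max_dur=1.5):
        """Flag beats with physiologically implausible shape."""
        a, b = beat
        duration = (b - a) / fs              # seconds
        amplitude = ppg[a:b].max() - ppg[a]  # pulse height
        return not (min_dur <= duration <= max_dur) or amplitude <= 0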
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, as point spread function measurements confirm, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
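The beamforming stage at the front of such a pipeline can be written very compactly in software. The sketch below is a naive delay-and-sum for a zero-angle plane-wave transmit, with assumed array geometry and no apodization or sample interpolation; a full software pipeline like SUPRA is considerably more complete and runs this step on the GPU.

    import numpy as np

    def das_beamform(rf, elem_x, fs, c, img_x, img_z):
        """rf: (n_elements, n_samples) receive data from one transmit;
        elem_x: element x-positions; fs: sample rate; c: speed of sound."""
        image = np.zeros((len(img_z), len(img_x)))
        for iz, z in enumerate(img_z):
            for ix, x in enumerate(img_x):
                d_rx = np.sqrt((elem_x - x) ** 2 + z ** 2)  # pixel to element
                t = (z + d_rx) / c        # plane-wave transmit + receive path
                idx = np.round(t * fs).astype(int)
                ok = idx < rf.shape[1]    # drop samples past the recording
                image[iz, ix] = rf[ok, idx[ok]].sum()
        return image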
Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles
Narayan, Tarun C.; Hayee, Fariah; Baldi, Andrea; Leen Koh, Ai; Sinclair, Robert; Dionne, Jennifer A.
2017-01-01
Many energy storage materials undergo large volume changes during charging and discharging. The resulting stresses often lead to defect formation in the bulk, but less so in nanosized systems. Here, we capture in real time the mechanism of one such transformation—the hydrogenation of single-crystalline palladium nanocubes from 15 to 80 nm—to better understand the reason for this durability. First, using environmental scanning transmission electron microscopy, we monitor the hydrogen absorption process in real time with 3 nm resolution. Then, using dark-field imaging, we structurally examine the reaction intermediates with 1 nm resolution. The reaction proceeds through nucleation and growth of the new phase in corners of the nanocubes. As the hydrogenated phase propagates across the particles, portions of the lattice misorient by 1.5%, diminishing crystal quality. Once transformed, all the particles explored return to a pristine state. The nanoparticles' ability to remove crystallographic imperfections renders them more durable than their bulk counterparts. PMID:28091597
Electrical Capacitance Volume Tomography: Design and Applications
Wang, Fei; Marashdeh, Qussai; Fan, Liang-Shih; Warsito, Warsito
2010-01-01
This article reports recent advances and progress in the field of electrical capacitance volume tomography (ECVT). ECVT, developed from the two-dimensional electrical capacitance tomography (ECT), is a promising non-intrusive imaging technology that can provide real-time three-dimensional images of the sensing domain. Images are reconstructed from capacitance measurements acquired by electrodes placed on the outside boundary of the testing vessel. In this article, a review of progress on capacitance sensor design and applications to multi-phase flows is presented. The sensor shape, electrode configuration, and the number of electrodes that comprise three key elements of three-dimensional capacitance sensors are illustrated. The article also highlights applications of ECVT sensors on vessels of various sizes from 1 to 60 inches with complex geometries. Case studies are used to show the capability and validity of ECVT. The studies provide qualitative and quantitative real-time three-dimensional information of the measuring domain under study. Advantages of ECVT render it a favorable tool to be utilized for industrial applications and fundamental multi-phase flow research. PMID:22294905
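A common baseline reconstruction for capacitance tomography is linear back projection, sketched below. The calibration vectors and the precomputed sensitivity matrix are assumed inputs, and practical ECVT systems typically use more sophisticated (e.g. iterative) reconstruction algorithms.

    import numpy as np

    def lbp_reconstruct(cap, cap_empty, cap_full, S):
        """cap: measured inter-electrode capacitances, shape (n_pairs,);
        S: sensitivity matrix, shape (n_pairs, n_voxels)."""
        # Normalize each measurement between empty/full calibrations.
        c_norm = (cap - cap_empty) / (cap_full - cap_empty)
        # Back-project: sensitivity-weighted sum, then per-voxel scaling.
        g = S.T @ c_norm
        g /= S.sum(axis=0)
        return np.clip(g, 0.0, 1.0)   # normalized permittivity per voxel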
Binary-space-partitioned images for resolving image-based visibility.
Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J
2004-01-01
We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
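The visibility-sorting behaviour that a BSPI inherits from BSP trees is the classic painter's-algorithm traversal: at each node, render the half-space away from the eye first. A minimal 3D sketch follows (plain Python; the node layout is a hypothetical simplification, and the BSPI encodes the equivalent structure in 2D image space).

    class BSPNode:
        def __init__(self, plane, front=None, back=None, faces=()):
            self.plane = plane            # (normal, offset) of the partition
            self.front, self.back = front, back
            self.faces = list(faces)      # geometry lying on the plane

    def back_to_front(node, eye):
        """Return faces in back-to-front order for the given eye point."""
        if node is None:
            return []
        normal, offset = node.plane
        side = sum(n * e for n, e in zip(normal, eye)) - offset
        near, far = (node.front, node.back) if side >= 0 else \
                    (node.back, node.front)
        return back_to_front(far, eye) + node.faces + \
               back_to_front(near, eye)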
Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles
Narayan, Tarun C.; Hayee, Fariah; Baldi, Andrea; ...
2017-01-16
Many energy storage materials undergo large volume changes during charging and discharging. The resulting stresses often lead to defect formation in the bulk, but less so in nanosized systems. Here, we capture in real time the mechanism of one such transformation—the hydrogenation of single-crystalline palladium nanocubes from 15 to 80 nm—to better understand the reason for this durability. First, using environmental scanning transmission electron microscopy, we monitor the hydrogen absorption process in real time with 3 nm resolution. Then, using dark-field imaging, we structurally examine the reaction intermediates with 1 nm resolution. The reaction proceeds through nucleation and growth of the new phase in corners of the nanocubes. As the hydrogenated phase propagates across the particles, portions of the lattice misorient by 1.5%, diminishing crystal quality. Once transformed, all the particles explored return to a pristine state. As a result, the nanoparticles' ability to remove crystallographic imperfections renders them more durable than their bulk counterparts.
Choi, Dong-hak; Hiro-Oka, Hideaki; Shimizu, Kimiya; Ohbayashi, Kohji
2012-01-01
An ultrafast frequency domain optical coherence tomography system was developed at A-scan rates between 2.5 and 10 MHz, a B-scan rate of 4 or 8 kHz, and volume-rates between 12 and 41 volumes/second. In the case of the worst duty ratio of 10%, the averaged A-scan rate was 1 MHz. Two optical demultiplexers at a center wavelength of 1310 nm were used for linear-k spectral dispersion and simultaneous differential signal detection at 320 wavelengths. The depth-range, sensitivity, sensitivity roll-off by 6 dB, and axial resolution were 4 mm, 97 dB, 6 mm, and 23 μm, respectively. Using FPGAs for FFT and a GPU for volume rendering, a real-time 4D display was demonstrated at a rate up to 41 volumes/second for an image size of 256 (axial) × 128 × 128 (lateral) voxels. PMID:23243560
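The core per-A-scan computation in frequency-domain OCT is an FFT of the k-linear spectral interferogram, which is what the FPGAs above perform before the GPU renders the volume. A minimal numpy sketch, assuming fringes already sampled linearly in k (as the optical demultiplexer provides directly):

    import numpy as np

    def ascans_from_spectra(fringes: np.ndarray) -> np.ndarray:
        """fringes: (n_ascans, n_wavelengths) spectral interferograms."""
        fringes = fringes - fringes.mean(axis=1, keepdims=True)  # remove DC
        window = np.hanning(fringes.shape[1])      # suppress FFT sidelobes
        depth = np.fft.fft(fringes * window, axis=1)
        half = fringes.shape[1] // 2               # keep positive depths only
        return 20 * np.log10(np.abs(depth[:, :half]) + 1e-12)  # dB scale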
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism System using a time-varying dataset from selected JPL applications.
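The slice-based compositing that 3D texture hardware performs can be mimicked in a few lines of software. The sketch below classifies each axis-aligned slice through assumed color/opacity lookup tables and blends front to back with the standard "over" operator; the hardware version does the same work per fragment.

    import numpy as np

    def composite_slices(volume, tf_rgb, tf_alpha):
        """volume: (depth, h, w) uint8 scalars, front-most slice first;
        tf_rgb: (256, 3) color LUT; tf_alpha: (256,) opacity LUT."""
        h, w = volume.shape[1:]
        out_rgb = np.zeros((h, w, 3))
        out_a = np.zeros((h, w, 1))
        for sl in volume:
            rgb, a = tf_rgb[sl], tf_alpha[sl][..., None]
            out_rgb += (1.0 - out_a) * a * rgb   # front-to-back "over"
            out_a += (1.0 - out_a) * a
            if out_a.min() > 0.99:               # image nearly opaque:
                break                            # skip remaining slices
        return out_rgb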
Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device
NASA Astrophysics Data System (ADS)
Färber, Matthias; Heller, Julika; Handels, Heinz
2007-03-01
The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. The training of this procedure is usually done on the patient guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat and original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data consisting of CT and label data and surface models of relevant structures is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data has been used to generate a virtual training body. Several users with different medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. Especially, the restriction of transversal needle movement together with rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.
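One simple way to turn volume data into a resisting force, in the spirit of haptic volume rendering, is to push back along the local intensity gradient. The sketch below uses central differences on the CT volume; the stiffness constant and nearest-voxel sampling are illustrative assumptions, and the approach described above additionally exploits label data and the 6DOF rotation constraints.

    import numpy as np

    def haptic_force(ct, pos, k=0.05):
        """ct: 3D CT volume (float-typed to avoid unsigned wrap-around);
        pos: needle-tip position in voxel coordinates."""
        ct = np.asarray(ct, dtype=float)
        i, j, l = np.round(pos).astype(int)
        grad = 0.5 * np.array([
            ct[i + 1, j, l] - ct[i - 1, j, l],
            ct[i, j + 1, l] - ct[i, j - 1, l],
            ct[i, j, l + 1] - ct[i, j, l - 1],
        ])
        return -k * grad   # resist motion into denser structures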
Reflective and refractive objects for mixed reality.
Knecht, Martin; Traxler, Christoph; Winklhofer, Christoph; Wimmer, Michael
2013-04-01
In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. These objects are special from the point of view of light interaction, as they both reflect and refract incident rays. Therefore they may cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large number of virtual point lights (VPLs) and is therefore not desirable due to real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method handles more accurate reflections compared to simple cubemaps by using impostors. Another method is able to calculate two refractions in real-time, and the third method uses small quads to create caustic effects. Our proposed method replaces parts in light paths that belong to reflective or refractive objects using these three methods and thus tightly integrates into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression for MR scenarios.
Immersive Photography Renders 360 degree Views
NASA Technical Reports Server (NTRS)
2008-01-01
An SBIR contract through Langley Research Center helped Interactive Pictures Corporation, of Knoxville, Tennessee, create an innovative imaging technology. This technology is a video imaging process that allows real-time control of live video data and can provide users with interactive, panoramic 360-degree views. The camera system can see in multiple directions, provide up to four simultaneous views, each with its own tilt, rotation, and magnification, yet it has no moving parts, is noiseless, and can respond faster than the human eye. In addition, it eliminates the distortion caused by a fisheye lens, and provides a clear, flat view of each perspective.
Anesthesiology training using 3D imaging and virtual reality
NASA Astrophysics Data System (ADS)
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
1996-04-01
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
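The first scheme is easy to reproduce in outline: quantize each 8-bit colour channel, then deflate colour and depth together. The sketch below is an approximation under stated assumptions (a simple bit-shift colour reduction and zlib for both components), not the authors' exact encoder.

    import zlib
    import numpy as np

    def compress_frame(color, depth, bits=4):
        """color: (h, w, 3) uint8; depth: (h, w) uint16."""
        reduced = (color >> (8 - bits)).astype(np.uint8)  # colour reduction
        payload = reduced.tobytes() + depth.tobytes()
        return zlib.compress(payload, level=6)

    # Smooth synthetic frame; captured 3D-video frames compress similarly.
    y, x = np.mgrid[0:480, 0:640]
    color = np.stack([x % 256, y % 256, (x + y) % 256], -1).astype(np.uint8)
    depth = (x + y).astype(np.uint16)
    blob = compress_frame(color, depth)
    print((color.nbytes + depth.nbytes) / len(blob))  # compression ratio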
Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan
2015-09-01
Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. The deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh; the former is used for graphics rendering and collision, while the latter is used for deformation. The cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along the cutting tool trajectory to form cut surfaces. Next, virtual tetrahedrons containing more than one connected real tetrahedral fragment are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, the embedding relationship between the real and virtual tetrahedral meshes is updated. The co-rotational linear finite element method is used for deformation. Cutting and collision are processed by the CPU, while deformation is carried out by the GPU using OpenCL. Efficiency of the GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. Effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces where the traditional minimal element creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with a constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. The GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real-time performance and a 100-200 Hz simulation rate are achieved for the liver model with 3,101 tetrahedrons. Major bottlenecks for simulation efficiency are cutting, collision processing and CPU-GPU data transfer. Future work needs to improve these areas.
General approach to boat simulation in virtual reality systems
NASA Astrophysics Data System (ADS)
Aranov, Vladislav Y.; Belyaev, Sergey Y.
2002-02-01
The paper is dedicated to real-time simulation of sport boats, particularly a kayak and a high-speed skimming boat, for training purposes. Such training is topical, since kayaking and riding a high-speed skimming boat are both extreme sports. Participating in such competitions puts sportsmen in danger, particularly due to rapids, waterfalls, different water streams, and other obstacles. In order to make the simulation realistic, it is necessary to calculate data for at least 30 frames per second. These calculations may take no more than 5% of CPU time, because the very time-consuming 3D rendering process takes the remaining 95%. This paper describes an approach for creating minimal boat simulator models that satisfy the mentioned requirements. Moreover, this approach can be used for other watercraft models of this kind.
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
The Role of Presented Objects in Deriving Color Preference Criteria from Psychophysical Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royer, Michael P.; Wei, Minchen
Of the many “components” of a color rendering measure, one is perhaps the most important: the set of color samples (spectral reflectance functions) that are employed as a standardized means of evaluating and rating a light source. At the same time, a standardized set of color samples can never apply perfectly to a real space or a real set of observed objects, meaning there will always be some level of mismatch between the predicted and observed color shifts. This mismatch is important for lighting specifiers to consider, but even more critical for experiments that seek to evaluate the relationship between color rendering measures and human perception. This article explores how the color distortions of three possible experimental object sets compare to the color distortions predicted using the color evaluation samples of IES TM-30-15 (TM-30). The experimental object sets include those from Royer and colleagues [2016], a set of produce (10 fruits and vegetables), and the X-rite Color Checker Classic. The differences are traced back to properties of the samples sets, such as the coverage of color space, average chroma level, and specific spectral features. The consequence of the differences, that the visual evaluation is based on color distortions that are substantially different from what is predicted, can lead to inaccurate criteria or models of a given perception, such as preference. To minimize the error in using criteria or models when specifying color rendering attributes for a given application, the criteria or models should be developed using a set of experimental objects that matches the typical objects of the application as closely as possible. Alternatively, if typical objects of an application cannot be reasonably determined, an object set that matches the distortions predicted by TM-30 as close as possible is likely to provide the most meaningful results.
cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets
Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan
2017-01-01
In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets on the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprising several billion atoms. The cellVIEW system integrates acceleration techniques to allow for real-time graphics performance of 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is two-fold: accelerating the rendering and reducing visual clutter. The main part of our datasets is made out of macromolecules, but it also comprises nucleic acid strands which are stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been directly implemented inside a game engine. We chose to rely on a third party engine to reduce software development work-load and to make bleeding-edge graphics techniques more accessible to the end-users. To our knowledge cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes on the atomic level and is freely available to use and extend. PMID:29291131
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can have a feedback function for a surgical robot. Due to the difficulty in utilizing real human organs in the experiment, the cyberspace that features the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model and is suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in a real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking control performance for torque trajectories from a virtual slave can be successfully achieved.
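The torque-tracking loop can be pictured as a textbook discrete PID controller, as sketched below; the gains and the 1 kHz update rate are illustrative assumptions rather than the values identified in the paper.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, desired, measured):
            # Standard discrete PID: proportional, integral, derivative terms.
            err = desired - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)      # 1 kHz haptic loop
    command = pid.update(desired=0.8, measured=0.55)  # e.g. torque in N·m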
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television, avoiding adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating the 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from the disparity and the multiple baseline method that is used to improve the precision of depth estimation are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
Web Extensible Display Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slominski, Ryan; Larrieu, Theodore L.
Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG) following EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real-time, allowing existing screens to work without modification. Alternatively, the familiar drag and drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
Mapping the hinterland: Data issues in open science.
Grand, Ann; Wilkinson, Clare; Bultitude, Karen; Winfield, Alan F T
2016-01-01
Open science is a practice in which the scientific process is shared completely and in real time. It offers the potential to support information flow, collaboration and dialogue among professional and non-professional participants. Using semi-structured interviews and case studies, this research investigated the relationship between open science and public engagement. This article concentrates on three particular areas of concern that emerged: first, how to effectively contextualise and narrate information to render it accessible, as opposed to simply available; second, concerns about data quantity and quality; and third, concerns about the skills required for effective contextualisation, mapping and interpretation of information. © The Author(s) 2014.
Transient thermal camouflage and heat signature control
NASA Astrophysics Data System (ADS)
Yang, Tian-Zhi; Su, Yishu; Xu, Weikai; Yang, Xiao-Dong
2016-09-01
Thermal metamaterials have been proposed to manipulate heat flux as a new way to cloak or camouflage objects in the infrared world. To date, however, thermal metamaterials only operate in the steady-state and exhibit detectable, transient heat signatures. In this letter, the theoretical basis for a thermal camouflaging technique with controlled transient diffusion is presented. This technique renders an object invisible in real time. More importantly, the thermal camouflaging device instantaneously generates a pre-designed heat signature and behaves as a perfect thermal illusion device. A metamaterial coating with homogeneous and isotropic thermal conductivity, density, and volumetric heat capacity was fabricated and very good camouflaging performance was achieved.
New virtual laboratories presenting advanced motion control concepts
NASA Astrophysics Data System (ADS)
Goubej, Martin; Krejčí, Alois; Reitinger, Jan
2015-11-01
The paper deals with the development of a software framework for rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution that can run on different target platforms including laptops, tablets and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphics editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.
Mapping the hinterland: Data issues in open science
Grand, Ann; Wilkinson, Clare; Bultitude, Karen; Winfield, Alan F. T.
2016-01-01
Open science is a practice in which the scientific process is shared completely and in real time. It offers the potential to support information flow, collaboration and dialogue among professional and non-professional participants. Using semi-structured interviews and case studies, this research investigated the relationship between open science and public engagement. This article concentrates on three particular areas of concern that emerged: first, how to effectively contextualise and narrate information to render it accessible, as opposed to simply available; second, concerns about data quantity and quality; and third, concerns about the skills required for effective contextualisation, mapping and interpretation of information. PMID:24769860
Transformation-aware perceptual image metric
NASA Astrophysics Data System (ADS)
Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
2016-09-01
Predicting human visual perception has several applications such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications, such as comparison of nonaligned images, where transformations cause threshold elevation, detection of salient transformations, and a model of perceived motion parallax. Applications of our approach are a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
A Framework for Realistic Modeling and Display of Object Surface Appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.
With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.
NASA Technical Reports Server (NTRS)
Simpson, James J.; Harkins, Daniel N.
1993-01-01
Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archival, browsing, ordering, and distribution of satellite data based upon X Window, high-bandwidth networks, and digital image rendering techniques. SSABLE provides for automatically constructing relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) has a bitmapped display (monochrome or greater); 2) is running the X Window system; and 3) is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia typically are 20-25 s.
Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R
2016-01-04
Patients with downbeat nystagmus (DBN) syndrome suffer from oscillopsia, which leads to an unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. The patients were sitting in front of a computer screen with the head fixed on a chin rest. The eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested the visual acuity with a fixed Landolt C (static) and during a real-time feedback-driven condition (dynamic) in gaze straight ahead and (20°) sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparison. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD +/- 3.1 y). The mean slow-phase velocity (SPV) was moderate during gaze straight ahead (1.44°/s, SD +/- 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and feedback-driven conditions. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive real-time computer-based visual feedback compensates for the SPV in DBN. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
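The feedback rule itself is a one-liner: move the optotype with the measured slow-phase velocity so that its retinal image stays still. A minimal sketch, with positions in degrees of visual angle and an illustrative velocity of the magnitude reported above for sideward gaze:

    def update_target(x, slow_phase_velocity, dt):
        """Shift the Landolt C by the eye's slow-phase drift each frame."""
        return x + slow_phase_velocity * dt

    x = 0.0
    for _ in range(60):                  # one second at a 60 Hz display
        x = update_target(x, slow_phase_velocity=3.4, dt=1 / 60)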
Booher, Nicholas J.; Carpenter, Sara C. D.; Sebra, Robert P.; Wang, Li; Salzberg, Steven L.; Leach, Jan E.
2015-01-01
Pathogen-injected, direct transcriptional activators of host genes, TAL (transcription activator-like) effectors play determinative roles in plant diseases caused by Xanthomonas spp. A large domain of nearly identical, 33–35 aa repeats in each protein mediates DNA recognition. This modularity makes TAL effectors customizable and thus important also in biotechnology. However, the repeats render TAL effector (tal) genes nearly impossible to assemble using next-generation, short reads. Here, we demonstrate that long-read, single molecule real-time (SMRT) sequencing solves this problem. Taking an ensemble approach to first generate local, tal gene contigs, we correctly assembled de novo the genomes of two strains of the rice pathogen X. oryzae completed previously using the Sanger method and even identified errors in those references. Sequencing two more strains revealed a dynamic genome structure and a striking plasticity in tal gene content. Our results pave the way for population-level studies to inform resistance breeding, improve biotechnology and probe TAL effector evolution. PMID:27148456
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Ebrahimi, Touradj
2014-03-01
Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure gaze positions in real time. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.
Real-time range acquisition by adaptive structured light.
Koninckx, Thomas P; Van Gool, Luc
2006-03-01
The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
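The hierarchical sub-band transform can be reduced to one "butterfly" step: whenever two occupied cells merge, their colours combine into a weighted DC term passed up the octree and one AC coefficient that is entropy-coded. A sketch of that step in the standard region-adaptive Haar formulation (variable names are ours):

    import numpy as np

    def raht_merge(c1, w1, c2, w2):
        """c1, c2: colour vectors of two sibling cells;
        w1, w2: their weights (number of points they cover)."""
        a, b, s = np.sqrt(w1), np.sqrt(w2), np.sqrt(w1 + w2)
        dc = (a * c1 + b * c2) / s   # carried up to the next level
        ac = (a * c2 - b * c1) / s   # quantized and arithmetic-coded
        return dc, ac, w1 + w2

    dc, ac, w = raht_merge(np.array([120.0, 64.0, 32.0]), 3,
                           np.array([118.0, 70.0, 30.0]), 1)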
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
NASA Astrophysics Data System (ADS)
Bergholt, Mads Sylvest; Zheng, Wei; Lin, Kan; Ho, Khek Yu; Yeoh, Khay Guan; Teh, Ming; So, Jimmy Bok Yan; Huang, Zhiwei
2012-01-01
Raman spectroscopy is a vibrational analytic technique sensitive to the changes in biomolecular composition and conformations occurring in tissue. With our most recent development of near-infrared (NIR) Raman endoscopy integrated with diagnostic algorithms, in vivo real-time Raman diagnostics has been realized under multimodal wide-field imaging (i.e., white-light reflectance (WLR), narrow-band imaging (NBI), autofluorescence imaging (AFI)) modalities. A selection of 177 patients who previously underwent Raman endoscopy (n=2510 spectra) was used to render two robust models based on partial least squares discriminant analysis (PLS-DA) for esophageal and gastric cancer diagnosis. The Raman endoscopy technique was validated prospectively on 4 new gastric and esophageal patients for in vivo tissue diagnosis. The Raman endoscopic technique could identify esophageal cancer in vivo with a sensitivity of 88.9% (8/9) and specificity of 100.0% (11/11) and gastric cancers with a sensitivity of 77.8% (14/18) and specificity of 100.0% (13/13). This study realizes for the first time image-guided Raman endoscopy for real-time in vivo diagnosis of malignancies in the esophagus and stomach at the biomolecular level.
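PLS-DA itself amounts to partial least squares regression against class labels followed by a threshold. A sketch with the third-party scikit-learn library on stand-in data; the random spectra, component count and 0.5 threshold are placeholders, not the study's tuned model.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    X = np.random.rand(200, 1024)        # preprocessed Raman spectra
    y = np.random.randint(0, 2, 200)     # 1 = cancer, 0 = normal

    plsda = PLSRegression(n_components=8)
    plsda.fit(X, y)                      # regress labels on spectra
    scores = plsda.predict(X).ravel()    # continuous output per spectrum
    pred = (scores > 0.5).astype(int)    # threshold into classes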
Plenoptic layer-based modeling for image based rendering.
Pearson, James; Brookes, Mike; Dragotti, Pier Luigi
2013-09-01
Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces
NASA Astrophysics Data System (ADS)
Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard
2014-05-01
In this paper we present an interactive 3D visualization tool for scientific analysis and planning of planetary missions. At the moment scientists have to look at individual camera images separately. There is no tool to combine them in three dimensions and look at them seamlessly as a geologist would do (by walking backwards and forwards resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g. rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important. Therefore the material properties of the rock surfaces will be considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphics Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction on multiple scales, scientists can also perform various measurements, i.e. the geo-coordinates of a selected point or the distance between two surface points. Rover or other models can be placed into the scene and snapped onto certain locations of the terrain. These are important features to support the planning of rover paths. In addition, annotations can be placed directly into the 3D scene, which also serve as landmarks to aid navigation. The presented visualization and planning tool is a valuable asset for scientific analysis of planetary mission data. It complements traditional methods by giving access to an interactive virtual 3D reconstruction, which is realistically rendered. Representative examples and further information about the interactive 3D visualization tool can be found on the FP7-SPACE Project PRoViDE web page http://www.provide-space.eu/interactive-virtual-3d-tool/. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 'PRoViDE'.
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) data derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the client, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid onto a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
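A minimal, self-contained sketch of the enquirer step described above: reading cached job records from a local database and publishing them as an XML snapshot for web clients. The table layout and element names are illustrative; the RTM's actual schema is not given in the text.

```python
# Hedged sketch of the "enquirer": dump cached job records to an XML snapshot
# so that web clients never touch the database directly. Schema is invented.
import sqlite3
import xml.etree.ElementTree as ET

def publish_snapshot(conn, out_path="jobs.xml"):
    rows = conn.execute("SELECT job_id, state, vo, ce_queue FROM jobs")
    root = ET.Element("jobs")
    for job_id, state, vo, ce_queue in rows:
        job = ET.SubElement(root, "job", id=str(job_id))
        ET.SubElement(job, "state").text = state      # Scheduled/Waiting/Running/Done
        ET.SubElement(job, "vo").text = vo            # Virtual Organization
        ET.SubElement(job, "ce_queue").text = ce_queue or "unknown"
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

# Self-contained demo with an in-memory stand-in for the RTM database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (job_id TEXT, state TEXT, vo TEXT, ce_queue TEXT)")
conn.execute("INSERT INTO jobs VALUES ('j-001', 'Running', 'cms', 'ce01.example/long')")
publish_snapshot(conn)
```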
Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)
NASA Astrophysics Data System (ADS)
Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.
2006-12-01
Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas in real time. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo data from Iraq, P.R. China, and other remote locations, because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine geophysicists, and planetary scientists. The strength of our system is that it combines interactive rendering with interactive mapping and measurement of features observed in topographic and texture data. Comparison with commercially available software indicates that our system improves mapping accuracy and efficiency. More importantly, it enables Earth scientists to rapidly achieve a deeper level of understanding of remotely sensed data, as observations can be made that are not possible with existing systems.
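The following is a hedged sketch of quadtree-based, view-dependent refinement of the kind described above: a tile is subdivided while its geometric error, projected to screen space, exceeds a pixel tolerance. The error metric, thresholds, and tile sizes are illustrative and not taken from the RIMS implementation.

```python
# Illustrative quadtree LOD selection: refine tiles near the camera, keep
# coarse tiles far away. Error model and constants are assumptions.
from dataclasses import dataclass

@dataclass
class Tile:
    x: float; y: float; size: float; geom_error: float  # meters

def visible_tiles(tile, cam_x, cam_y, tau_px=2.0, screen_w=1920, fov_w=1.0):
    """Collect tiles whose projected geometric error is below tau_px pixels."""
    cx, cy = tile.x + tile.size / 2, tile.y + tile.size / 2
    dist = max(((cx - cam_x) ** 2 + (cy - cam_y) ** 2) ** 0.5, 1e-6)
    px_error = tile.geom_error / dist * screen_w / fov_w
    if px_error <= tau_px or tile.size <= 3.0:   # 3 m = finest data resolution
        return [tile]
    half, tiles = tile.size / 2, []
    for dx in (0, half):
        for dy in (0, half):
            child = Tile(tile.x + dx, tile.y + dy, half, tile.geom_error / 2)
            tiles += visible_tiles(child, cam_x, cam_y, tau_px, screen_w, fov_w)
    return tiles

root = Tile(0.0, 0.0, 800_000.0, 4000.0)  # 800 km square coverage
print(len(visible_tiles(root, 400_000.0, 400_000.0)))
```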
NASA Astrophysics Data System (ADS)
Leewe, R.; Shahriari, Z.; Moallem, M.
2017-10-01
Control of the natural resonance frequency of an RF cavity is essential for accelerator structures due to the cavity's high sensitivity to internal and external vibrations and the dependence of the resonant frequency on temperature changes. Due to the relatively high radio frequencies involved (MHz to GHz), direct measurement of the resonant frequency for real-time control is not possible with conventional microcontroller hardware. So far, all operational cavities have been tuned using phase comparison techniques. The temperature-dependent phase measurements render this technique labor- and time-intensive. To eliminate the phase measurement, reduce man-hours and speed up cavity start-up time, this paper presents a control scheme that relies solely on the reflected power measurement. The control algorithm for the nonlinear system is developed through Lyapunov's method. The controller stabilizes the resonance frequency of the cavity using a nonlinear control algorithm in combination with a gradient estimation method. Experimental results of the proposed system on a test cavity show that the resonance frequency can be tuned to its optimum operating point while the start-up time of a single cavity and the accompanying man-hours are significantly decreased. A test result of the fully commissioned control system on one of TRIUMF's DTL tanks verifies its performance under real environmental conditions.
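The Lyapunov-based control law itself is not reproduced in the abstract; the sketch below only illustrates the central idea of tuning on reflected power alone, using a finite-difference gradient estimate on a toy quadratic reflection model.

```python
# Hedged sketch: extremum-seeking on reflected power, no phase measurement.
# The toy model and gains are placeholders, not the paper's controller.
def reflected_power(tuner_pos, optimum=3.7):
    """Toy model: reflected power grows quadratically off resonance."""
    return (tuner_pos - optimum) ** 2

def tune(pos=0.0, probe=0.05, gain=0.4, steps=200, tol=1e-4):
    for _ in range(steps):
        # Dither the tuner to estimate dP/dx from power measurements alone.
        grad = (reflected_power(pos + probe)
                - reflected_power(pos - probe)) / (2 * probe)
        pos -= gain * grad
        if abs(grad) < tol:
            break
    return pos

print(round(tune(), 3))  # converges near the optimum tuner position (3.7 here)
```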
Extending the Pedagogy of Mobility
ERIC Educational Resources Information Center
Hedberg, John G.
2014-01-01
Direct student experience of the real organism, object, place or environment is recognised by teachers as having powerful potential for high-quality learning. Mobile technologies offer a way for students to capture their authentic learning experiences, but rendering this rich experience into explicit and highly situated learning contexts for…
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Proffitt, Dennis R.
1992-01-01
Recent developments in microelectronics have encouraged the use of 3D databases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time displays of such objects are difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indistinguishable from the true spatial transformation. Several possible applications for this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang
2007-12-01
In this paper, we introduce a graphics-to-Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission, to overcome the shortcomings in real-time representation and interaction of 3D graphics applications running on mobile devices. We therefore develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system is composed of a client viewer and a graphics-to-SVG adaptation server. The client viewer allows the user to access the same 3D content from different devices according to consumer interactions.
Up-regulation of the G3PDH 'housekeeping' gene by estrogen.
Galal, Nadia; El-Beialy, Waleed; Deyama, Yoshiaki; Yoshimura, Yoshitaka; Tei, Kanchu; Suzuki, Kuniaki; Totsuka, Yasunori
2010-01-01
Proteomic and genomic studies commonly involve the assessment of mRNA levels using reverse transcription-polymerase chain reaction (PCR) and real-time quantitative PCR. An internal standard RNA is routinely analyzed along with the investigated mRNA to document the specificity of the effect(s) on mRNA and to correct for inter-sample variations. In our studies implementing estrogen treatments on different cell lines, we initially used glyceraldehyde-3-phosphate dehydrogenase (G3PDH) as an internal standard. However, the results of PCR amplification demonstrated that 17β-estradiol enhanced the expression of the G3PDH gene, rendering it impossible to use G3PDH as an unbiased comparative control.
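As an illustration (not from the paper), the standard 2^-ΔΔCt relative quantification shows how a reference gene that itself responds to treatment biases the result; all Ct values below are invented for the demonstration.

```python
# Illustrative 2^-ddCt relative quantification with made-up Ct values,
# showing the bias introduced by a treatment-responsive reference gene.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# True 4-fold induction of the target (Ct drops by 2 cycles), stable reference:
print(fold_change(22.0, 18.0, 24.0, 18.0))   # 4.0
# Same target response, but the reference is also induced by 1 cycle:
print(fold_change(22.0, 17.0, 24.0, 18.0))   # 2.0 -- apparent effect halved
```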
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities for remote interactions. E-Navigation concepts, coupled with the latest achievements in virtual and augmented reality, will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation, using advanced technologies and distributed applications with remote ship scenarios and automation of ship operations.
Near Real Time Tools for ISS Plasma Science and Engineering Applications
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Willis, Emily M.; Parker, Linda Neergaard; Shim, Ja Soon; Kuznetsova, Maria M.; Pulkkinen, Antti A.
2013-01-01
The International Space Station (ISS) program utilizes a plasma environment forecast for estimating electrical charging hazards for crews during extravehicular activity (EVA). The process uses ionospheric electron density (Ne) and temperature (Te) measurements from the ISS Floating Potential Measurement Unit (FPMU) instrument suite, with the assumption that the plasma conditions will remain constant for one to fourteen days, with a low probability of a space weather event that would significantly change the environment before an EVA. FPMU data are typically not available during EVAs; therefore, the most recent FPMU data available for characterizing the state of the ionosphere during an EVA are typically from a day or two before the start of the EVA or after the EVA has been completed. Three near real time space weather tools under development for ISS applications are described here, including: (a) Ne from ground-based ionosonde measurements of foF2, (b) Ne from near real time satellite radio occultation measurements of electron density profiles, and (c) Ne and Te from a physics-based ionosphere model. These applications are used to characterize the ISS space plasma environment during EVA periods when FPMU data are not available, to monitor for large changes in ionosphere density that could render the ionosphere forecast and plasma hazard assessment invalid, and to validate the "persistence of conditions" forecast assumption. In addition, the tools are useful for providing space environment input to science payloads on ISS and to anomaly investigations during periods when the FPMU is not operating.
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.
1972-01-01
Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.
Establishing the 3-D finite element solid model of femurs in partial by volume rendering.
Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin
2013-01-01
Although several methods of femoral 3-D finite element modeling are available, it remains rare to report a three-dimensional (3-D) finite element solid model of a partial femur built by the volume rendering method. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of a partial femur by volume rendering. A 3-D finite element model of normal human femurs, made up of three anatomic structures (cortical bone, cancellous bone and pulp cavity), was constructed following pretreatment of the original CT images. Moreover, the finite element analysis was carried out with different material properties: three types of material assigned to cortical bone, six to cancellous bone, and a single one to the pulp cavity. The established 3-D finite element model of the femur contains three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was primarily concentrated in the medial surfaces of the femur, especially in the calcar femorale. Compared with whole modeling by the volume rendering method, the 3-D finite element solid model created in partial is more realistic and better suited to finite element analysis. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a formalin-fixed cadaver head and neck was obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
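A hedged sketch of the compositing step: remote-field elements captured against the blue screen are keyed out and overlaid on the local field. The actual VIPAR keying algorithm is not described in the abstract; this uses a simple blue-dominance test.

```python
# Minimal blue-screen keying sketch with NumPy; the margin and layout are
# illustrative assumptions, not VIPAR's implementation.
import numpy as np

def composite(local_rgb, remote_rgb, blue_margin=40):
    """Overlay non-blue-screen pixels of `remote_rgb` onto `local_rgb`."""
    r = remote_rgb.astype(np.int16)
    # A pixel belongs to the backdrop if blue clearly dominates red and green.
    backdrop = (r[..., 2] > r[..., 0] + blue_margin) & \
               (r[..., 2] > r[..., 1] + blue_margin)
    out = local_rgb.copy()
    out[~backdrop] = remote_rgb[~backdrop]
    return out

local = np.zeros((480, 640, 3), np.uint8) + 90          # stand-in local view
remote = np.zeros((480, 640, 3), np.uint8)
remote[..., 2] = 255                                    # blue-screen backdrop
remote[200:280, 300:340] = (200, 180, 160)              # "instrument" pixels
print(composite(local, remote)[240, 320])               # -> [200 180 160]
```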
UAS Photogrammetry for Rapid Response Characterization of Subaerial Coastal Change
NASA Astrophysics Data System (ADS)
Do, C.; Anarde, K.; Figlus, J.; Prouse, W.; Bedient, P. B.
2016-12-01
Unmanned aerial systems (UASs) provide an exciting new platform for rapid response measurement of subaerial coastal change. Here we validate the use of a coupled hobbyist UAS and optical photogrammetry framework for high-resolution mapping of portions of a low-lying barrier island along the Texas Gulf Coast. A DJI Phantom 3 Professional was used to capture 2D nadir images of the foreshore and back-beach environments containing both vegetated and non-vegetated features. The images were georeferenced using ground-truth markers surveyed via real-time kinematic (RTK) GPS and were then imported into Agisoft Photoscan, a photo-processing software, to generate 3D point clouds and digital elevation maps (DEMs). The georeferenced elevation models were then compared to RTK measurements to evaluate accuracy and precision. Thus far, DEMs derived from UAS photogrammetry show centimeter resolution for renderings of non-vegetated landforms. High-resolution renderings of vegetated and back-barrier regions have proven more difficult due to interstitial wetlands (surface reflectance) and uneven terrain for GPS backpack surveys. In addition to producing high-quality models, UAS photogrammetry has proven to be more time-efficient than traditional mapping methods, making it advantageous for rapid response deployments. This study is part of a larger effort to relate field measurements of storm hydrodynamics to subaerial evidence of geomorphic change to better understand barrier island response to extreme storms.
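A sketch of the accuracy check described above: sample the DEM at each RTK check point and report vertical error statistics. Bilinear sampling and the data layout are assumptions, not details from the study.

```python
# Hedged sketch: bilinear DEM sampling at RTK check points and error stats.
import numpy as np

def dem_errors(dem, cell, origin, rtk_pts):
    """Vertical DEM-minus-RTK errors; rtk_pts rows are (easting, northing, z)."""
    errs = []
    for e, n, z in rtk_pts:
        col, row = (e - origin[0]) / cell, (n - origin[1]) / cell
        c0, r0 = int(col), int(row)
        fc, fr = col - c0, row - r0
        zi = ((1 - fr) * (1 - fc) * dem[r0, c0] + (1 - fr) * fc * dem[r0, c0 + 1]
              + fr * (1 - fc) * dem[r0 + 1, c0] + fr * fc * dem[r0 + 1, c0 + 1])
        errs.append(zi - z)
    errs = np.asarray(errs)
    return errs.mean(), errs.std(), np.sqrt((errs ** 2).mean())  # bias, sd, RMSE

dem = np.full((100, 100), 1.50)                    # flat 1.5 m test surface
rtk = [(5.2, 7.9, 1.48), (20.1, 30.5, 1.53)]       # synthetic RTK check shots
print(dem_errors(dem, cell=1.0, origin=(0.0, 0.0), rtk_pts=rtk))
```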
Hybrid rendering of the chest and virtual bronchoscopy [corrected].
Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D
2000-10-30
Thin-section spiral computed tomography was used to acquire volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.
A 3D character animation engine for multimodal interaction on mobile devices
NASA Astrophysics Data System (ADS)
Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo
2005-03-01
Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).
Digital colour management system for colour parameters reconstruction
NASA Astrophysics Data System (ADS)
Grudzinski, Karol; Lasmanowicz, Piotr; Assis, Lucas M. N.; Pawlicka, Agnieszka; Januszko, Adam
2013-10-01
The Digital Colour Management System (DCMS) and its application to a new adaptive camouflage system are presented in this paper. The DCMS is a digital colour rendering method that allows the transformation of a real image into a set of colour pixels displayed on a computer monitor. Consequently, it can analyse the colours of the pixels which comprise images of environments such as desert, semi-desert, jungle, farmland or rocky mountains, in order to prepare an adaptive camouflage pattern best suited to the terrain. This system is described in the present work, as well as the use of the subtractive colour mixing method to construct a real-time colour-changing electrochromic window/pixel (ECD) for camouflage purposes. An ECD with the glass/ITO/Prussian Blue(PB)/electrolyte/CeO2-TiO2/ITO/glass configuration was assembled and characterized. The ECD switched between green and yellow after application of ±1.5 V, and the colours were controlled by the Digital Colour Management System and described by CIE LAB parameters.
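Since the abstract reports colours in CIE LAB, the sketch below shows the standard sRGB-to-L*a*b* conversion (D65 white point) that such a pipeline would typically use; the DCMS's actual transform is not given.

```python
# Standard sRGB -> XYZ -> CIE L*a*b* conversion; not the DCMS's own code.
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])      # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    return np.array([L, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

print(srgb_to_lab([0.0, 0.5, 0.0]))   # a mid green: negative a*, positive b*
```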
Use of the Uro Dyna-CT in endourology – the new frontier
Vicentini, Fabio C.; Botelho, Luiz A. A.; Braz, José L. M.; Almeida, Ernane de S.; Hisano, Marcelo
2017-01-01
We describe the use of the Uro Dyna-CT, an imaging system used in the operating room that produces real-time three-dimensional (3D) imaging and cross-sectional image reconstructions similar to an intraoperative computerized tomography, during a percutaneous nephrolithotomy and a contralateral flexible ureteroscopy in the complete supine position. A 65-year-old female patient had an incomplete calyceal staghorn stone in the right kidney and a 10 mm stone in the left one. The procedure was uneventful, and the intraoperative use of the Uro Dyna-CT identified 2 residual stones that were not found by digital fluoroscopy and flexible nephroscopy at the end of surgery, helping us to render the patient stone-free in one procedure, which was confirmed by a postoperative CT scan. Prospective studies will define the real role of the Uro Dyna-CT in endourological procedures, but it seems to be a very promising tool for improving stone-free rates and decreasing auxiliary procedures, especially in complex cases. PMID:28338302
Software for Acoustic Rendering
NASA Technical Reports Server (NTRS)
Miller, Joel D.
2003-01-01
SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.
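A minimal sketch of SLAB's fixed signal-flow idea: each source reaches the listener over parallel paths (direct sound plus reflections), each with its own delay, gain, and FIR filter. The path parameters and filter taps below are arbitrary placeholders, not SLAB's values.

```python
# Parallel delay/gain/FIR signal paths summed at the listener; all numbers
# are illustrative placeholders.
import numpy as np

def render_paths(src, paths, fs=44100):
    """Sum delayed, filtered copies of `src`; paths = (delay_s, gain, fir)."""
    n = len(src) + max(int(d * fs) for d, _, _ in paths) + \
        max(len(f) for _, _, f in paths)
    out = np.zeros(n)
    for delay_s, gain, fir in paths:
        d = int(delay_s * fs)
        y = gain * np.convolve(src, fir)
        out[d:d + len(y)] += y
    return out

src = np.random.randn(4410)                       # 0.1 s test signal
paths = [(0.003, 1.0, np.array([1.0])),           # direct path
         (0.012, 0.5, np.array([0.6, 0.3, 0.1])), # first reflection, lowpassed
         (0.021, 0.3, np.array([0.5, 0.5]))]      # second reflection
print(render_paths(src, paths).shape)
```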
Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I
2002-06-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. Two techniques have been used to construct virtual environments for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made with the model-based technique are unrealistic and unnatural. The movie-based technique has the disadvantage that each virtual audience member cannot be controlled individually, because all virtual audience members are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically from photos taken with a digital camera. The use of chroma keying allows each virtual audience member to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
NASA Astrophysics Data System (ADS)
Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru
2015-03-01
Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated into virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at no less than 30 fps. Our current laparoscopic simulator detects collisions between a point on the tool tip and the organ surfaces; haptic devices are attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point-based collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation reaction. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organs) objects, and to perform the subsequent collision response, with haptic feedback, in real time.
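A hedged sketch of the multi-point idea: sample several points along the rigid rod and test each against the organ surface. A brute-force nearest-point query stands in for whatever spatial acceleration structure a production simulator would use.

```python
# Multi-point rod-vs-surface proximity test; geometry and radii are invented.
import numpy as np

def rod_contacts(tip, tail, surface_pts, n_samples=8, radius=0.002):
    """Return (sample_point, penetration_depth) pairs for colliding samples."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = tip * (1 - t) + tail * t            # points along the rod axis
    contacts = []
    for p in samples:
        d = np.min(np.linalg.norm(surface_pts - p, axis=1))
        if d < radius:
            contacts.append((p, radius - d))      # depth drives force response
    return contacts

surface = np.random.rand(5000, 3) * 0.1           # stand-in organ point cloud
print(len(rod_contacts(np.array([0.05, 0.05, 0.05]),
                       np.array([0.05, 0.05, 0.15]), surface)))
```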
Peterka, Tom; Kooima, Robert L; Sandin, Daniel J; Johnson, Andrew; Leigh, Jason; DeFanti, Thomas A
2008-01-01
A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems, such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
Ray-based approach to integrated 3D visual communication
NASA Astrophysics Data System (ADS)
Naemura, Takeshi; Harashima, Hiroshi
2001-02-01
For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of the tremendous amount of ray data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
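One common concrete form of the ray-based idea is the two-plane parameterization, in which a ray is indexed by where it crosses two parallel planes and new views are synthesized by ray lookup rather than geometric rendering; the discretization below is illustrative only and not the authors' representation.

```python
# Two-plane (u,v,s,t) light field with nearest-neighbor ray lookup; the grid
# resolution and ranges are placeholders.
import numpy as np

class TwoPlaneLightField:
    def __init__(self, radiance, uv_range=(-1, 1), st_range=(-1, 1)):
        self.lf = radiance                     # shape (NU, NV, NS, NT)
        self.uv_range, self.st_range = uv_range, st_range

    def _index(self, x, rng, n):
        i = (x - rng[0]) / (rng[1] - rng[0]) * (n - 1)
        return int(np.clip(round(i), 0, n - 1))

    def sample(self, u, v, s, t):
        """Nearest-neighbor radiance for the ray through (u,v) and (s,t)."""
        nu, nv, ns, nt = self.lf.shape
        return self.lf[self._index(u, self.uv_range, nu),
                       self._index(v, self.uv_range, nv),
                       self._index(s, self.st_range, ns),
                       self._index(t, self.st_range, nt)]

lf = TwoPlaneLightField(np.random.rand(8, 8, 32, 32))
print(lf.sample(0.1, -0.2, 0.05, 0.3))
```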
An evaluation of different setups for simulating lighting characteristics
NASA Astrophysics Data System (ADS)
Salters, Bart; Murdoch, Michael; Sekulovksi, Dragan; Chen, Shih-Han; Seuntiens, Pieter
2012-03-01
The advance of technology continuously enables new luminaire designs and concepts. Evaluating such designs has traditionally been done using actual prototypes, in a real environment. The iterations needed to build, verify, and improve luminaire designs incur substantial costs and slow down the design process. A more attractive way is to evaluate designs using simulations, as they can be made cheaper and quicker for a wider variety of prototypes. However, the value of such simulations is determined by how closely they predict the outcome of actual perception experiments. In this paper, we discuss an actual perception experiment including several lighting settings in a normal office environment. The same office environment also has been modeled using different software tools, and photo-realistic renderings have been created of these models. These renderings were subsequently processed using various tonemapping operators in preparation for display. The total imaging chain can be considered a simulation setup, and we have executed several perception experiments on different setups. Our real interest is in finding which imaging chain gives us the best result, or in other words, which of them yields the closest match between virtual and real experiment. To answer this question, first of all an answer has to be found to the question, "which simulation setup matches the real world best?" As there is no unique, widely accepted measure to describe the performance of a certain setup, we consider a number of options and discuss the reasoning behind them along with their advantages and disadvantages.
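As one concrete example of the tonemapping stage mentioned above, the global Reinhard operator maps HDR scene luminances into display range; the choice of operator and the key value `a` are illustrative, since the study compared several operators without naming them here.

```python
# Global Reinhard tonemapping of a luminance image; parameters illustrative.
import numpy as np

def reinhard(lum, a=0.18):
    """Tonemap a luminance image `lum` (e.g. cd/m^2) into [0, 1)."""
    lw_bar = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average luminance
    l_scaled = a / lw_bar * lum                    # map the scene's key to `a`
    return l_scaled / (1.0 + l_scaled)             # compress highlights

hdr = np.random.lognormal(mean=3.0, sigma=1.5, size=(4, 4))  # toy HDR patch
print(reinhard(hdr))
```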
Autostereoscopic image creation by hyperview matrix controlled single pixel rendering
NASA Astrophysics Data System (ADS)
Grasnick, Armin
2017-06-01
Along with the increasing awareness of stereoscopic cinema, the perception of limitations while watching movies with 3D glasses has emerged as well. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments for avoiding 3D glasses. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with the naked eye, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have associated with insufficient image quality. Obviously, quality problems with 3D glasses can be solved by technical improvement. But this simple answer can mislead, and already has misled, some decision makers into settling for the existing 3D-glasses solution. It needs to be underlined that there are inherent difficulties with the glasses which can never be solved by modest advancement, since the 3D glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if it were possible to create an affordable device and generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the ideas of the light field, integral photography or super-multiview can be unified within the concept of hyperview. It is essential for functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a very high number of views requires much more computing time than the formation of a simple stereoscopic image pair. The hyperview concept allows the screen image of any 3D technology to be described with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived even if all source images are different. It will be proven that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display, designed to represent only a few perspectives, can be used to show a hyperview image by using a suitable hyperview matrix. It will be shown that a millions-of-views hyperview image can be presented on a conventional autostereoscopic display. For such a hyperview image it is required that all pixels of the display are allocated from different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
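A hedged sketch of the hyperview-matrix idea: a per-pixel integer matrix records which perspective image feeds each screen pixel, so the final image is assembled by a pure per-pixel gather. The slanted assignment pattern below is a placeholder, not the author's actual display mapping.

```python
# Per-pixel view assignment via an integer "hyperview matrix"; the cyclic
# slanted pattern is illustrative only.
import numpy as np

def compose_hyperview(views, view_matrix):
    """views: (N, H, W) stack of perspective images;
    view_matrix: (H, W) integer map, pixel -> source view index."""
    h, w = view_matrix.shape
    rows, cols = np.mgrid[0:h, 0:w]
    return views[view_matrix, rows, cols]   # one gather per screen pixel

n_views, h, w = 16, 270, 480
views = np.random.rand(n_views, h, w)                     # rendered perspectives
slant = np.arange(w)[None, :] + 3 * np.arange(h)[:, None] # slanted assignment
view_matrix = (slant % n_views).astype(int)
print(compose_hyperview(views, view_matrix).shape)        # (270, 480)
```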
Multi-scale Visualization of Molecular Architecture Using Real-Time Ambient Occlusion in Sculptor.
Wahle, Manuel; Wriggers, Willy
2015-10-01
The modeling of large biomolecular assemblies relies on an efficient rendering of their hierarchical architecture across a wide range of spatial levels of detail. We describe a paradigm shift currently under way in computer graphics towards the use of more realistic global illumination models, and we apply the so-called ambient occlusion approach to our open-source multi-scale modeling program, Sculptor. While there are many other higher-quality global illumination approaches, going all the way up to full GPU-accelerated ray tracing, they do not provide size-specificity of the features they shade. Ambient occlusion is an aspect of global lighting that offers great visual benefits and powerful user customization. By estimating how other molecular shape features affect the reception of light at some surface point, it effectively simulates indirect shadowing. This effect occurs between molecular surfaces that are close to each other, or in pockets such as protein or ligand binding sites. By adding ambient occlusion, large macromolecular systems look much more natural, and the perception of characteristic surface features is strongly enhanced. In this work, we present a real-time implementation of screen space ambient occlusion that delivers realistic cues about tunable spatial scale characteristics of macromolecular architecture. Heretofore, the visualization of large biomolecular systems, comprising e.g. hundreds of thousands of atoms or Mega-Dalton size electron microscopy maps, did not take into account the length scales of interest or the spatial resolution of the data. Our approach has been uniquely customized with shading that is tuned for pockets and cavities of a user-defined size, making it useful for visualizing molecular features at multiple scales of interest. This is a feature that none of the conventional ambient occlusion approaches provide. Actual Sculptor screen shots illustrate how our implementation supports the size-dependent rendering of molecular surface features.
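A simplified, depth-only illustration of screen-space ambient occlusion: each pixel is darkened by the fraction of nearby samples lying in front of it, with the sampling radius playing the role of the user-defined feature size. Sculptor's actual implementation works on 3-D sample kernels with normals; this 2-D variant only shows the principle.

```python
# Depth-only SSAO toy: a ring of fixed offsets around each pixel; `radius`
# selects the scale of pockets that get shaded. Not Sculptor's algorithm.
import numpy as np

def ssao(depth, radius=4, bias=0.01):
    offsets = [(dy, dx) for dy in (-radius, 0, radius)
                        for dx in (-radius, 0, radius) if (dy, dx) != (0, 0)]
    occlusion = np.zeros_like(depth)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        occlusion += (shifted < depth - bias)   # neighbor lies in front of us
    return 1.0 - occlusion / len(offsets)       # ambient visibility in [0, 1]

depth = np.ones((64, 64))
depth[24:40, 24:40] = 1.2                       # a square "pocket" in depth
print(ssao(depth)[25, 32])   # just inside the pocket rim: darkened (< 1.0)
```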
Beigi, Parmida; Malenfant, Paul; Rasoulian, Abtin; Rohling, Robert; Dube, Alison; Gunka, Vit
2017-01-01
Current 2-D ultrasound technology is unable to perform a midline neuraxial needle insertion under real-time ultrasound guidance using a standard needle and without an assistant. The aim of the work described here was to determine the feasibility of a new technology providing such capability, starting with a study evaluating the selected puncture site. A novel 3-D ultrasound imaging technique was designed using thick-slice rendering in conjunction with a custom needle guide (3DUS + Epiguide). A clinical feasibility study evaluated the ability of 3DUS + Epiguide to identify the epidural needle puncture site for a midline insertion in the lumbar spine. We hypothesized that (i) the puncture site identified by 3DUS + Epiguide was within a 5-mm radius from the site chosen by standard palpation, and (ii) the difference between the two puncture sites was not correlated to the patient characteristics age, weight, height, body mass index and gestational age. The mean (±standard deviation) distances between puncture sites determined by 3DUS + Epiguide and palpation were 3.1 (±1.7) mm and 2.8 (±1.3) mm, for the L2-3 and L3-4 interspaces of 20 patients, respectively. Distances were comparable to intra-observer variability, indicating the potential for a thick-slice rendering of 3-D ultrasound along the Epiguide trajectory to select the puncture site of a midline neuraxial needle insertion. The long-term potential benefits of this system include increased efficiency and use of anesthesia, and a reduction in the frequency and severity of the complications from incorrect needle insertions. Epidural success in the most difficult cases (e.g., the obese) will be the focus of future work. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
48-spot single-molecule FRET setup with periodic acceptor excitation
NASA Astrophysics Data System (ADS)
Ingargiola, Antonino; Segal, Maya; Gulinatti, Angelo; Rech, Ivan; Labanca, Ivan; Maccagnani, Piera; Ghioni, Massimo; Weiss, Shimon; Michalet, Xavier
2018-03-01
Single-molecule Förster resonance energy transfer (smFRET) allows measuring distances between donor and acceptor fluorophores in the 3-10 nm range. Solution-based smFRET allows measurement of binding-unbinding events or conformational changes of dye-labeled biomolecules without ensemble averaging and free from surface perturbations. When employing dual (or multi) laser excitation, smFRET allows resolving the number of fluorescent labels on each molecule, greatly enhancing the ability to study heterogeneous samples. A major drawback of solution-based smFRET is its low throughput, which renders repetitive measurements expensive and hinders the ability to study kinetic phenomena in real time. Here we demonstrate a high-throughput smFRET system that multiplexes acquisition by using 48 excitation spots and two 48-pixel single-photon avalanche diode array detectors. The system employs two excitation lasers, allowing separation of species with one or two active fluorophores. The performance of the system is demonstrated on a set of doubly labeled double-stranded DNA oligonucleotides with different distances between donor and acceptor dyes along the DNA duplex. We show that the acquisition time for accurate subpopulation identification is reduced from several minutes to seconds, opening the way to high-throughput screening applications and real-time kinetics studies of enzymatic reactions such as DNA transcription by bacterial RNA polymerase.
Real-time mandibular angle reduction surgical simulation with haptic rendering.
Wang, Qiong; Chen, Hui; Wu, Wen; Jin, Hai-Yang; Heng, Pheng-Ann
2012-11-01
Mandibular angle reduction is a popular and efficient procedure widely used to alter the facial contour. The primary surgical instruments employed in the surgery, the reciprocating saw and the round burr, have a common feature: operating at high speed. Generally, inexperienced surgeons need long practice to learn how to minimize the risks caused by uncontrolled contacts and cutting motions when manipulating instruments with high-speed reciprocation or rotation. A virtual reality-based surgical simulator for mandibular angle reduction was designed and implemented on a CUDA-based platform in this paper. High-fidelity visual and haptic feedback is provided to enhance the perception of a realistic virtual surgical environment. Impulse-based haptic models were employed to simulate the contact forces and torques on the instruments. This provides convincing haptic sensation for surgeons controlling the instruments under different reciprocation or rotation velocities. Real-time methods for bone removal and reconstruction during surgical procedures have been proposed to support realistic visual feedback. The simulated contact forces were verified by comparison against actual force data measured through the constructed mechanical platform. An empirical study based on patient-specific data was conducted to evaluate the ability of the proposed system to train surgeons with various levels of experience. The results confirm the validity of our simulator.
Interventional radiology virtual simulator for liver biopsy.
Villard, P F; Vidal, F P; ap Cenydd, L; Holbrey, R; Pisharody, S; Johnson, S; Bulpitt, A; John, N W; Bello, F; Gould, D
2014-03-01
Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice in patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience and the variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. A real-time framework has been built that includes: segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. The technical implementation of the framework is a robust and real-time simulation environment combining a physical platform and an immersive computerized virtual environment. The face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills in patients.
Acquisition of Real-Time Operation Analytics for an Automated Serial Sectioning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madison, Jonathan D.; Underwood, O. D.; Poulter, Gregory A.
Mechanical serial sectioning is a highly repetitive technique employed in metallography for the rendering of 3D reconstructions of microstructure. While alternate techniques such as ultrasonic detection, micro-computed tomography, and focused ion beam milling have progressed much in recent years, few alternatives provide equivalent opportunities for comparatively high resolutions over significantly sized cross-sectional areas and volumes. To that end, the introduction of automated serial sectioning systems has greatly heightened repeatability and increased data collection rates while diminishing opportunity for mishandling and other user-introduced errors. Unfortunately, even among current, state-of-the-art automated serial sectioning systems, challenges in data collection have not been fully eradicated. Therefore, this paper highlights two specific advances to assist in this area: a non-contact laser triangulation method for assessment of material removal rates and a newly developed graphical user interface providing real-time monitoring of experimental progress. Furthermore, both are shown to be helpful in the rapid identification of anomalies and interruptions, while also providing comparable and less error-prone measures of removal rate over the course of these long-term, challenging, and innately destructive characterization experiments.
CT imaging with a mobile C-arm prototype
NASA Astrophysics Data System (ADS)
Cheryauka, Arvi; Tubbs, David; Langille, Vinton; Kalya, Prabhanjana; Smith, Brady; Cherone, Rocco
2008-03-01
Mobile X-ray imagery is an omnipresent tool in conventional musculoskeletal and soft tissue applications. The next generation of mobile C-arm systems can provide clinicians of minimally-invasive surgery and pain management procedures with both real-time high-resolution fluoroscopy and intra-operative CT imaging modalities. In this study, we research two C-arm CT experimental system configurations and evaluate their imaging capabilities. In a non-destructive evaluation configuration, the X-ray tube-detector assembly is stationary while the imaged object is placed on a rotating table. In a medical imaging configuration, the C-arm gantry moves around the patient and the table. In our research setting, we connect the participating devices through a Mobile X-Ray Imaging Environment known as MOXIE. MOXIE is a set of software applications for internal research at GE Healthcare - Surgery, used to examine the imaging performance of experimental systems. Anthropomorphic phantom volume renderings and orthogonal slices of reconstructed images are obtained and displayed. The experimental C-arm CT results show CT-like image quality that may be suitable for interventional procedures and real-time data management and, therefore, has great potential for effective use on the clinical floor.
Petroleomics by Direct Analysis in Real Time-Mass Spectrometry.
Romão, Wanderson; Tose, Lilian V; Vaz, Boniek G; Sama, Sara G; Lobinski, Ryszard; Giusti, Pierre; Carrier, Hervé; Bouyssiere, Brice
2016-01-01
The analysis of crude oil and its fractions by ambient ionization techniques remains underexplored in mass spectrometry (MS). Direct analysis in real time (DART) in positive-ion mode was coupled to a linear quadrupole ion trap Orbitrap mass spectrometer (LTQ Orbitrap) to analyze crude oil, paraffin samples, and porphyrin standard compounds. The ionization parameters of DART-MS were optimized for crude oil analysis. The optimum operating conditions for DART-MS were paper as the substrate, T = 400°C, helium as the carrier gas, and a sample concentration ≥6 mg mL⁻¹. In the crude oil analysis, the DART(+)-Orbitrap mass spectra detected the typical N-, NO-, and O-containing compounds. In the paraffin samples, oxidized hydrocarbon species (Ox classes, where x = 1-4) with double-bond equivalents of 1-4 were detected, and their structures and connectivity were confirmed by collision-induced dissociation (CID) experiments. DART(+)-MS identified the porphyrin standard compounds as [M + H]⁺ ions of m/z 615.2502 and 680.1763, where M = C44H30N4 and C44H28N4OV, respectively, based on formula assignment and the phenyl losses observed in CID experiments.
Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments.
Oh, Soo Youn; Bailenson, Jeremy; Krämer, Nicole; Li, Benjamin
2016-01-01
Previous studies demonstrated the positive effects of smiling on interpersonal outcomes. The present research examined if enhancing one's smile in a virtual environment could lead to a more positive communication experience. In the current study, participants' facial expressions were tracked and mapped on a digital avatar during a real-time dyadic conversation. The avatar's smile was rendered such that it was either a slightly enhanced version or a veridical version of the participant's actual smile. Linguistic analyses using the Linguistic Inquiry Word Count (LIWC) revealed that participants who communicated with each other via avatars that exhibited enhanced smiles used more positive words to describe their interaction experience compared to those who communicated via avatars that displayed smiling behavior reflecting the participants' actual smiles. In addition, self-report measures showed that participants in the 'enhanced smile' condition felt more positive affect after the conversation and experienced stronger social presence compared to the 'normal smile' condition. These results are particularly striking when considering the fact that most participants (>90%) were unable to detect the smiling manipulation. This is the first study to demonstrate the positive effects of transforming unacquainted individuals' actual smiling behavior during a real-time avatar-networked conversation.
Development and validation of real-time simulation of X-ray imaging with respiratory motion.
Vidal, Franck P; Villard, Pierre-Frédéric
2016-04-01
We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on the GPU to simultaneously compute respiratory motion and X-ray imaging in real time. Our aim is to provide validated building blocks with high fidelity to closely match both human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviour during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environments to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
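The X-ray image formation step named above follows the Beer-Lambert law, I = I0 * exp(-sum(mu_i * d_i)), with mu_i and d_i the attenuation coefficient and path length through each material crossed by a ray. The sketch below uses rough illustrative attenuation values, not the library's data.

```python
# Beer-Lambert attenuation along one ray; material coefficients are rough
# illustrative numbers for diagnostic X-ray energies, not validated data.
import math

def transmitted_intensity(i0, segments):
    """segments: (mu_per_cm, path_length_cm) for each material along the ray."""
    optical_depth = sum(mu * d for mu, d in segments)
    return i0 * math.exp(-optical_depth)

# A ray crossing 3 cm of soft tissue, 1 cm of bone, then 4 cm of soft tissue:
ray = [(0.20, 3.0), (0.48, 1.0), (0.20, 4.0)]
print(transmitted_intensity(1.0, ray))   # fraction of the beam that survives
```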
[3-D echocardiography: new developments and future prospects].
Müller, Silvana; Bartel, Thomas; Pachinger, Otmar; Erbel, Raimund
2002-05-01
Due to limitations of transthoracic and occasionally transesophageal 2-D echocardiography with respect to volumetric analysis and to morphologic and functional assessment in patients with congenital malformations and valvular heart disease, additional diagnostic tools have been established. In parallel with the rapid evolution of computer technology, 3-D echocardiography has grown into a well-developed technique, such as volume-rendered 3-D reconstruction, capable of displaying dynamic morphology depicting the depth of structures, their attachment, and their spatial relation to the surrounding tissue. Nevertheless, the complexity of data acquisition and data processing required for adequate dynamic 3-D echocardiographic imaging and volumetric analysis does not allow this approach to be used routinely. Commonly, dynamic 3-D echocardiography means off-line computer-assisted image reconstruction from a series of cross-sectional echocardiographic images using currently available transesophageal and transthoracic transducers. Alternatively, real-time 3-D echocardiography based on novel matrix phased-array transducer technology has been introduced. Although this technique can easily be combined with any routine examination, its clinical use is limited by a lower image quality in comparison with dynamic 3-D echocardiography. Up to now, there has been no transesophageal approach using real-time 3-D echocardiography. Recently, the dynamic 3-D echocardiographic technique has matured noticeably. Besides the well-known sequential scanning, which is characterized by a probe and patient fixed in space and a predetermined motion of the transducer, freehand scanning using an electromagnetic location system has found its way into the clinical environment. The main advantage of this technique is that the transducer can be moved freely by the examiner and the data set thus acquired within a routine examination. 3-D rendering and display have also been developed further. In this respect, especially the "real-time rendering mode", which allows the reconstructed 3-D image to be animated, moved in space, and viewed from different perspectives, has gained increasing acceptance. In valvular heart disease, reconstructive surgical treatment is the goal. 3-D echocardiographic imaging is the only technique providing "surgical views" prior to opening the heart. It is capable of distinguishing particular destructive substructures of the valves and the valvular apparatus, which is of clinical importance for achieving optimal surgical results, especially in mitral valve reconstruction. With respect to volumetric and mass analysis, 3-D echocardiography is more accurate and reproducible than conventional 2-D analysis. It provides data independent of geometric assumptions, which may considerably influence the results in the presence of wall motion abnormalities, especially in aneurysmatic ventricles. Volumetric analysis of the aneurysmal portion may also be helpful prior to surgical resection. 3-D echocardiography can also be recommended as a valuable additional approach to atrial septal defect (ASD), corrected transposition of the great arteries, cor triatriatum, and, within limits, to ventricular septal defect (VSD) as well. Especially with respect to ASD and VSD, the potential significance of 3-D echocardiography prior to device closure is emphasized. At present, the additional information it provides in decision-making and the increasing number of clinical cases that can be addressed and answered already justify the clinical use of this technique.
The Potential for Scientific Collaboration in Virtual Ecosystems
ERIC Educational Resources Information Center
Magerko, Brian
2010-01-01
This article explores the potential benefits of creating "virtual ecosystems" from real-world data. These ecosystems are intended to be realistic virtual representations of environments that may be costly or difficult to access in person. They can be constructed as 3D worlds rendered from stereo video data, augmented with scientific data, and then…
Assessing Students' Proficiency in Math and Science
ERIC Educational Resources Information Center
Judd, Thomas P.; Keith, Bruce
2007-01-01
The U.S. Military Academy (USMA) at West Point is responsible for developing in its graduates literacy in the sciences that renders them capable of solving complex real-world problems. Throughout their careers as officers in the military, graduates will be called upon to view the physical world in a disciplined and objective manner, with an…
Model-based registration of multi-rigid-body for augmented reality
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro
2009-02-01
Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of visual markers. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D model of each rigid part of the object is first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then calculated simultaneously from the distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.
Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp.
Bayr, S; Ojanperä, M; Kaparaju, P; Rintala, J
2014-10-01
In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55°C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediates, viz. volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH4-N and/or free NH3), can cause process imbalance during digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³·d and a hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved process stability and increased methane yields (500-680 dm³/kg VS fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields of these difficult-to-treat industrial waste materials. Copyright © 2014 Elsevier Ltd. All rights reserved.
Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates
NASA Astrophysics Data System (ADS)
Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.
2002-03-01
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are promising for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, a frequent problem with IBMR techniques is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem: distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation and caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite onto a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results indicating that we have achieved a stable frame rate. Beyond LOD-Sprite, our distribution method holds promise for other IBMR techniques.
A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Sun, Honghai
2017-03-01
Rendering large numbers of triangles with GPU hardware tessellation has made great progress. However, due to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed that is better suited to reconstructing surfaces with repeated cusps or creases. An octree primitive is established in irregular regions that share the same sharp vertices or creases, allowing neighboring geometry information to be found quickly. Because the octree primitive and the feature region have the same topology, the octree feature points can match arbitrary vertices in the feature region precisely. Meanwhile, the patches are re-encoded in the octree primitive using a breadth-first strategy, resulting in a meta-table that allows real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be calculated per octree primitive; other regions with the same repeated feature generate their own meta-tables directly, which greatly reduces the reconstruction time for this step. For meshes with a large number of repeated topological features, our algorithm improves subdivision time by 17.575% and increases average frame drawing time by only 0.2373 ms compared with traditional FAS (feature-adaptive subdivision), while reconstructing the model in a watertight manner.
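The breadth-first re-encoding can be illustrated generically. The sketch below (our illustration, not the authors' GPU code) flattens a feature-region octree into a contiguous index table so a tessellation stage could walk the hierarchy as a flat array instead of chasing pointers; the node layout and field names are invented for the example.

```python
from collections import deque

class OctNode:
    def __init__(self, patch_id, children=None):
        self.patch_id = patch_id        # patch stored at this node (invented field)
        self.children = children or []  # up to 8 children in an octree

def build_meta_table(root):
    """Breadth-first flattening: each row holds a patch id and the row index
    of its first child (-1 for a leaf), GPU-friendly with no pointers."""
    table, order = [], {}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        order[id(node)] = len(table)
        table.append([node.patch_id, -1])
        queue.extend(node.children)
    # second pass: fill in first-child indices now that all rows are fixed
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node.children:
            table[order[id(node)]][1] = order[id(node.children[0])]
        queue.extend(node.children)
    return table

root = OctNode(0, [OctNode(1), OctNode(2, [OctNode(3)])])
print(build_meta_table(root))   # [[0, 1], [1, -1], [2, 3], [3, -1]]
```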
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Single-shot stand-off chemical identification of powders using random Raman lasing
Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.
2014-01-01
We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231
NASA Technical Reports Server (NTRS)
Easley, Wesley C.; Carter, Donald; Mcluer, David G.
1994-01-01
An amateur packet radio system operating in the very high frequency (VHF) range has been implemented in the Transport Systems Research Vehicle at the NASA Langley Research Center to provide an economical, bidirectional, real-time, ground-to-air data link. The packet system has been used to support flight research involving air traffic control (ATC), differential global positioning systems (DGPS), and windshear terminal Doppler weather radar (TDWR). A maximum data rate of 2400 baud was used. Operational reliability of the packet system has been very good, and its versatility permits numerous specific configurations. These features, plus its low cost, have rendered it very satisfactory for supporting data link flight experiments that do not require high data transfer rates.
Motion parallax in immersive cylindrical display systems
NASA Astrophysics Data System (ADS)
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
2012-03-01
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer viewpoint has therefore become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) and in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal cues (e.g., vestibular, kinesthetic) signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to the illusory perception of unstable environments unless a non-unity scale factor is applied to recorded head movements. Moreover, cylindrical screens are usually used with static observers because of the image distortions that arise when rendering for viewpoints away from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical-screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortion, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates of scale-up performance on upcoming multi-core technology.
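As a language-agnostic illustration of the kind of vectorized complex signal synthesis described above (our sketch, not AMRDEC's code), the snippet below coherently sums point-scatterer returns with NumPy, whose whole-array arithmetic mirrors the register-to-register SIMD style the abstract emphasizes. The scatterer parameters and carrier frequency are invented.

```python
import numpy as np

def scene_return(amplitudes, ranges, carrier_hz=35e9, c=3e8):
    """Coherent sum of point-scatterer returns: sum_i A_i * exp(-j*2*pi*f*tau_i).
    Evaluated over all scatterers in one vectorized pass, the access pattern
    that lets SIMD units (or NumPy's C loops) stay near peak FLOPS."""
    delays = 2.0 * ranges / c                      # two-way propagation delay
    phases = -2.0j * np.pi * carrier_hz * delays   # per-scatterer phase rotation
    return np.sum(amplitudes * np.exp(phases))     # complex received voltage

# Example: 100k random scatterers (invented scene) evaluated at once
rng = np.random.default_rng(0)
amps = rng.rayleigh(1.0, 100_000)
rngs = rng.uniform(900.0, 1_100.0, 100_000)        # ranges in metres
print(scene_return(amps, rngs))
```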
Real-time 3D image reconstruction guidance in liver resection surgery.
Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-04-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that augments the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance for improving safety, but also the current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited by the complexity of organ deformation during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR.
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both temporal and spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the detection of coherence that is unique to the temporal domain. Using the proposed data structure, our algorithm meets the following goals. First, both spatial and temporal coherence are identified and exploited to accelerate the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that must be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
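A minimal sketch of the traversal idea (our illustration, not the authors' implementation): each node stores spatial and temporal error estimates, and a subtree is rendered from its cached summary whenever both errors fall within the user-supplied tolerances. The node fields and tolerance names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TSPNode:
    spatial_error: float              # variation of field values over the region
    temporal_error: float             # variation over the node's time span
    summary: object = None            # cached coarse representation
    children: list = field(default_factory=list)

def collect_render_set(node, eps_space, eps_time, out):
    """Recurse until both error estimates meet the run-time tolerances,
    then reuse the cached summary instead of descending further."""
    if (node.spatial_error <= eps_space and
            node.temporal_error <= eps_time) or not node.children:
        out.append(node.summary)      # coherent region: render coarse data
        return
    for child in node.children:
        collect_render_set(child, eps_space, eps_time, out)

# Usage: tolerances supplied at run time trade image quality for speed
leaves = [TSPNode(0.01, 0.0, "leaf-a"), TSPNode(0.5, 0.9, "leaf-b")]
root = TSPNode(0.6, 0.9, "coarse-root", children=leaves)
todo = []
collect_render_set(root, 0.1, 0.1, todo)
print(todo)   # ['leaf-a', 'leaf-b']: root too incoherent, leaves reused
```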
NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.
Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul
2014-09-30
As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.
Glnemo2: Interactive Visualization 3D Program
NASA Astrophysics Data System (ADS)
Lambert, Jean-Charles
2011-10-01
Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia QT 4.X API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, lists of files, real-time gyrfalcON simulation), which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the QT 4 API.
Real-time and interactive virtual Doppler ultrasound
NASA Astrophysics Data System (ADS)
Hirji, Samira; Downey, Donal B.; Holdsworth, David W.; Steinman, David A.
2005-04-01
This paper describes our "virtual" Doppler ultrasound (DUS) system, in which colour DUS (CDUS) images and DUS spectrograms are generated on-the-fly and displayed in real time in response to position and orientation cues provided by a magnetically tracked handheld probe. As the presence of complex flow often confounds the interpretation of Doppler ultrasound data, this system will serve as a fundamental tool for training sonographers and gaining insight into the relationship between ambiguous DUS images and complex blood flow dynamics. Recently, we demonstrated that DUS spectra could be realistically simulated in real time by coupling a semi-empirical model of the DUS physics to a 3-D computational fluid dynamics (CFD) model of a clinically relevant flow field. Our system is an evolution of this approach in which a motion-tracking device is used to continuously update the origin and orientation of a slice passing through a CFD model of a stenosed carotid bifurcation. After calibrating our CFD model onto a physical representation of a human neck, virtual CDUS images from an instantaneous slice are displayed at a rate of approximately 15 Hz by simulating, on-the-fly, an array of DUS spectra and colour coding the resulting spectral mean velocity using a traditional Doppler colour scale. Mimicking a clinical examination, the operator can freeze the CDUS image on-screen, and a spectrogram corresponding to the selected sample volume location is rendered at a higher frame rate of at least 30 Hz. All this is achieved using an inexpensive desktop workstation and a commodity graphics card.
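As an illustration of the final colour-coding step (a generic sketch, not the authors' pipeline), the snippet below computes the spectral mean velocity from a simulated Doppler spectrum and maps it onto a conventional red/blue Doppler scale. The spectrum shape, velocity axis, and saturation limit are invented.

```python
import numpy as np

def mean_velocity(power, velocities):
    """Spectral mean: power-weighted average over the velocity axis."""
    return np.sum(power * velocities) / np.sum(power)

def doppler_rgb(v, v_max=0.6):
    """Traditional Doppler colour scale: flow toward the probe -> red,
    away -> blue, brightness proportional to |v| (v_max is invented)."""
    x = float(np.clip(v / v_max, -1.0, 1.0))
    return (x, 0.0, 0.0) if x >= 0 else (0.0, 0.0, -x)

velocities = np.linspace(-1.0, 1.0, 128)                 # m/s axis
power = np.exp(-0.5 * ((velocities - 0.3) / 0.1) ** 2)   # synthetic spectrum
print(doppler_rgb(mean_velocity(power, velocities)))      # reddish pixel
```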
NASA Astrophysics Data System (ADS)
Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.
2003-08-01
In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics is extensively used in a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require computer graphics systems to produce high-quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum computing's unique algorithmic features offer the possibility of speeding up some of the rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance of the classical and quantum versions of the algorithms.
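For context, a hedged classical reference (our sketch, not the paper's code): Z-buffering amounts to a per-pixel minimum-depth search, the O(N) step that Grover-style quantum minimum finding (Dürr-Høyer) performs in O(√N) oracle queries. The fragment-list layout below is hypothetical.

```python
import numpy as np

def z_buffer(width, height, fragments):
    """Classical depth test: keep the nearest fragment per pixel.
    fragments: iterable of (x, y, depth, rgb) tuples (hypothetical layout).
    A quantum variant would replace this linear minimum search with
    Durr-Hoyer minimum finding in O(sqrt(N)) oracle queries."""
    depth = np.full((height, width), np.inf)
    image = np.zeros((height, width, 3))
    for x, y, z, rgb in fragments:
        if z < depth[y, x]:           # the search step Grover would target
            depth[y, x] = z
            image[y, x] = rgb
    return image

frags = [(0, 0, 2.0, (1, 0, 0)), (0, 0, 1.0, (0, 1, 0))]
img = z_buffer(4, 4, frags)
print(img[0, 0])   # nearest fragment wins -> green
```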
EHDViz: clinical dashboard development using open-source technologies
Badgeley, Marcus A; Shameer, Khader; Glicksberg, Benjamin S; Tomlinson, Max S; Levin, Matthew A; McCormick, Patrick J; Kasarskis, Andrew; Reich, David L; Dudley, Joel T
2016-01-01
Objective To design, develop and prototype clinical dashboards to integrate high-frequency health and wellness data streams using interactive and real-time data visualisation and analytics modalities. Materials and methods We developed a clinical dashboard development framework called electronic healthcare data visualization (EHDViz) toolkit for generating web-based, real-time clinical dashboards for visualising heterogeneous biomedical, healthcare and wellness data. The EHDViz is an extensible toolkit that uses R packages for data management, normalisation and producing high-quality visualisations over the web using the R/Shiny web server architecture. We have developed use cases to illustrate the utility of EHDViz in different clinical and wellness scenarios as a visualisation aid for improving healthcare delivery. Results Using EHDViz, we prototyped clinical dashboards to demonstrate the contextual versatility of the EHDViz toolkit. An outpatient cohort was used to visualise population health management tasks (n=14 221), an inpatient cohort was used to visualise real-time acuity risk in a clinical unit (n=445), and a quantified-self example using wellness data from a fitness activity monitor worn by a single individual was also discussed (n-of-1). The back-end system retrieves relevant data from the data source, populates the main panel of the application, integrates user-defined data features in real time, and renders output using modern web browsers. The visualisation elements can be customised using health features, disease names, procedure names or medical codes to populate the visualisations. The source code of EHDViz and various prototypes developed using EHDViz are available in the public domain at http://ehdviz.dudleylab.org. Conclusions Collaborative data visualisations, wellness trend predictions, risk estimation, proactive acuity status monitoring and knowledge of complex disease indicators are essential components of implementing data-driven precision medicine. As an open-source visualisation framework capable of integrating health assessment, EHDViz aims to be a valuable toolkit for rapid design, development and implementation of scalable clinical data visualisation dashboards. PMID:27013597
Using the PhysX engine for physics-based virtual surgery with force feedback.
Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu
2009-09-01
The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interaction requires graphical updates at 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently, several physics engines have been developed which offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their latencies, higher than what is necessary for real-time graphics and haptics, pose significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library. Hence, models undergoing relatively low-frequency updates in PhysX can exist in an environment that demands much higher frequency updates for haptics. We use a collision handling layer to interface between the physical response provided by PhysX and the haptic rendering device, providing both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphics realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. Developing complex and realistic surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction rates required for such trainers. We have shown that combining different modelling strategies for behaviour, collision and graphics is possible and desirable. Such multimodal environments enable suitable rates to simulate the major steps of the LAGB procedure.
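The dual-rate coupling the authors describe can be sketched generically (our simplification, not the PhysX-based simulator itself): a slow physics state is sampled by a fast haptic loop, which renders force against the most recent collision proxy. The rates, contact stiffness, and penetration model below are invented.

```python
import numpy as np

PHYSICS_HZ, HAPTIC_HZ = 30, 1000   # graphics/physics vs. haptics update rates
STIFFNESS = 400.0                  # N/m, invented contact stiffness

def physics_step(t):
    """Stand-in for the slow engine update: returns the surface height
    under the tool (in the real system this comes from PhysX collisions)."""
    return 0.01 * np.sin(2 * np.pi * 0.5 * t)

def haptic_force(tool_z, surface_z):
    """Penalty force from penetration depth, evaluated at 1 kHz."""
    pen = surface_z - tool_z
    return STIFFNESS * pen if pen > 0 else 0.0

surface_z = physics_step(0.0)
for k in range(HAPTIC_HZ):                      # one second of haptic updates
    t = k / HAPTIC_HZ
    if k % (HAPTIC_HZ // PHYSICS_HZ) == 0:
        surface_z = physics_step(t)             # refresh proxy at ~30 Hz
    tool_z = -0.005                             # tool pressed into the tissue
    f = haptic_force(tool_z, surface_z)         # 1 kHz force rendering
print(f"final force: {f:.2f} N")
```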
NASA Astrophysics Data System (ADS)
Lammers, M.
2016-12-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper describes the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: the GPM Near Real Time Viewer (GPMNRTView) and the STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly from the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well; both the raw values and, where relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products are presented, including visuals from high-impact events observed by GPM constellation satellites.
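CZML is Cesium's JSON packet format, so the small post-processed files mentioned above can be produced with ordinary JSON tooling. A minimal hedged sketch (the satellite sample and its coordinates are invented; the field names follow the public CZML specification):

```python
import json

# A CZML stream is a JSON array that begins with a document packet.
czml = [
    {"id": "document", "name": "precip-demo", "version": "1.0"},
    {
        "id": "swath-sample-0",   # invented entity id
        "position": {"cartographicDegrees": [-80.0, 35.0, 407000.0]},
        "point": {"pixelSize": 6, "color": {"rgba": [0, 128, 255, 255]}},
        "description": "invented precipitation sample, 12.5 mm/hr",
    },
]

with open("sample.czml", "w") as f:
    json.dump(czml, f, indent=2)   # loadable via CesiumJS's CzmlDataSource
```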
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy for source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation compares auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources with anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant perception of distance was observed, indicating a deficiency of the binaural rendering condition relative to the real stimuli. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
AstroCloud: An Agile platform for data visualization and specific analyses in 2D and 3D
NASA Astrophysics Data System (ADS)
Molina, F. Z.; Salgado, R.; Bergel, A.; Infante, A.
2017-07-01
Nowadays, astronomers commonly run their own tools, or distributed computational packages, for data analysis and then visualize the results with generic applications. This chain of processes comes at a high cost: (a) analyses are applied manually and are therefore difficult to automate, and (b) data have to be serialized, increasing the cost of parsing and saving intermediary data. We are developing AstroCloud, an agile multipurpose visualization platform intended for specific analyses of astronomical images (https://astrocloudy.wordpress.com). This platform incorporates domain-specific languages which make it easily extensible. AstroCloud supports customized plug-ins, which translate into reduced data-analysis time. Moreover, it also supports 2D and 3D rendering, including interactive features in real time. AstroCloud is under development; we are currently implementing different options for data reduction and physical analyses.
MEMS- and LC-adaptive optics at the Naval Research Laboratory
NASA Astrophysics Data System (ADS)
Restaino, S. R.; Wilcox, C. C.; Martinez, T.; Andrews, J. R.; Santiago, F.; Payne, D. M.
2012-06-01
Adaptive Optics (AO) is an ensemble of techniques that aims at remedying, in real time, the deleterious effects that the Earth's turbulent atmosphere induces on both imagery and signal gathering. It has been over four decades since the first AO system was developed and tested. During this time, important technological advances have profoundly changed the way we think about and develop AO systems. The use of Micro-Electro-Mechanical Systems (MEMS) devices and Liquid Crystal Devices (LCDs) has revolutionized these technologies, making it possible to go from very expensive, very large, power-consuming systems to very compact and inexpensive ones. These changes have rendered AO systems useful and applicable in other fields ranging from medical imaging to industry. In this paper we review the research efforts at the Naval Research Laboratory (NRL) to develop AO systems based on both MEMS and LCDs in order to produce more compact and lightweight AO systems.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
A Real-Time Microwave Camera at K-Band (24 GHz)
NASA Technical Reports Server (NTRS)
Ghasr, M. T.; Abou-Khousa, M.; Baumgartner, M. A.; Zoughi, R.
2009-01-01
The nondestructive testing (NDT) community has been moving toward effective and robust inspection systems that can provide real-time information about the material, geometrical, structural, and mechanical characteristics of composite materials and structures. Moreover, in many applications it is desired to have an image of the structure, after which the information contained in the image is correlated to the above characteristics. Microwave signals penetrate inside dielectric composite structures, and their interaction with the interior of the structure renders critical information for NDT purposes. Subsequently, this information (i.e., the magnitude and phase of the reflected signal) may be used to produce an image of the interior of the structure, revealing potential flaws or anomalies. Image processing and reconstruction techniques may also be incorporated to produce high-resolution images (i.e., synthetic-aperture, back-propagation, etc.). There are several different approaches to designing a real-time microwave camera system. One approach is based on the modulated scatterer technique (MST), which is used to tag the scattered electric field in a discrete two-dimensional (2D) spatial domain (e.g., a retina), resulting in the 2D magnitude and phase distribution of the scattered electric field required for producing an image of a material or structure under inspection. The ability to rapidly modulate resonant slot antennas in such a retina, along with using receivers with fast responses, provides real-time image production capability. Design issues and criteria become more challenging at higher frequencies and for a relatively large retina size. This paper presents the basic design and challenges for a microwave camera with a retina size of 6" by 6" operating at a frequency of 24 GHz.
Learning Logic in the Global Arena through Telecommunication.
ERIC Educational Resources Information Center
Lanham, Marion; Cowan, Marlene C.
The Information Age is a new cultural era in which the dominant resource is information. For educators attempting to prepare the increasingly diverse student body for the 1990's and beyond, the scope, magnitude, and constantly changing nature of this Information Age renders the task monumental. In an effort to create a real-world context for a…
26 CFR 1.1402(a)-4 - Rentals from real estate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... emergency, discuss with H some phase of a crop production activity. In effect, H has complete charge of the... of the cost of the seed and fertilizer and is charged for the cost of materials purchased by H to... the lease. (2) Services rendered for occupants. Payments for the use or occupancy of rooms or other...
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings using a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth-perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in the displayed stereoscopic OCT volumes. PMID:27231616
A virtual reality based simulator for learning nasogastric tube placement.
Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong
2015-02-01
Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement. Copyright © 2014 Elsevier Ltd. All rights reserved.
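At run time, the offline-to-online idea above reduces to a table lookup with interpolation. A minimal sketch under assumed data (the depth-force table below is invented; a real table would come from the nonlinear finite element precomputation):

```python
import numpy as np

# Precomputed offline (invented stand-in for the FEM results):
# insertion depth in cm vs. resistance force in newtons.
depth_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
force_n  = np.array([0.0, 0.3, 0.8, 2.1, 1.2, 1.5])  # bump near a turn

def feedback_force(depth):
    """Interpolate the offline force curve; trivially fast enough
    for haptic-rate (kHz) evaluation."""
    return float(np.interp(depth, depth_cm, force_n))

for d in (2.5, 12.0, 22.0):
    print(f"depth {d:4.1f} cm -> force {feedback_force(d):.2f} N")
```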
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on the control of human movement and the development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact, and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training, or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited to research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.
Choi, Hyo-Rim; Kim, TaeYong
2017-08-17
Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement direction by providing viewpoint-weighted values and/or motion-weighted values. The proposed viewpoint-weighted dynamic time warping with multiple sensors prevents joint measurement errors and noise due to sensor measurement tolerance, enhancing recognition performance by comparing multiple joint sequences effectively. PMID:28817094
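A generic weighted-DTW sketch (our illustration, not the authors' exact formulation): per-joint weights, such as the viewpoint- or motion-derived weights described above, scale each frame-to-frame distance before the standard dynamic-programming recursion. The sequence shapes and weights are invented.

```python
import numpy as np

def weighted_dtw(seq_a, seq_b, joint_weights):
    """DTW over sequences of joint-coordinate frames.
    seq_a: (n, j, 3), seq_b: (m, j, 3), joint_weights: (j,),
    e.g. viewpoint-weighted values down-weighting unreliable joints."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for k in range(1, m + 1):
            d = np.linalg.norm(seq_a[i-1] - seq_b[k-1], axis=1)  # per-joint dist
            cost = np.dot(joint_weights, d)                      # weighted cost
            D[i, k] = cost + min(D[i-1, k], D[i, k-1], D[i-1, k-1])
    return D[n, m]

rng = np.random.default_rng(1)
a = rng.normal(size=(20, 15, 3))                    # 20 frames, 15 joints
b = a[::2] + 0.01 * rng.normal(size=(10, 15, 3))    # time-warped noisy copy
w = np.ones(15) / 15                                # uniform weights for demo
print(weighted_dtw(a, b, w))                        # small alignment cost
```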
On-chip manipulation of single microparticles, cells, and organisms using surface acoustic waves.
Ding, Xiaoyun; Lin, Sz-Chin Steven; Kiraly, Brian; Yue, Hongjun; Li, Sixing; Chiang, I-Kao; Shi, Jinjie; Benkovic, Stephen J; Huang, Tony Jun
2012-07-10
Techniques that can dexterously manipulate single particles, cells, and organisms are invaluable for many applications in biology, chemistry, engineering, and physics. Here, we demonstrate standing surface acoustic wave based "acoustic tweezers" that can trap and manipulate single microparticles, cells, and entire organisms (i.e., Caenorhabditis elegans) in a single-layer microfluidic chip. Our acoustic tweezers utilize the wide resonance band of chirped interdigital transducers to achieve real-time control of a standing surface acoustic wave field, which enables flexible manipulation of most known microparticles. The power density required by our acoustic device is significantly lower than its optical counterparts (10,000,000 times less than optical tweezers and 100 times less than optoelectronic tweezers), which renders the technique more biocompatible and amenable to miniaturization. Cell-viability tests were conducted to verify the tweezers' compatibility with biological objects. With its advantages in biocompatibility, miniaturization, and versatility, the acoustic tweezers presented here will become a powerful tool for many disciplines of science and engineering.
40 CFR 164.91 - Accelerated decision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... decision. (a) General. The Administrative Law Judge, in his discretion, may at any time render an... matter of law; or (8) Such other and further reasons as are just. (b) Effect. A decision rendered under...
Modeling the forces of cutting with scissors.
Mahvash, Mohsen; Voo, Liming M; Kim, Diana; Jeung, Kristin; Wainer, Joshua; Okamura, Allison M
2008-03-01
Modeling forces applied to scissors during cutting of biological materials is useful for surgical simulation. Previous approaches to haptic display of scissor cutting are based on recording and replaying measured data. This paper presents an analytical model based on the concepts of contact mechanics and fracture mechanics to calculate forces applied to scissors during cutting of a slab of material. The model considers the process of cutting as a sequence of deformation and fracture phases. During deformation phases, forces applied to the scissors are calculated from a torque-angle response model synthesized from measurement data multiplied by a ratio that depends on the position of the cutting crack edge and the curve of the blades. Using the principle of conservation of energy, the forces of fracture are related to the fracture toughness of the material and the geometry of the blades of the scissors. The forces applied to scissors generally include high-frequency fluctuations. We show that the analytical model accurately predicts the average applied force. The cutting model is computationally efficient, so it can be used for real-time computations such as haptic rendering. Experimental results from cutting samples of paper, plastic, cloth, and chicken skin confirm the model, and the model is rendered in a haptic virtual environment.
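The energy-conservation step can be illustrated with a small calculation (a hedged sketch; the toughness, thickness, and blade lengths are invented, and the full model also includes the deformation-phase torque response): the blade-edge force times crack advance equals the fracture toughness times the new crack area, and a quasi-static torque balance maps that force to the handles.

```python
J_C = 2.5e3       # J/m^2, invented fracture toughness of a soft material
THICKNESS = 1e-3  # m, invented sample thickness

# Energy balance in a fracture phase: F_blade * dx = J_C * (THICKNESS * dx),
# so the cutting force at the crack edge is independent of dx:
f_blade = J_C * THICKNESS

def handle_force(crack_pos, handle_len=0.09):
    """Quasi-static torque balance about the scissor pivot maps the
    blade-edge force to the handles (moment arms are invented)."""
    return f_blade * crack_pos / handle_len

for x in (0.01, 0.03, 0.05):   # crack-edge distance from the pivot, metres
    print(f"crack at {100*x:.0f} cm -> handle force {handle_force(x):.2f} N")
```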
NASA Astrophysics Data System (ADS)
Ballora, Mark; Hall, David L.
2010-04-01
Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound while simultaneously visualizing network activity in a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or for monitoring Web traffic in real time. Components rendered aurally include the IP address, geographical information, and server return codes. Users can interact with the data, raising or lowering the playback rate (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
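Since the paper's pipeline parses logs in Python before handing arrays to SuperCollider, a hedged sketch of that front end is natural. The log line, field-to-parameter mapping, and value ranges below are invented; a real system would stream the resulting array to the synthesizer.

```python
import re

LOG_RE = re.compile(
    r'(?P<ip>\d+\.\d+\.\d+\.\d+) .* "(?P<req>[^"]*)" (?P<code>\d{3})')

def sound_params(line):
    """Map one Common Log Format line to synthesis parameters:
    first IP octet -> stereo pan, return-code class -> base pitch."""
    m = LOG_RE.search(line)
    if not m:
        return None
    octet = int(m.group("ip").split(".")[0])
    code = int(m.group("code"))
    pan = octet / 255 * 2 - 1                       # -1 (left) .. +1 (right)
    pitch = {2: 440.0, 3: 330.0, 4: 550.0, 5: 660.0}.get(code // 100, 220.0)
    return {"pan": round(pan, 2), "freq_hz": pitch, "alert": code >= 400}

line = '203.0.113.9 - - [10/Oct/2020] "GET /index.html HTTP/1.1" 404 512'
print(sound_params(line))   # error responses land on a distinct pitch
```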
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have been brought about by emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometry distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
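The AR-based description amounts to predicting each pixel from its causal neighbours and inspecting the residual: geometric warping errors survive the prediction while ordinary texture largely does not. A simplified global-coefficient sketch (our illustration; the published metric fits coefficients locally and adds saliency weighting):

```python
import numpy as np

def ar_residual(img):
    """Predict each pixel from 4 causal neighbours with one global
    least-squares AR model; return the absolute residual map."""
    h, w = img.shape
    # neighbours of pixel (i, j): left, up, up-left, up-right
    X = np.stack([img[1:-1, :-2], img[:-2, 1:-1],
                  img[:-2, :-2], img[:-2, 2:]], axis=-1).reshape(-1, 4)
    y = img[1:-1, 1:-1].reshape(-1)
    coeff, *_ = np.linalg.lstsq(X, y, rcond=None)   # global AR coefficients
    pred = (X @ coeff).reshape(h - 2, w - 2)
    return np.abs(img[1:-1, 1:-1] - pred)           # large near geometry errors

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(size=(64, 64)), axis=1)  # AR-predictable texture
smooth[30:34, 30:34] += 25.0                           # fake geometric artefact
res = ar_residual(smooth)
print(res.mean(), res[29:33, 29:33].mean())            # artefact stands out
```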
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The in-house developed display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system on the GPU-accelerated render cluster can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
UWGSP4: an imaging and graphics superworkstation and its medical applications
NASA Astrophysics Data System (ADS)
Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin
1992-05-01
UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024- pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.
NASA Astrophysics Data System (ADS)
Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya
2016-09-01
In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strict positive realness (ASPR-ness) of the controlled system. A parallel feedforward compensator (PFC) design scheme that renders the resulting augmented controlled system ASPR is also proposed, in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.
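The PFC idea can be sketched numerically (a hedged toy example, not the paper's engine model; the plant, compensator gain, and controller gain are invented). Adding a small parallel feedthrough to the plant output gives an augmented output y_a = y + f*u on which simple output feedback is stable:

```python
import numpy as np

# Invented relative-degree-two plant: x1' = x2, x2' = -x1 - 0.2*x2 + u, y = x1
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

F_PFC = 0.05      # parallel feedforward gain (invented)
K_OUT = 20.0      # output feedback gain (invented)
DT, STEPS = 1e-3, 5000
Y_REF = 1.0       # constant reference

x = np.zeros(2)
u = 0.0
for _ in range(STEPS):
    y = C @ x
    y_aug = y + F_PFC * u            # augmented output (previous u breaks
    u = K_OUT * (Y_REF - y_aug)      # the algebraic loop in this sketch)
    x = x + DT * (A @ x + B * u)     # Euler integration of the plant

print(f"y -> {C @ x:.3f} (settles near Y_REF, proportional-only offset)")
```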
Sun, Bo; Sunkavalli, Kalyan; Ramamoorthi, Ravi; Belhumeur, Peter N; Nayar, Shree K
2007-01-01
The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.
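A hedged sketch of how such parameter trends become an analytic TVBRDF (our illustration; the model form and all numbers are invented): fit each time sample's BRDF to a parametric lobe, then fit the parameter-versus-time curve so the material can be evaluated at any time t.

```python
import numpy as np

# Invented measurements: specular lobe exponent of drying paint,
# one fitted value per acquisition time (minutes).
t_samples = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
shininess = np.array([5.0, 12.0, 30.0, 55.0, 60.0])

# Fit the parameter trend itself (quadratic in log-time, an invented form).
trend = np.polyfit(np.log1p(t_samples), shininess, 2)

def tvbrdf_phong(cos_spec, t):
    """Time-varying Phong-style lobe: the exponent follows the trend."""
    n = np.polyval(trend, np.log1p(t))
    return (n + 2) / (2 * np.pi) * np.maximum(cos_spec, 0.0) ** n

for t in (0.0, 15.0, 40.0):
    v = tvbrdf_phong(np.cos(np.radians(10)), t)
    print(f"t={t:4.1f} min, lobe value at 10 deg: {v:.2f}")
```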
Tangible display systems: direct interfaces for computer-based studies of surface appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.; Ferwerda, James A.
2010-02-01
When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
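The orientation half of such tracking can be sketched from the accelerometer alone: the measured gravity vector yields two tilt angles that a renderer can apply to the virtual surface. This is a minimal sketch with standard gravity-based tilt formulas; the axis and sign conventions are assumptions, not the authors' code:

    import numpy as np

    def tilt_from_accelerometer(ax, ay, az):
        # Gravity-referenced tilt: pitch about one screen axis,
        # roll about the other (conventions assumed).
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        roll = np.arctan2(ay, az)
        return pitch, roll

    pitch, roll = tilt_from_accelerometer(0.0, 0.26, 0.97)  # ~15 deg roll
    print(np.degrees(pitch), np.degrees(roll))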
Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.
Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong
2006-04-01
This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis is used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring involving both real and rendered bronchoscope images was conducted.
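A minimal sketch of the parameter-recovery step, assuming a light source co-located with the camera (a bronchoscope-like restriction) and a fixed specular exponent so the remaining coefficients follow from linear least squares. The shading model and samples are illustrative stand-ins, not the paper's:

    import numpy as np

    # I = (kd*cos(theta) + ks*cos(theta)**n) / r**2, with n fixed.
    rng = np.random.default_rng(1)
    cos_t = rng.uniform(0.3, 1.0, 200)     # angles from 2D/3D registration
    r = rng.uniform(5.0, 30.0, 200)        # source-surface distance (mm)
    n = 20.0
    kd_true, ks_true = 0.7, 0.3
    I = (kd_true * cos_t + ks_true * cos_t**n) / r**2

    A = np.column_stack([cos_t, cos_t**n]) / r[:, None]**2
    kd, ks = np.linalg.lstsq(A, I, rcond=None)[0]
    print(f"recovered kd={kd:.2f}, ks={ks:.2f}")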
The Design and Implementation of Virtual Roaming in Yunnan Diqing Tibetan traditional Villages
NASA Astrophysics Data System (ADS)
Cao, Lucheng; Xu, Wu; Li, Ke; Jin, Chunjie; Su, Ying; He, Jin
2018-06-01
Traditional residences are a continuation of intangible cultural heritage and the primitive soil for its development. At present, the protection and inheritance of traditional villages are under pressure from modernization, and the phenomenon of assimilation is serious. This article takes these problems as its starting point, analyzes why and how virtual reality technology can help solve them, and explores the Yunnan Diqing Tibetan traditional dwellings as a concrete example. First, using VR technology with real images and sound, we simulate a near-real virtual world. Second, we collect a large amount of real image information and build the visualization models of the buildings on the 3DMAX software platform with UV mapping and rendering optimization. Finally, the Vizard virtual reality development platform is used to build the roaming system and realize virtual interaction. The roaming system was published online, overcoming the drawbacks of unintuitive presentation and limited interactivity; these ideas can give a whole new meaning to projects for the protection of cultural relic buildings. At the same time, visitors can enjoy the "Dian-style" architectural style and the cultural connotations of the dwelling houses of Diqing, Yunnan.
Real-time evolution of a large-scale relativistic jet
NASA Astrophysics Data System (ADS)
Martí, Josep; Luque-Escamilla, Pedro L.; Romero, Gustavo E.; Sánchez-Sutil, Juan R.; Muñoz-Arjonilla, Álvaro J.
2015-06-01
Context. Astrophysical jets are ubiquitous in the Universe on all scales, but their large-scale dynamics and evolution in time are hard to observe since they usually develop at a very slow pace. Aims: We aim to obtain the first observational proof of the expected large-scale evolution and interaction with the environment in an astrophysical jet. Only jets from microquasars offer a chance to witness the real-time, full-jet evolution within a human lifetime, since they combine a "short", few-parsec length with relativistic velocities. Methods: The methodology of this work is based on a systematic recalibration of interferometric radio observations of microquasars available in public archives. In particular, radio observations of the microquasar GRS 1758-258 over less than two decades have provided the most striking results. Results: Significant morphological variations in the extended jet structure of GRS 1758-258, previously missed, are reported here. Its northern radio lobe underwent a major morphological variation that rendered the hotspot undetectable in 2001; it reappeared in the following years. The reported changes confirm the Galactic nature of the source. We tentatively interpret them in terms of the growth of instabilities in the jet flow. There is also evidence of a surrounding cocoon. These results can provide a testbed for models accounting for the evolution of jets and their interaction with the environment.
Single DMD time-multiplexed 64-views autostereoscopic 3D display
NASA Astrophysics Data System (ADS)
Loreti, Luigi
2013-03-01
Based on a previous prototype of the real-time 3D holographic display developed last year, we have developed a new concept for an auto-stereoscopic multiview display: a 64-view, wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer, which generates a variable standing acoustic wave in the crystal that acts as a phase grating. The DMD projects the 64 points of view of the image onto the crystal cube in fast sequence. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed at the observer. A telescope optical system enlarges the image to the right dimension. A VHDL firmware to render 64 views (16-bit 4:2:2) of a CAD model (obj, dxf, or 3Ds) and depth-map-encoded video images in real time (16 ms) was developed on the Virtex5 FPGA resident in the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.
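A quick timing sanity check on the quoted figures (assuming each DMD frame carries exactly one view and ignoring blanking overhead; both are our assumptions):

    dmd_fps, views = 24_000, 64
    volume_rate = dmd_fps / views      # full 64-view refreshes per second
    print(volume_rate)                 # 375 Hz multiview refresh
    print(1000 / volume_rate)          # ~2.7 ms per full 64-view cycle
    # The quoted 16 ms FPGA render budget for 64 views corresponds to
    print(1000 / 16)                   # 62.5 content updates per second,
    # so rendering, not projection, bounds the content update rate.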
Mixed Reality Meets Pharmaceutical Development.
Forrest, William P; Mackey, Megan A; Shah, Vivek M; Hassell, Kerry M; Shah, Prashant; Wylie, Jennifer L; Gopinath, Janakiraman; Balderhaar, Henning; Li, Li; Wuelfing, W Peter; Helmy, Roy
2017-12-01
As science evolves, the need for more efficient and innovative knowledge transfer capabilities becomes evident. Advances in drug discovery and delivery sciences have directly impacted the pharmaceutical industry, though the added complexities have not shortened the development process. These added complexities also make it difficult for scientists to rapidly and effectively transfer knowledge to offset the lengthened drug development timelines. While webcams, camera phones, and iPads have been explored as potential new methods of real-time information sharing, their non-"hands-free" nature and lack of a shared viewer and observer point of view render them unsuitable for the R&D laboratory or manufacturing setting. As an alternative solution, the Microsoft HoloLens mixed-reality headset was evaluated as a more efficient, hands-free method of knowledge transfer and information sharing. After completing a traditional method transfer between 3 R&D sites (Rahway, NJ; West Point, PA; and Schnachen, Switzerland), a retrospective analysis of efficiency gain was performed through the comparison of a mock method transfer between the NJ and PA sites using the HoloLens. The results demonstrated a minimum 10-fold gain in efficiency, derived from savings in time and cost and from the ability to perform real-time data analysis and discussion. In addition, other use cases were evaluated involving vendors and contract research/manufacturing organizations. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Increasing Capacity Exploitation in Food Supply Chains Using Grid Concepts
NASA Astrophysics Data System (ADS)
Volk, Eugen; Müller, Marcus; Jacob, Ansger; Racz, Peter; Waldburger, Martin
Food supply chains today are characterized by fixed trade relations with long-term contracts established between heterogeneous supply chain companies. Production and logistics capacities of these companies are often utilized in an economically inefficient manner. In addition, increased consumer awareness of food safety issues renders supply chain management even more challenging, since integrated tracking and tracing along the whole food supply chain is needed. Facing these issues of supply chain management complexity and completely documented product quality, this paper proposes a full-lifecycle solution for dynamic capacity markets based on concepts used in the field of Grid computing [1], such as management of Virtual Organizations (VOs) combined with Service Level Agreements (SLAs). The solution enables the cost-efficient utilization of real-world capacities (e.g., production capacities or logistics facilities) through a simple, browser-based portal. Users are able to enter into product-specific negotiations with buyers and suppliers of a food supply chain and to obtain real-time access to product information, including SLA evaluation reports. Thus, business opportunities in wider market access, process innovation, and trustworthy food products are offered to participating supply chain companies.
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2013-03-01
We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook, our first implementation on a laptop computer; tangiView, a more refined implementation on a tablet device; tangiPaint, a tangible digital painting application; and phantoView, an application that takes the tangible imaging concept into stereoscopic 3D.
Use of the Uro Dyna-CT in endourology - the new frontier.
Vicentini, Fabio C; Botelho, Luiz A A; Braz, José L M; Almeida, Ernane S; Hisano, Marcelo
2017-01-01
We describe the use of the Uro Dyna-CT, an imaging system used in the operating room that produces real-time three-dimensional (3D) imaging and cross-sectional image reconstructions similar to an intraoperative computerized tomography, during a percutaneous nephrolithotomy and a contralateral flexible ureteroscopy in the complete supine position. A 65-year-old female patient had an incomplete calyceal staghorn stone in the right kidney and a 10 mm stone in the left one. The procedure was uneventful, and the intraoperative use of the Uro Dyna-CT identified 2 residual stones that were not found by digital fluoroscopy and flexible nephroscopy at the end of surgery, helping us to render the patient stone-free in one procedure, as confirmed by a postoperative CT scan. Prospective studies will define the real role of the Uro Dyna-CT in endourological procedures, but it seems to be a very promising tool for improving stone-free rates and decreasing auxiliary procedures, especially in complex cases. Copyright® by the International Brazilian Journal of Urology.
MTO-like reference mask modeling for advanced inverse lithography technology patterns
NASA Astrophysics Data System (ADS)
Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min
2017-07-01
Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates issues for existing mask inspection database rendering, including large data volumes, low transfer rates, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method, including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware, to meet the inspection challenges posed by advanced ILT masks.
Light transport on path-space manifolds
NASA Astrophysics Data System (ADS)
Jakob, Wenzel Alban
The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.
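The "equation-solving iteration" can be illustrated on a toy case: Newton's method applied to the stationarity condition of Fermat's optical path length finds the refraction point on a flat interface. This one-dimensional sketch is our stand-in for the dissertation's walks on high-dimensional specular manifolds, with all geometry assumed:

    import numpy as np

    A, B = np.array([0.0, 1.0]), np.array([2.0, -1.0])
    n1, n2 = 1.0, 1.5                   # refractive indices above/below y=0

    def dpath(x):                       # d/dx of n1*|A-P| + n2*|P-B|, P=(x,0)
        return n1 * (x - A[0]) / np.hypot(x - A[0], A[1]) \
             + n2 * (x - B[0]) / np.hypot(x - B[0], B[1])

    x, eps = 1.0, 1e-6                  # initial guess on the interface
    for _ in range(20):                 # Newton iteration on dpath(x) = 0
        x -= dpath(x) / ((dpath(x + eps) - dpath(x - eps)) / (2 * eps))
    print(f"refraction point x = {x:.6f}, stationarity = {dpath(x):.2e}")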
IceT users' guide and reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2011-01-01
The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
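The core sort-last operation can be sketched generically (plain Python, explicitly not the IceT C API): each node renders a full-resolution color and depth image of its share of the data, and compositing keeps the fragment nearest the viewer per pixel:

    import numpy as np

    def composite(images):             # [(rgb HxWx3, depth HxW), ...]
        rgb, depth = images[0][0].copy(), images[0][1].copy()
        for c, d in images[1:]:
            nearer = d < depth         # per-pixel depth test
            rgb[nearer] = c[nearer]
            depth[nearer] = d[nearer]
        return rgb, depth

    h, w = 4, 4
    rng = np.random.default_rng(2)
    parts = [(rng.random((h, w, 3)), rng.random((h, w))) for _ in range(3)]
    rgb, depth = composite(parts)
    print(rgb.shape)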
Combined approach of shell and shear-warp rendering for efficient volume visualization
NASA Astrophysics Data System (ADS)
Falcao, Alexandre X.; Rocha, Leonardo M.; Udupa, Jayaram K.
2003-05-01
In Medical Imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that, typically, SWR can be on the average 1.38 times faster than SR, but it requires from 2 to 8 times more memory space than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR to allow shear-warp factorization of the viewing matrix in order to obtain speed up gains for SR, without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major differences in the computational aspects, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1GHz Pentium-III PC with 512MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best speed and storage characteristics compromise among these methods. We also show that SWSR improves the rendition quality over SR, and provides renditions similar to those produced by SWR.
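The shear part of the factorization can be sketched on a toy volume: after factorization, slices are only translated before front-to-back compositing, with the final 2D warp omitted here. This is an illustrative simplification (nearest-neighbour shear via np.roll, which wraps at borders; a real implementation would pad), not the SWSR data structure:

    import numpy as np

    def shear_composite(vol, alpha, view):        # vol: (Z, Y, X) densities
        sx, sy = -view[0] / view[2], -view[1] / view[2]   # per-slice shear
        Z, Y, X = vol.shape
        img, trans = np.zeros((Y, X)), np.ones((Y, X))
        for k in range(Z):                        # front-to-back compositing
            dy, dx = int(round(sy * k)), int(round(sx * k))
            sl = np.roll(np.roll(vol[k], dy, 0), dx, 1)
            a = np.roll(np.roll(alpha[k], dy, 0), dx, 1)
            img += trans * a * sl
            trans *= (1 - a)
        return img                                # intermediate image, pre-warp

    vol = np.random.default_rng(3).random((16, 32, 32))
    img = shear_composite(vol, 0.1 * vol, view=np.array([0.2, 0.1, 1.0]))
    print(img.shape)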
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields such as visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++; therefore, intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks, such as en masse random number generation or real-time image processing with local and global operations.
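A generic illustration of the composition idea in plain NumPy, standing in for the GLSL shaders such a framework generates (names and parameters are ours, not the GEARS API): a stimulus is a function of pixel coordinates and time, so components combine by simple arithmetic:

    import numpy as np

    H, W, fps = 600, 800, 60.0
    y, x = np.mgrid[0:H, 0:W]

    def drifting_grating(t, spatial_period=80.0, speed=120.0, contrast=0.8):
        # Luminance in [0, 1] for a horizontally drifting sine grating.
        phase = 2 * np.pi * (x - speed * t) / spatial_period
        return 0.5 + 0.5 * contrast * np.sin(phase)

    frames = [drifting_grating(n / fps) for n in range(10)]  # first 10 frames
    print(len(frames), frames[0].shape)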
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bargellini, Irene, E-mail: irenebargellini@hotmail.com; Turini, Francesca; Bozzi, Elena
To assess the feasibility of proper hepatic artery catheterization using a 3D model obtained from preprocedural computed tomographic angiography (CTA), fused with real-time fluoroscopy, during transarterial chemoembolization of hepatocellular carcinoma. Twenty consecutive cirrhotic patients with hepatocellular carcinoma undergoing transarterial chemoembolization were prospectively enrolled onto the study. The early arterial phase axial images of the preprocedural CTA were postprocessed on an independent workstation connected to the angiographic system (Innova 4100; GE Healthcare, Milwaukee, WI), obtaining a 3D volume rendering image (VR) that included the abdominal aorta, splanchnic arteries, and first and second lumbar vertebrae. The VR image was manually registered to the real-time X-ray fluoroscopy, with the lumbar spine used as the reference. The VR image was then used as guidance to selectively catheterize the proper hepatic artery. The procedure was considered successful when performed with no need for intraarterial contrast injections or angiographic acquisitions. The procedure was successful in 19 (95%) of 20 patients. In one patient, celiac trunk angiography was required for the presence of a significant ostial stenosis that was underestimated at computed tomography. Time for image reconstruction and registration was <10 min in all cases. The use of the preprocedural CTA model with fluoroscopy enables confident and direct catheterization of the proper hepatic artery with no need for preliminary celiac trunk angiography, thus reducing radiation exposure and contrast media administration.
The PARIGA server for real time filtering and analysis of reciprocal BLAST results.
Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna
2013-01-01
BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences onto a database to more complex procedures such as clustering or annotation processes. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure, and tools for parsing or filtering BLAST results for different purposes are required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of Python-serialized objects, results can be filtered according to several parameters in a real-time fashion, without re-running the process and without additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations very fast and easy to use.
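One of the filters named above, reciprocal best hits, can be sketched over rows that mimic tabular BLAST output (query, target, bitscore). The data and names below are illustrative, not PARIGA's internals:

    def best_hits(rows):
        # Keep, per query, the target with the highest bitscore.
        best = {}
        for q, t, score in rows:
            if q not in best or score > best[q][1]:
                best[q] = (t, score)
        return {q: t for q, (t, _) in best.items()}

    a_vs_b = [("a1", "b7", 310.0), ("a1", "b2", 55.0), ("a2", "b2", 210.0)]
    b_vs_a = [("b7", "a1", 305.0), ("b2", "a2", 198.0), ("b2", "a1", 60.0)]
    fwd, rev = best_hits(a_vs_b), best_hits(b_vs_a)
    rbh = [(q, t) for q, t in fwd.items() if rev.get(t) == q]
    print(rbh)   # [('a1', 'b7'), ('a2', 'b2')]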
Zillohu, Ahnaf Usman; Abdelaziz, Ramzy; Homaeigohar, Shahin; Krasnov, Igor; Müller, Martin; Strunskus, Thomas; Elbahri, Mady
2014-01-01
We demonstrate a transferable device that can turn the wettability of surfaces sticky or slippy, as required. It is composed of polymeric yarn with a fibrous structure, which can be lifted and placed on any surface to impart unique wettability properties to it. We introduce a random polyvinylidene fluoride (PVDF) fiber mesh as a biomimetic rose-petal surface. When it is decorated with PVDF nanofiber yarns, the random mesh transforms from the sticky rose-petal state into the slippy grass-leaf state. When it is placed on a sticky, hydrophilic metal coin, it converts the surface of the coin to superhydrophobic. Adjustments to the yarn system, such as inter-yarn spacing, can be made in real time to influence its wettability, which is a unique feature. Next, we load the polymer with a photochromic compound for chemical restructuring. This affects the sliding angle of a water drop and makes the fibers optically active. We also demonstrate a “water droplet lens” concept that enables erasable writing on the photochromic, sticky rose-petal-like fibrous surface. A droplet on a highly hydrophobic surface acts as a ball lens that concentrates light onto a hot spot; thereby we demonstrate UV-light writing with water lenses and visible-light erasing. PMID:25491016
Authentic Astronomical Discovery in Planetariums: Data-Driven Immersive Lectures
NASA Astrophysics Data System (ADS)
Wyatt, Ryan Jason
2018-01-01
Planetariums are akin to “branch offices” for astronomy in major cities and other locations around the globe. With immersive, fulldome video technology, modern digital planetariums offer the opportunity to integrate authentic astronomical data into both pre-recorded shows and live lectures. At the California Academy of Sciences Morrison Planetarium, we host the monthly Benjamin Dean Astronomy Lecture Series, which features researchers describing their cutting-edge work to well-informed lay audiences. The Academy’s visualization studio and engineering teams work with researchers to visualize their data in both pre-rendered and real-time formats, and these visualizations are integrated into a variety of programs—including lectures! The assets are then made available to any other planetariums with similar software to support their programming. A lecturer can thus give the same immersive presentation to audiences in a variety of planetariums. The Academy has also collaborated with Chicago’s Adler Planetarium to bring Kavli Fulldome Lecture Series to San Francisco, and the two theaters have also linked together in live “domecasts” to share real-time content with audiences in both cities. These lecture series and other, similar projects suggest a bright future for astronomers to bring their research to the public in an immersive and visually compelling format.
Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments
Oh, Soo Youn; Bailenson, Jeremy; Krämer, Nicole; Li, Benjamin
2016-01-01
Previous studies demonstrated the positive effects of smiling on interpersonal outcomes. The present research examined if enhancing one’s smile in a virtual environment could lead to a more positive communication experience. In the current study, participants’ facial expressions were tracked and mapped on a digital avatar during a real-time dyadic conversation. The avatar’s smile was rendered such that it was either a slightly enhanced version or a veridical version of the participant’s actual smile. Linguistic analyses using the Linguistic Inquiry Word Count (LIWC) revealed that participants who communicated with each other via avatars that exhibited enhanced smiles used more positive words to describe their interaction experience compared to those who communicated via avatars that displayed smiling behavior reflecting the participants’ actual smiles. In addition, self-report measures showed that participants in the ‘enhanced smile’ condition felt more positive affect after the conversation and experienced stronger social presence compared to the ‘normal smile’ condition. These results are particularly striking when considering the fact that most participants (>90%) were unable to detect the smiling manipulation. This is the first study to demonstrate the positive effects of transforming unacquainted individuals’ actual smiling behavior during a real-time avatar-networked conversation. PMID:27603784
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
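The overlay geometry described above reduces to projecting segmented 3D structures through the tracked camera pose. A minimal sketch with made-up intrinsics and pose, not the system's calibration:

    import numpy as np

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
    R = np.eye(3)                       # head-tracker pose: rotation
    t = np.array([0.0, 0.0, 400.0])     # and translation (mm, assumed)

    def project(points):                # points: (N, 3) in patient coordinates
        cam = points @ R.T + t          # patient -> camera frame
        pix = cam @ K.T                 # pinhole projection
        return pix[:, :2] / pix[:, 2:3]

    contour = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
    print(project(contour))             # pixel positions for the overlay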
NASA Astrophysics Data System (ADS)
Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent a widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? Which role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
Social patterns revealed through random matrix theory
NASA Astrophysics Data System (ADS)
Sarkar, Camellia; Jalan, Sarika
2014-11-01
Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness present in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals in the society in situations of crisis. It has also been contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.
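An illustrative random-matrix diagnostic of the kind such analyses rely on (synthetic network, not the paper's dataset): the mean consecutive-spacing ratio of the spectrum, compared against the GOE (~0.536) and Poisson (~0.386) reference values, probes correlations in the interaction pattern without spectral unfolding:

    import numpy as np

    rng = np.random.default_rng(4)
    N = 400
    W = rng.exponential(1.0, (N, N)) * (rng.random((N, N)) < 0.1)
    W = np.triu(W, 1); W = W + W.T      # symmetric weighted network

    ev = np.sort(np.linalg.eigvalsh(W))
    s = np.diff(ev)                     # consecutive level spacings
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    print(f"mean spacing ratio: {r.mean():.3f}")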
Latency in Distributed Acquisition and Rendering for Telepresence Systems.
Ohl, Stephan; Willert, Malte; Staadt, Oliver
2015-12-01
Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems in which acquisition and rendering are distributed. Keeping latency low is important for immersing users in the virtual environment. To better understand latency problems and to identify their sources, we focus on the decomposition of system latency into sub-latencies. We contribute a latency model and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. The presented methodology enables the developer to optimize the design, find implementation issues, and gain deeper knowledge about specific sources of latency.
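The decomposition idea can be sketched as a dataflow graph whose end-to-end latency is the longest acquisition-to-display path. Stage names and numbers below are made up for illustration, not measurements from the paper:

    stages = {                          # stage: (latency_ms, upstream stages)
        "capture":      (17, []),
        "depth_fusion": (12, ["capture"]),
        "network":      (25, ["depth_fusion"]),
        "render":       (11, ["network"]),
        "display":      (8,  ["render"]),
    }

    def end_to_end(stage):
        lat, deps = stages[stage]
        return lat + max((end_to_end(d) for d in deps), default=0)

    print(f"end-to-end latency: {end_to_end('display')} ms")   # 73 ms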
Tiled vector data model for the geographical features of symbolized maps.
Li, Lin; Hu, Wei; Zhu, Haihong; Li, You; Zhang, Hang
2017-01-01
Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map based on raster tiles is inferior to one based on vector tiles in terms of interactivity, because vector maps provide a convenient and effective way to access and manipulate web map features. However, the critical issue in rendering tiled vector maps is that geographical features rendered in the form of map symbols via vector tiles may exhibit visual discontinuities, such as graphic conflicts and loss of data around tile borders, which are likely the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for the geographical features of symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. The model presents a method to tailor geographical features in terms of map symbols and 'addition' (join) operations on two levels: geographical features and map features. Maps built on the proposed model can thus resolve the visual discontinuity problem without weakening the interactivity of vector maps. The proposed model is validated on two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.
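A toy sketch of the 'addition' (join) step: fragments of a feature clipped into different tiles share a feature id and are rejoined before symbolization, so the symbol is rendered over the whole geometry rather than per tile. The data layout and naive endpoint-chaining below are illustrative assumptions:

    from collections import defaultdict

    tiles = {
        (0, 0): [{"fid": 42, "coords": [(0.1, 0.5), (1.0, 0.6)]}],
        (1, 0): [{"fid": 42, "coords": [(1.0, 0.6), (1.7, 0.4)]}],
    }

    def join_features(tiles):
        parts = defaultdict(list)
        for feats in tiles.values():
            for f in feats:
                parts[f["fid"]].append(f["coords"])
        joined = {}
        for fid, frags in parts.items():
            line = list(frags[0])
            for frag in frags[1:]:      # chain fragments sharing an endpoint
                line += frag[1:] if frag[0] == line[-1] else frag
            joined[fid] = line
        return joined

    print(join_features(tiles)[42])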
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayr, S., E-mail: suvi.bayr@jyu.fi; Ojanperä, M.; Kaparaju, P.
Highlights: • Rendering wastes' mono-digestion and co-digestion with potato pulp were studied. • A CSTR process with an OLR of 1.5 kg VS/m³ d and an HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • A CSTR process with an OLR of 1.5 kg VS/m³ d and an HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products, viz. volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃), can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³ d and a hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VS fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult-to-treat industrial waste materials.
Image simulation for HardWare In the Loop simulation in EO domain
NASA Astrophysics Data System (ADS)
Cathala, Thierry; Latger, Jean
2015-10-01
An infrared camera used as a weapon subsystem for automatic guidance is a key component of military carriers such as missiles. The associated image processing, which controls the navigation, needs to be assessed intensively. Experimentation in the real world is very expensive, which is the main reason why hybrid simulation, also called HardWare In the Loop (HWIL), is increasingly required nowadays. In that field, IR projectors are able to cast IR photon fluxes directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images to be displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its SE-FAST-HWIL product. It presents the methodology and lessons learned ("Return of Experience") from OKTAL-SE. Examples are given in the frame of the SE-Workbench. The presentation focuses on trials on real, complex operational 3D cases. In particular, three important topics that are very sensitive with regard to image-generation performance are detailed: first, 3D sea-surface representation; then particle-system rendering, especially to simulate flares; and finally sensor-effects modelling. Beyond the "projection mode", some information is given on the new SE-FAST-HWIL capabilities dedicated to the "injection mode".
CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.
Johnson, Micah K; Dale, Kevin; Avidan, Shai; Pfister, Hanspeter; Freeman, William T; Matusik, Wojciech
2011-09-01
Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.
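The color-transfer component can be sketched in the mean/variance-matching style for a single matched region (the full system also transfers tone and texture and works per cosegmented region; the data below are synthetic and the per-channel formulation is our simplification):

    import numpy as np

    def transfer_color(cg, real):           # (H, W, 3) float arrays in [0, 1]
        out = cg.copy()
        for ch in range(3):
            m_cg, s_cg = cg[..., ch].mean(), cg[..., ch].std() + 1e-8
            m_re, s_re = real[..., ch].mean(), real[..., ch].std()
            out[..., ch] = (cg[..., ch] - m_cg) * (s_re / s_cg) + m_re
        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(5)
    cg, real = rng.random((8, 8, 3)), rng.random((8, 8, 3)) * 0.5
    result = transfer_color(cg, real)
    print(result.shape)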
NASA Technical Reports Server (NTRS)
Bargar, Robin
1995-01-01
The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
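A toy sketch of the first functionality in the list, sonification: data values mapped logarithmically to pitch. The range and scale are illustrative choices, not a recommendation from the presentation:

    import numpy as np

    data = np.array([0.1, 0.4, 0.35, 0.9, 0.6])
    lo, hi = 220.0, 880.0                       # A3..A5 in Hz (assumed range)
    norm = (data - data.min()) / (np.ptp(data) + 1e-12)
    freqs = lo * (hi / lo) ** norm              # logarithmic pitch mapping
    print(np.round(freqs, 1))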
Towards Deep Learning from Twitter for Improved Tsunami Alerts and Advisories
NASA Astrophysics Data System (ADS)
Lumb, L. I.; Freemantle, J. R.
2017-12-01
Data from social-networking services increasingly complements that from traditional sources in scenarios that seek to 'cultivate' situational awareness. As false-positive alerts and retracted advisories appear to suggest, establishing a causal connection between earthquakes and tsunamis remains an extant challenge that could prove life-critical. Because posts regarding such natural disasters typically 'trend' in real time via social media, we extract tweets in an effort to elucidate this cause-effect relationship from a very different perspective. To extract content of potential geophysical value from a multiplicity of 140-character tweets streamed in real time, we apply Natural Language Processing (NLP) to the unstructured data and metadata available via Twitter. In Deep Learning from Twitter, words such as "earthquake" are represented as vectors embedded in a corpora of tweets, whose proximity to words such as "tsunami" can be subsequently quantified. Furthermore, when use is made of pre-trained word vectors available for various reference corpora, geophysically credible tweets are rendered distinguishable by quantifying similarities through use of a word-vector dot product. Finally, word-vector analogies are shown to be promising in terms of deconstructing the earthquake-tsunami relationship in terms of the cumulative effect of multiple, contributing factors (see figure). Because diction is anticipated to differ in tweets that follow a tsunami-producing earthquake, our emphasis here is on the re-analysis of actual event data extracted from Twitter that quantifies word sense relative to earthquake-only events. If proven viable, our approach could complement those measures already in place to deliver real-time alerts and advisories following tsunami-causing earthquakes. With climate change accelerating the frequency of glacial calving, and in so doing providing an alternate, potential source for tsunamis, our approach is anticipated to be of value in broader contexts.
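The word-vector measures can be illustrated with made-up low-dimensional embeddings (real work would use vectors trained on a tweet corpus; the 4-d vectors below are purely for demonstration):

    import numpy as np

    vec = {
        "earthquake": np.array([0.9, 0.1, 0.3, 0.0]),
        "tsunami":    np.array([0.8, 0.2, 0.5, 0.1]),
        "concert":    np.array([0.0, 0.9, 0.1, 0.7]),
    }
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(cos(vec["earthquake"], vec["tsunami"]))   # high similarity
    print(cos(vec["earthquake"], vec["concert"]))   # low similarity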
Navigation with Electromagnetic Tracking for Interventional Radiology Procedures
Wood, Bradford J.; Zhang, Hui; Durrani, Amir; Glossop, Neil; Ranjan, Sohan; Lindisch, David; Levy, Eliott; Banovac, Filip; Borgert, Joern; Krueger, Sascha; Kruecker, Jochen; Viswanathan, Anand; Cleary, Kevin
2008-01-01
PURPOSE To assess the feasibility of the use of preprocedural imaging for guide wire, catheter, and needle navigation with electromagnetic tracking in phantom and animal models. MATERIALS AND METHODS An image-guided intervention software system was developed based on open-source software components. Catheters, needles, and guide wires were constructed with small position and orientation sensors in the tips. A tetrahedral-shaped weak electromagnetic field generator was placed in proximity to an abdominal vascular phantom or three pigs on the angiography table. Preprocedural computed tomographic (CT) images of the phantom or pig were loaded into custom-developed tracking, registration, navigation, and rendering software. Devices were manipulated within the phantom or pig with guidance from the previously acquired CT scan and simultaneous real-time angiography. Navigation within positron emission tomography (PET) and magnetic resonance (MR) volumetric datasets was also performed. External and endovascular fiducials were used for registration in the phantom, and registration error and tracking error were estimated. RESULTS The CT scan position of the devices within phantoms and pigs was accurately determined during angiography and biopsy procedures, with manageable error for some applications. Preprocedural CT depicted the anatomy in the region of the devices with real-time position updating and minimal registration error and tracking error (<5 mm). PET can also be used with this system to guide percutaneous biopsies to the most metabolically active region of a tumor. CONCLUSIONS Previously acquired CT, MR, or PET data can be accurately codisplayed during procedures with reconstructed imaging based on the position and orientation of catheters, guide wires, or needles. Multimodality interventions are feasible by allowing the real-time updated display of previously acquired functional or morphologic imaging during angiography, biopsy, and ablation. PMID:15802449
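The fiducial registration step can be sketched with the standard SVD (Horn/Kabsch-style) rigid fit and its fiducial registration error (FRE); the coordinates below are synthetic, not the phantom or animal data:

    import numpy as np

    def register(P, Q):                 # P: CT fiducials, Q: tracker fiducials
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T              # rotation mapping P into Q
        t = Q.mean(0) - R @ P.mean(0)
        return R, t

    rng = np.random.default_rng(6)
    P = rng.random((6, 3)) * 100
    true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    true_R *= np.sign(np.linalg.det(true_R))    # ensure a proper rotation
    Q = P @ true_R.T + np.array([5.0, -3.0, 12.0]) + rng.normal(0, 0.5, P.shape)
    R, t = register(P, Q)
    fre = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
    print(f"FRE = {fre:.2f} mm")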
Köller, Thomas; Kurze, Daniel; Lange, Mirjam; Scherdin, Martin; Podbielski, Andreas; Warnke, Philipp
2016-01-01
A fully automated multiplex real-time PCR assay—including a sample process control and a plasmid-based positive control—for the detection and differentiation of herpes simplex virus 1 (HSV1), herpes simplex virus 2 (HSV2) and varicella-zoster virus (VZV) from cerebrospinal fluid (CSF) was developed on the BD Max platform. Performance was compared to an established, accredited multiplex real-time PCR protocol utilizing the easyMAG and the LightCycler 480/II, both very common devices in viral molecular diagnostics. For clinical validation, 123 CSF specimens and 40 reference samples from national interlaboratory comparisons were examined with both methods, resulting in 97.6% and 100% concordance for CSF and reference samples, respectively. The BD Max platform achieved sensitivities of 173 (CI 95%, 88–258) copies/ml for HSV1, 171 (CI 95%, 148–194) copies/ml for HSV2 and 84 (CI 95%, 5–163) copies/ml for VZV. Cross-reactivity could be excluded by checking 25 common viral, bacterial and fungal human pathogens. Workflow analyses showed shorter test duration as well as remarkably fewer and easier preparation steps, with the potential to reduce the error rates that occur when patient samples are assessed manually. This protocol allows for a fully automated PCR assay on the BD Max platform for the simultaneous detection of herpesviridae from CSF specimens. Single or multiple infections due to HSV1, HSV2 and VZV can reliably be differentiated with good sensitivities. Control parameters are included within the assay, thereby rendering it suitable for current quality management requirements. PMID:27092772
Near Real Time Tools for ISS Plasma Science and Engineering Applications
NASA Astrophysics Data System (ADS)
Minow, J. I.; Willis, E. M.; Parker, L. N.; Shim, J.; Kuznetsova, M. M.; Pulkkinen, A. A.
2013-12-01
The International Space Station (ISS) program utilizes a plasma environment forecast for estimating electrical charging hazards to crews during extravehicular activity (EVA). The process uses ionospheric electron density and temperature measurements from the ISS Floating Potential Measurement Unit (FPMU) instrument suite, with the assumption that plasma conditions will remain constant for one to fourteen days with a low probability of a space weather event that would significantly change the environment before an EVA. FPMU data are typically not available during EVAs; therefore, the most recent FPMU data available for characterizing the state of the ionosphere during an EVA are typically from a day or two before the start of the EVA or after the EVA has been completed. In addition to EVA support, information on ionospheric plasma densities is often needed to support ISS science payloads and anomaly investigations during periods when the FPMU is not operating. This presentation describes the application of space weather tools developed by MSFC using data from near-real-time satellite radio occultation and ground-based ionosonde measurements of ionospheric electron density, together with a first-principles ionosphere model providing electron density and temperature, run in real-time mode by GSFC. These applications are used to characterize the space environment during EVA periods when FPMU data are not available, monitor for large changes in ionospheric density that could render the ionosphere forecast and plasma hazard assessment invalid, and validate the assumption of 'persistence of conditions' used in deriving the hazard forecast. In addition, the tools are used to provide space environment input to science payloads on ISS and to anomaly investigations during periods when the FPMU is not operating.
Prisman, Eitan; Daly, Michael J; Chan, Harley; Siewerdsen, Jeffrey H; Vescan, Allan; Irish, Jonathan C
2011-01-01
Custom software was developed to integrate intraoperative cone-beam computed tomography (CBCT) images with endoscopic video for surgical navigation and guidance. A cadaveric head was used to assess the accuracy and potential clinical utility of the following functionality: (1) real-time tracking of the endoscope in intraoperative 3-dimensional (3D) CBCT; (2) projecting an orthogonal reconstructed CBCT image, at or beyond the endoscope, which is parallel to the tip of the endoscope corresponding to the surgical plane; (3) virtual reality fusion of endoscopic video and 3D CBCT surface rendering; and (4) overlay of preoperatively defined contours of anatomical structures of interest. Anatomical landmarks were contoured in CBCT of a cadaveric head. An experienced endoscopic surgeon was oriented to the software and asked to rate the utility of the navigation software in carrying out predefined surgical tasks. Utility was evaluated using a rating scale for: (1) safely completing the task; and (2) potential for surgical training. Surgical tasks included: (1) uncinectomy; (2) ethmoidectomy; (3) sphenoidectomy/pituitary resection; and (4) clival resection. CBCT images were updated following each ablative task. As a teaching tool, the software was evaluated as "very useful" for all surgical tasks. Regarding safety and task completion, the software was evaluated as "no advantage" for task (1), "minimal" for task (2), and "very useful" for tasks (3) and (4). Landmark identification for structures behind bone was "very useful" for both categories. The software increased surgical confidence in safely completing challenging ablative tasks by presenting real-time image guidance for highly complex ablative procedures. In addition, such technology offers a valuable teaching aid to surgeons in training. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
Transient imaging for real-time tracking around a corner
NASA Astrophysics Data System (ADS)
Klein, Jonathan; Laurenzis, Martin; Hullin, Matthias
2016-10-01
Non-line-of-sight imaging is a fascinating emerging area of research and is expected to have an impact in numerous application fields, including civilian and military sensing. Human perception and situational awareness could be extended by sensing shapes and movement around a corner in future scenarios. Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanism enabling the extraction of useful information from such reflections. So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent one. Different hardware setups have been used for the acquisition of the required data; however, all of them have severe drawbacks, such as limited image quality, long capture times, or very high prices. In this paper we propose the analysis of synthetic transient renderings to gain more insight into transient light transport. With this simulated data, we are no longer bound to the imperfect data of real systems and gain more flexibility and control over the analysis. In a second part, we use the insights of our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to formulate an inverse problem, which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it then becomes possible to track known objects outside the line of sight in real time. Owing to the forward formulation of the light transport, the algorithm is easily extendable to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back-projection approach in the future.
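A minimal analysis-by-synthesis sketch in the spirit described: a toy forward simulator produces a "transient" histogram of third-bounce path lengths, and the hidden position is recovered by minimizing the misfit over candidate positions. The geometry is greatly simplified (both wall bounces use the same relay point), so this illustrates the inversion pattern, not the paper's simulator:

    import numpy as np

    bins = np.linspace(0.0, 10.0, 200)
    laser, sensor = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    wall = np.stack([np.linspace(-1, 1, 50), np.zeros(50)], 1)  # relay wall

    def simulate(obj):          # laser -> wall -> object -> wall -> sensor
        d = (np.linalg.norm(wall - laser, axis=1)
             + np.linalg.norm(wall - obj, axis=1) * 2
             + np.linalg.norm(wall - sensor, axis=1))
        h, _ = np.histogram(d, bins=bins)
        return h / max(h.sum(), 1)

    observed = simulate(np.array([0.3, 1.5]))       # hidden object
    grid = [np.array([x, y]) for x in np.linspace(-1, 1, 21)
                              for y in np.linspace(0.5, 2.5, 21)]
    best = min(grid, key=lambda o: np.abs(simulate(o) - observed).sum())
    print(best)                 # close to (0.3, 1.5)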
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown in many cases tremendous reduction by as high as 90% in both storage space and inter-frame delay.
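The two compression ideas can be sketched on a toy volume with a flat block table standing in for the octree (an illustrative simplification): quantization fuses similar voxels, and blocks unchanged at the next time step are stored only once:

    import numpy as np

    def encode(volumes, block=4, levels=16):
        table, encoded = {}, []
        for vol in volumes:
            q = np.round(vol * (levels - 1)).astype(np.uint8)  # quantize
            keys = []
            for z in range(0, q.shape[0], block):
                b = q[z:z+block].tobytes()
                keys.append(table.setdefault(b, len(table)))   # dedupe blocks
            encoded.append(keys)
        return table, encoded

    rng = np.random.default_rng(7)
    v0 = rng.random((16, 8, 8))
    v1 = v0.copy(); v1[:4] = rng.random((4, 8, 8))   # only the front changes
    table, enc = encode([v0, v1])
    print(len(table), "unique blocks for", 2 * (16 // 4), "block slots")  # 5 for 8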
Lessons Learned from OSIRIS-REx Autonomous Navigation Using Natural Feature Tracking
NASA Technical Reports Server (NTRS)
Lorenz, David A.; Olds, Ryan; May, Alexander; Mario, Courtney; Perry, Mark E.; Palmer, Eric E.; Daly, Michael
2017-01-01
The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft is scheduled to launch in September 2016 to embark on an asteroid sample return mission. It is expected to rendezvous with the asteroid Bennu, navigate to the surface, collect a sample (July 2020), and return the sample to Earth (September 2023). The original mission design called for using one of two Flash Lidar units to provide autonomous navigation to the surface. Following preliminary design and initial development of the Lidars, reliability issues with the hardware and test program prompted the project to begin development of an alternative navigation technique to be used as a backup to the Lidar. At the critical design review, Natural Feature Tracking (NFT) was added to the mission. NFT is an onboard optical navigation system that compares observed images to a set of asteroid terrain models which are rendered in real time from a catalog stored in memory on the flight computer. Onboard knowledge of the spacecraft state is then updated by a Kalman filter using the measured residuals between the rendered reference images and the actual observed images. The asteroid terrain models used by NFT are built from a shape model generated from observations collected during earlier phases of the mission and include both terrain shape and albedo information about the asteroid surface. As a result, the success of NFT is highly dependent on selecting a set of topographic features that can both be identified during descent and be reliably rendered using the available shape model data. During development, the OSIRIS-REx team faced significant challenges in developing a process conducive to robust operation. This was especially true for terrain models to be used as the spacecraft gets close to the asteroid, where higher-fidelity models are required for reliable image correlation. This paper presents some of the challenges and lessons learned from the development of the NFT system, which includes not just the flight hardware and software but also the development of the terrain models used to generate the onboard rendered images.
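The filter step can be sketched in one dimension: the residual between a rendered reference feature and the observed feature updates onboard state knowledge. The numbers are made up for illustration, not flight parameters:

    import numpy as np

    x, P = 1000.0, 25.0**2        # predicted altitude (m) and its variance
    H, Rm = 1.0, 10.0**2          # measurement model and noise variance
    z = 965.0                     # feature-derived altitude measurement

    K = P * H / (H * P * H + Rm)  # Kalman gain
    x = x + K * (z - H * x)       # state update from the image residual
    P = (1 - K * H) * P
    print(f"updated altitude {x:.1f} m, sigma {np.sqrt(P):.1f} m")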
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional (4D) data, 3D in space plus time, is too large for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also, the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphics cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time-series 3D geospatial data for use in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data from a simulated dust storm event are rendered in World Wind, an open-source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
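A view-dependent continuous LOD traversal of such an octree typically reduces to a screen-space error test per node. The sketch below is a heavily simplified stand-in for the paper's strategy; the node layout, error metric, and pixel tolerance are all assumptions:

```python
import numpy as np

def select_nodes(node, cam_pos, px_per_rad, tau=2.0):
    """Collect octree nodes to render: stop refining once a node's
    geometric error, projected onto the screen, drops below tau pixels.
    Each node is a dict with 'center', 'error', and 'children' keys."""
    dist = max(np.linalg.norm(np.asarray(node['center']) - cam_pos), 1e-9)
    if node['error'] / dist * px_per_rad <= tau or not node['children']:
        return [node]                      # this level of detail suffices
    picked = []
    for child in node['children']:
        picked += select_nodes(child, cam_pos, px_per_rad, tau)
    return picked

# toy two-level tree: one coarse root block with eight finer children
root = {'center': (0, 0, 0), 'error': 10.0,
        'children': [{'center': (x, y, z), 'error': 2.5, 'children': []}
                     for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]}
near = select_nodes(root, np.array([0., 0., 5.]), px_per_rad=500)
far = select_nodes(root, np.array([0., 0., 5000.]), px_per_rad=500)
print(len(near), len(far))   # refines nearby (8 nodes), keeps the root when far (1)
```

Bounding the projected error per frame is what lets weaker clients trade resolution for a steady frame rate while data streams in from remote servers.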
Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects
NASA Astrophysics Data System (ADS)
Beddiaf, Ali; Babahenini, Mohamed Chaouki
2018-03-01
Recent interactive rendering approaches aim to produce images efficiently. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly supported or not supported at all). To remedy this issue, we propose in this paper a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no transformation of particles into a grid is required. This feature enables it to handle many particle types (water, bubbles, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.
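The full light transport described (multiple refractions plus scattering) is beyond a short sketch, but the grid-free premise, estimating the medium's density directly from the particle set while marching a ray, can be illustrated. The kernel shape, coefficients, and the crude in-scattering term below are assumptions, not the paper's formulation:

```python
import numpy as np

def particle_density(p, particles, h=0.5):
    """Kernel density estimate taken directly from the particle set;
    no intermediate voxel grid is ever built."""
    r2 = ((particles - p) ** 2).sum(axis=1)
    return (np.clip(1.0 - r2 / (h * h), 0.0, None) ** 3).sum()

def march_ray(origin, direction, particles, sigma_t=1.5, ds=0.1, steps=64):
    """Beer-Lambert attenuation through the particle medium; refraction
    at boundaries and multiple scattering are omitted in this sketch."""
    trans, radiance = 1.0, 0.0
    for i in range(steps):
        p = origin + (i + 0.5) * ds * direction
        rho = particle_density(p, particles)
        radiance += trans * rho * ds        # crude in-scattering stand-in
        trans *= np.exp(-sigma_t * rho * ds)
    return radiance, trans                  # trans < 1 where fluid was hit

rng = np.random.default_rng(2)
cloud = rng.normal(0.0, 0.6, size=(500, 3))           # a blob of fluid particles
print(march_ray(np.array([0., 0., -3.]), np.array([0., 0., 1.]), cloud))
```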
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively special area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets and, at the time of writing, have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
Visualization and simulation techniques for surgical simulators using actual patient's data.
Radetzky, Arne; Nürnberger, Andreas
2002-11-01
Because of the increasing complexity of surgical interventions, research in surgical simulation has become more and more important in recent years. However, the simulation of tissue deformation is still a challenging problem, mainly due to the short response times that are required for real-time interaction. The demands on hardware and software are even greater if not only modeled human anatomy is used but the anatomy of actual patients, which is required if the surgical simulator is to serve as a training medium for expert surgeons rather than students. In this article, suitable visualization and simulation methods for surgical simulation utilizing actual patients' datasets are described. The advantages and disadvantages of direct and indirect volume rendering for the visualization are discussed, and a neuro-fuzzy system is described that can be used for the simulation of interactive tissue deformations. The neuro-fuzzy system makes it possible to define the deformation behavior based on a linguistic description of the tissue characteristics or to learn the dynamics by using measured data of real tissue. Furthermore, a simulator for minimally invasive neurosurgical interventions is presented that utilizes the described visualization and simulation methods. The structure of the simulator is described in detail and the results of a system evaluation by an experienced neurosurgeon--a quantitative comparison between different methods of virtual endoscopy as well as a comparison between real brain images and virtual endoscopies--are given. The evaluation showed that the simulator provides a higher realism of the visualization and simulation than other currently available simulators. Copyright 2002 Elsevier Science B.V.
Ntourakis, Dimitrios; Memeo, Ricardo; Soler, Luc; Marescaux, Jacques; Mutter, Didier; Pessaux, Patrick
2016-02-01
Modern chemotherapy achieves the shrinking of colorectal cancer liver metastases (CRLM) to such an extent that they may disappear from radiological imaging. Disappearing CRLM rarely represent a complete pathological remission and carry an important risk of recurrence. Augmented reality (AR) consists of the fusion of real-time patient images with a computer-generated 3D virtual patient model created from pre-operative medical imaging. The aim of this prospective pilot study was to investigate the potential of AR navigation as a tool to help locate and surgically resect missing CRLM. A 3D virtual anatomical model was created from thoracoabdominal CT scans using dedicated software (VR RENDER(®), IRCAD). The virtual model was superimposed onto the operative field using an exoscope (VITOM(®), Karl Storz, Tuttlingen, Germany). Virtual and real images were manually registered in real-time using a video mixer, based on external anatomical landmarks, with an estimated accuracy of 5 mm. This modality was tested in three patients, with four missing CRLM ranging from 12 to 24 mm in size, undergoing laparotomy after receiving pre-operative oxaliplatin-based chemotherapy. AR display and fine registration were performed within 6 min. AR helped detect all four missing CRLM and guided their resection. In all cases the planned safety margin of 1 cm was clear and resections were confirmed to be R0 by pathology. There was no postoperative major morbidity or mortality. No local recurrence occurred in the follow-up period of 6-22 months. This initial experience suggests that AR may be a helpful navigation tool for the resection of missing CRLM.
High-quality and interactive animations of 3D time-varying vector fields.
Helgeland, Anders; Elboth, Thomas
2006-01-01
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the perceptual issues that arise when visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, the particles are used as seed points to generate field lines from any vector field, such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
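The backbone of such an animation is path-line integration of the seed particles plus a density-maintenance pass. The sketch below substitutes a simple distance-based prune-and-reseed step for the paper's more elaborate even-spacing strategy; all constants and the toy swirl field are assumptions:

```python
import numpy as np

def rk4_advect(pos, vel, t, dt):
    """Fourth-order Runge-Kutta step along each particle's path line."""
    k1 = vel(pos, t)
    k2 = vel(pos + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = vel(pos + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = vel(pos + dt * k3, t + dt)
    return pos + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def prune_and_reseed(pts, min_d=0.05, lo=-1.0, hi=1.0, rng=None):
    """Drop particles that drifted closer than min_d to an earlier one,
    then reseed the same count uniformly to keep coverage roughly even."""
    rng = rng or np.random.default_rng()
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if keep[i]:
            d = np.linalg.norm(pts[i+1:] - pts[i], axis=1)
            keep[i+1:] &= d >= min_d
    fresh = rng.uniform(lo, hi, size=(int((~keep).sum()), pts.shape[1]))
    return np.vstack([pts[keep], fresh])

# toy unsteady 2D swirl: advect, then maintain density each step
vel = lambda p, t: np.stack([-p[:, 1], p[:, 0]], axis=1) * (1 + 0.2 * np.sin(t))
pts = np.random.default_rng(3).uniform(-1, 1, (200, 2))
for step in range(100):
    pts = rk4_advect(pts, vel, t=step * 0.05, dt=0.05)
    pts = prune_and_reseed(pts)
```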
Creating a Prototype Web Application for Spacecraft Real-Time Data Visualization on Mobile Devices
NASA Technical Reports Server (NTRS)
Lang, Jeremy S.; Irving, James R.
2014-01-01
Mobile devices (smartphones, tablets) have become commonplace among almost all sectors of the workforce, especially in the technical and scientific communities. These devices provide individuals the ability to be constantly connected to any area of interest they may have, whenever and wherever they are located. The Huntsville Operations Support Center (HOSC) is attempting to take advantage of this constant connectivity to extend the data visualization component of the Payload Operations and Integration Center (POIC) to a person's mobile device. POIC users currently have a rather unique capability to create custom user interfaces in order to view International Space Station (ISS) payload health and status telemetry. These displays are used at various console positions within the POIC. The Software Engineering team has created a Mobile Display capability that allows authenticated users to view the same displays created for the console positions on the mobile device of their choice. Utilizing modern technologies including ASP.NET, JavaScript, and HTML5, we have created a web application that renders the user's displays in any modern desktop or mobile web browser, regardless of the operating system on the device. Additionally, the application is device-aware, which enables it to render its configuration and selection menus with themes that correspond to the particular device. The Mobile Display application uses a communication mechanism known as SignalR to push updates to the web client. This communication mechanism automatically detects the best communication protocol between the client and server and also manages disconnections and reconnections of the client to the server. One benefit of this application is that the user can monitor important telemetry even while away from their console position. If expanded to the scientific community, this application would allow a scientist to view a snapshot of the state of their particular experiment at any time or place. Because the web application renders the displays that can currently be created with the POIC ground system, the user can tailor their displays for a particular device using tools that they are already trained to use.
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in aggravated security scenarios over recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. A larger accumulation of humans than the number of personnel authorized to visit a security-controlled area must be effectively detected, promptly alarmed, and immediately monitored. A framework involving a novel combination of some existing techniques allows an immediate detection of an undesirable crowd in a region under observation. Frame differencing provides clear visibility of moving objects by highlighting those objects in each frame acquired by a real-time camera. Training a correlation pattern recognition based filter on desired shapes, such as elliptical representations of human faces (variants of an omega shape), yields correct detections. The inherent ability of correlation pattern recognition filters caters for angular rotations of the target object and renders a decision regarding whether the number of persons in the monitored area exceeds the allowed figure.
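Two of the named building blocks are compact enough to sketch. Everything below (array shapes, thresholds, and the assumption that the MACH filter has already been synthesized and is supplied in the frequency domain) is illustrative rather than taken from the paper:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Frame differencing: flag pixels whose grey-level change between
    consecutive frames exceeds the threshold as moving objects."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def mach_response(region, mach_freq):
    """Correlate a candidate region with a pre-synthesized MACH filter
    in the frequency domain; a sharp peak indicates an omega shape.
    `region` and `mach_freq` are assumed to have the same shape."""
    C = np.fft.ifft2(np.fft.fft2(region) * np.conj(mach_freq))
    return np.abs(C).max()

# usage outline: count moving regions whose correlation peak clears a
# threshold and raise an alarm when that count exceeds the number of
# persons authorized for the monitored area
```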
ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.
Orlov, Pavel A; Bednarik, Roman
2016-09-01
The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of visual gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments. It also provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
EHDViz: clinical dashboard development using open-source technologies.
Badgeley, Marcus A; Shameer, Khader; Glicksberg, Benjamin S; Tomlinson, Max S; Levin, Matthew A; McCormick, Patrick J; Kasarskis, Andrew; Reich, David L; Dudley, Joel T
2016-03-24
To design, develop and prototype clinical dashboards to integrate high-frequency health and wellness data streams using interactive and real-time data visualisation and analytics modalities. We developed a clinical dashboard development framework called electronic healthcare data visualization (EHDViz) toolkit for generating web-based, real-time clinical dashboards for visualising heterogeneous biomedical, healthcare and wellness data. The EHDViz is an extensible toolkit that uses R packages for data management, normalisation and producing high-quality visualisations over the web using the R/Shiny web server architecture. We have developed use cases to illustrate the utility of EHDViz in different clinical and wellness settings as a visualisation aid for improving healthcare delivery. Using EHDViz, we prototyped clinical dashboards to demonstrate the contextual versatility of the EHDViz toolkit. An outpatient cohort was used to visualise population health management tasks (n=14,221), an inpatient cohort was used to visualise real-time acuity risk in a clinical unit (n=445), and a quantified-self example using wellness data from a fitness activity monitor worn by a single individual was also discussed (n-of-1). The back-end system retrieves relevant data from the data source, populates the main panel of the application and integrates user-defined data features in real-time, rendering output using modern web browsers. The visualisation elements can be customised using health features, disease names, procedure names or medical codes to populate the visualisations. The source code of EHDViz and various prototypes developed using EHDViz are available in the public domain at http://ehdviz.dudleylab.org. Collaborative data visualisations, wellness trend predictions, risk estimation, proactive acuity status monitoring and knowledge of complex disease indicators are essential components of implementing data-driven precision medicine. As an open-source visualisation framework capable of integrating health assessment, EHDViz aims to be a valuable toolkit for rapid design, development and implementation of scalable clinical data visualisation dashboards. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Horn, M; Nolde, J; Goltz, J P; Barkhausen, J; Schade, W; Waltermann, C; Modersitzki, J; Olesch, J; Papenberg, N; Keck, T; Kleemann, M
2015-10-01
Over the last decade, endovascular stenting of aortic aneurysms (EVAR) has developed from single-centre experience into a standard procedure. With increasing clinical expertise and advances in medical technology, treatment of even complex aneurysms is feasible by endovascular methods. One integral part of the success of this minimally invasive procedure is innovative and improved vascular imaging to generate exact measurements and correct placement of the stent prosthesis. One of the greatest difficulties in learning and performing this endovascular therapy is the fact that the three-dimensional vascular tree has to be overlaid with the two-dimensional angiographic scene by the vascular surgeon. We report the development of real-time navigation software which allows a three-dimensional endoluminal view of the vascular system during an EVAR procedure in patients with infrarenal aortic aneurysm. We used the preoperative planning CT angiography for three-dimensional reconstruction of the aortic anatomy by volume-rendered segmentation. At the beginning of the intervention, the relevant landmarks are matched in real-time with the two-dimensional angiographic scene. During the intervention, the software continuously registers the position of the guide-wire or the stent. An additional 3D screen shows the generated endoluminal view throughout the intervention in real-time. We examined the combination of hardware and software components, including complex image registration and fibre-optic sensor technology (fibre-Bragg navigation) with integration into stent graft introducer sheaths, using patient-specific vascular phantoms in an experimental setting. From a technical point of view, the feasibility of fibre-Bragg navigation has been proven in our experimental setting with patient-based vascular models. Three-dimensional preoperative planning, including registration and simulation of virtual angioscopy in real time, has been realised. The aim of the Nav-CARS-EVAR concept is the reduction of contrast medium and radiation dose through three-dimensional navigation during the EVAR procedure. To implement fibre-Bragg navigation, further experimental studies are necessary to verify accuracy before clinical application. Georg Thieme Verlag KG Stuttgart · New York.
Mitochondrial Dynamics Tracking with Two-Photon Phosphorescent Terpyridyl Iridium(III) Complexes
NASA Astrophysics Data System (ADS)
Huang, Huaiyi; Zhang, Pingyu; Qiu, Kangqiang; Huang, Juanjuan; Chen, Yu; Ji, Liangnian; Chao, Hui
2016-02-01
Mitochondrial dynamics, including fission and fusion, control the morphology and function of mitochondria, and disruption of mitochondrial dynamics leads to Parkinson’s disease, Alzheimer’s disease, metabolic diseases, and cancers. Currently, many types of commercial mitochondria probes are available, but high excitation energy and low photo-stability render them unsuitable for tracking mitochondrial dynamics in living cells. Therefore, mitochondrial targeting agents that exhibit superior anti-photo-bleaching ability, deep tissue penetration and intrinsically high three-dimensional resolutions are urgently needed. Two-photon-excited compounds that use low-energy near-infrared excitation lasers have emerged as non-invasive tools for cell imaging. In this work, terpyridyl cyclometalated Ir(III) complexes (Ir1-Ir3) are demonstrated as one- and two-photon phosphorescent probes for real-time imaging and tracking of mitochondrial morphology changes in living cells.
Lee, W Anthony
2007-01-01
The gold standard for preoperative evaluation of an aortic aneurysm is a computed tomography angiogram (CTA). Three-dimensional reconstruction and analysis of the computed tomography data set is enormously helpful, and even sometimes essential, in proper sizing and planning for endovascular stent graft repair. To a large extent, it has obviated the need for conventional angiography for morphologic evaluation. The TeraRecon Aquarius workstation (San Mateo, Calif) represents a highly sophisticated but user-friendly platform utilizing a combination of task-specific hardware and software specifically designed to rapidly manipulate large Digital Imaging and Communications in Medicine (DICOM) data sets and provide surface-shaded and multiplanar renderings in real-time. This article discusses the basics of sizing and planning for endovascular abdominal aortic aneurysm repair and the role of 3-dimensional analysis using the TeraRecon workstation.
Web GIS in practice VIII: HTML5 and the canvas element for interactive online mapping.
Boulos, Maged N Kamel; Warren, Jeffrey; Gong, Jianya; Yue, Peng
2010-03-03
HTML5 is being developed as the next major revision of HTML (Hypertext Markup Language), the core markup language of the World Wide Web. It aims at reducing the need for proprietary, plug-in-based rich Internet application (RIA) technologies such as Adobe Flash. The canvas element is part of HTML5 and is used to draw graphics using scripting (e.g., JavaScript). This paper introduces Cartagen, an open-source, vector-based, client-side framework for rendering plug-in-free, offline-capable, interactive maps in native HTML5 on a wide range of Web browsers and mobile phones. Cartagen was developed at MIT Media Lab's Design Ecology group. Potential applications of the technology as an enabler for participatory online mapping include mapping real-time air pollution, citizen reporting, and disaster response, among many other possibilities.
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network; that is, the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifacts due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
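At its core, the first compression level is a per-pixel fit of SRBF weights to the sampled radiance. Below is a minimal ridge-regularized least-squares version; the Gaussian-lobe basis, the regularization weight standing in for the paper's constrained estimator, and the toy data are assumptions:

```python
import numpy as np

def srbf_basis(dirs, centers, sharpness=8.0):
    """Spherical Gaussian lobes evaluated at unit light directions."""
    return np.exp(sharpness * (dirs @ centers.T - 1.0))

def fit_srbf_weights(dirs, radiance, centers, lam=1e-3):
    """Fit per-pixel SRBF weights to sampled radiance values; the ridge
    term lam stands in for the paper's constrained estimation that
    suppresses quantization-noise artifacts."""
    A = srbf_basis(dirs, centers)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ radiance)

rng = np.random.default_rng(4)
dirs = rng.normal(size=(200, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
centers = rng.normal(size=(16, 3)); centers /= np.linalg.norm(centers, axis=1, keepdims=True)
radiance = np.clip(dirs @ np.array([0.0, 0.0, 1.0]), 0, None)  # one toy pixel
w = fit_srbf_weights(dirs, radiance, centers)                  # 16 weights per pixel
recon = srbf_basis(dirs, centers) @ w                          # relit samples
```

In the second level, these per-pixel weight vectors, rather than the raw radiance samples, would then be compressed with the wavelet-based method.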
Use of the World Wide Web for multisite data collection.
Subramanian, A K; McAfee, A T; Getzinger, J P
1997-08-01
As access to the Internet becomes increasingly available, research applications in medicine will increase. This paper describes the use of the Internet, and, more specifically, the World Wide Web (WWW), as a channel of communication between EDs throughout the world and investigators who are interested in facilitating the collection of data from multiple sites. Data entered into user-friendly electronic surveys can be transmitted over the Internet to a database located at the site of the study, rendering geographic separation less of a barrier to the conduct of multisite studies. The electronic format of the data can enable real-time statistical processing while data are stored using existing database technologies. In theory, automated processing of variables within such a database enables early identification of data trends. Methods of ensuring validity, security, and compliance are discussed.
Evaluation of a hyperspectral image database for demosaicking purposes
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Süsstrunk, Sabine
2011-01-01
We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluating the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluating the performance of demosaicking algorithms applied to the final sRGB color-rendered image. The second scenario is the most frequently used one in the literature because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using the CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, indeed differs significantly depending on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
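For orientation, the baseline operations in such an evaluation (mosaicking with an RGGB Bayer CFA, bilinear demosaicking, and CPSNR scoring) can be sketched as follows. The kernel and pattern layout are the standard textbook choices, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer pattern from a full-colour image."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 3))
    mask[0::2, 0::2, 0] = 1           # R
    mask[0::2, 1::2, 1] = 1           # G
    mask[1::2, 0::2, 1] = 1           # G
    mask[1::2, 1::2, 2] = 1           # B
    return (rgb * mask).sum(2), mask

def bilinear_demosaic(cfa, mask):
    """Normalized-convolution bilinear interpolation per channel."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    out = np.zeros(mask.shape)
    for c in range(3):
        num = convolve(cfa * mask[..., c], k, mode='mirror')
        den = convolve(mask[..., c], k, mode='mirror')
        out[..., c] = num / den
    return out

def cpsnr(a, b, peak=1.0):
    """Composite PSNR over all three colour channels."""
    return 10 * np.log10(peak ** 2 / np.mean((a - b) ** 2))

rgb = np.random.default_rng(5).random((64, 64, 3))   # stand-in test image
cfa, mask = bayer_mosaic(rgb)
print(cpsnr(bilinear_demosaic(cfa, mask), rgb))
```

The paper's point is that running this loop on camera raw values versus on already-rendered sRGB images leads to measurably different rankings of demosaicking algorithms.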
A Single Swede Midge (Diptera: Cecidomyiidae) Larva Can Render Cauliflower Unmarketable.
Stratton, Chase A; Hodgdon, Elisabeth A; Zuckerman, Samuel G; Shelton, Anthony M; Chen, Yolanda H
2018-05-01
Swede midge, Contarinia nasturtii Kieffer (Diptera: Cecidomyiidae), is an invasive pest causing significant damage on Brassica crops in the Northeastern United States and Eastern Canada. Heading brassicas, like cauliflower, appear to be particularly susceptible. Swede midge is difficult to control because larvae feed concealed inside meristematic tissues of the plant. In order to develop damage and marketability thresholds necessary for integrated pest management, it is important to determine how many larvae render plants unmarketable and whether the timing of infestation affects the severity of damage. We manipulated larval density (0, 1, 3, 5, 10, or 20) per plant and the timing of infestation (30, 55, and 80 d after seeding) on cauliflower in the lab and field to answer the following questions: 1) What is the swede midge damage threshold? 2) How many swede midge larvae can render cauliflower crowns unmarketable? and 3) Does the age of cauliflower at infestation influence the severity of damage and marketability? We found that even a single larva can cause mild twisting and scarring in the crown rendering cauliflower unmarketable 52% of the time, with more larvae causing more severe damage and additional losses, regardless of cauliflower age at infestation.
The coordinate-based meta-analysis of neuroimaging data.
Samartsidis, Pantelis; Montagna, Silvia; Nichols, Thomas E; Johnson, Timothy D
2017-01-01
Neuroimaging meta-analysis is an area of growing interest in statistics. The special characteristics of neuroimaging data render classical meta-analysis methods inapplicable and therefore new methods have been developed. We review existing methodologies, explaining the benefits and drawbacks of each. A demonstration on a real dataset of emotion studies is included. We discuss some still-open problems in the field to highlight the need for future research.
Standardized rendering from IR surveillance motion imagery
NASA Astrophysics Data System (ADS)
Prokoski, F. J.
2014-06-01
Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations, similar to police artist sketches, for faces in surveillance imagery collected at locations and times proximate to a crime under investigation. Near-realtime generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and to integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as not to divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.
Real-time 3D image reconstruction guidance in liver resection surgery
Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-01-01
Background: Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods: From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results: From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in gaining safety, but also the current limits that automatic augmented reality will overcome. Conclusions: Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the Hybrid OR. PMID:24812598
Ultrafast dark-field surface inspection with hybrid-dispersion laser scanning
NASA Astrophysics Data System (ADS)
Yazaki, Akio; Kim, Chanju; Chan, Jacky; Mahjoubfar, Ata; Goda, Keisuke; Watanabe, Masahiro; Jalali, Bahram
2014-06-01
High-speed surface inspection plays an important role in industrial manufacturing, safety monitoring, and quality control. It is desirable to go beyond the speed limitation of current technologies to reduce manufacturing costs and open a new window onto a class of applications that require high-throughput sensing. Here, we report a high-speed dark-field surface inspector for detection of micrometer-sized defects on surfaces that travel at record speeds as high as a few kilometers per second. This method is based on a modified time-stretch microscope that illuminates the surface of a fast-moving object with temporally and spatially dispersed laser pulses and detects light scattered from surface defects with a sensitive photodetector in a dark-field configuration. The inspector's ability to perform ultrafast dark-field surface inspection enables real-time identification of difficult-to-detect features on weakly reflecting surfaces and hence renders the method much more practical than the previously demonstrated bright-field configuration. Consequently, our inspector provides nearly 1000 times higher scanning speed than conventional inspectors. To show our method's broad utility, we demonstrate real-time inspection of the surfaces of various objects (a non-reflective black film, transparent flexible film, and reflective hard disk) for detection of 10 μm or smaller defects on a target moving at 20 m/s, within a scan width of 25 mm at a scan rate of 90.9 MHz. Our method holds promise for improving the cost and performance of organic light-emitting diode displays for next-generation smart phones, lithium-ion batteries for green electronics, and high-efficiency solar cells.
Support for Online Calibration in the ALICE HLT Framework
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker;
2017-10-01
The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g. the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020, when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop in order to apply calibration information to the online track reconstruction. Online calibration and the feedback loop are subject to certain time constraints in order to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Our approach of running these tasks in asynchronous processes enables us to separate them from normal data taking in a way that makes it failure-resilient. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on it, as well as our current scheme to commission this for production use.
Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine
NASA Astrophysics Data System (ADS)
Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.
2017-12-01
Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal the variation processes and trends of geographical phenomena vividly and comprehensively. Dynamic visualization of both 2D and 3D spatial targets, especially across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach to improving a rendering engine handling vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented based on hybrid acceleration techniques. The rendering engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly even with vast dynamic target data. A prototype of the high-performance GIS dynamic objects rendering engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the rendering engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is roughly 20 times faster on the GPU than on the CPU.
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579
UTOPIA-User-Friendly Tools for Operating Informatics Applications.
Pettifer, S R; Sinnott, J R; Attwood, T K
2004-01-01
Bioinformaticians routinely analyse vast amounts of information held both in large remote databases and in flat data files hosted on local machines. The contemporary toolkit available for this purpose consists of an ad hoc collection of data manipulation tools, scripting languages and visualization systems; these must often be combined in complex and bespoke ways, the result frequently being an unwieldy artefact capable of one specific task, which cannot easily be exploited or extended by other practitioners. Owing to the sizes of current databases and the scale of the analyses necessary, routine bioinformatics tasks are often automated, but many still require the unique experience and intuition of human researchers: this requires tools that support real-time interaction with complex datasets. Many existing tools have poor user interfaces and limited real-time performance when applied to realistically large datasets; much of the user's cognitive capacity is therefore focused on controlling the tool rather than on performing the research. The UTOPIA project is addressing some of these issues by building reusable software components that can be combined to make useful applications in the field of bioinformatics. Expertise in the fields of human computer interaction, high-performance rendering, and distributed systems is being guided by bioinformaticians and end-user biologists to create a toolkit that is both architecturally sound from a computing point of view, and directly addresses end-user and application-developer requirements.
Zepeda, Angélica; Arias, Clorinda; Flores-Jasso, Fabian; Vaca, Luis
2013-01-01
RNAs are present within eukaryotic cells and are involved in several biological processes. RNA transport within cell compartments is important for proper cell function. Understanding in depth the cellular processes in which RNA is involved requires a method that reveals RNA localization in real time, in a sub-cellular context, in living cells. In this protocol we describe a method for imaging RNA in living cells, and in particular in neuronal cultures, based on cell microinjection of molecular beacons (MBs) in conjunction with confocal microscopy. This methodology overcomes some of the main obstacles to imaging RNA in live cells, since microinjection allows the delivery of the probe to a desired cellular compartment and MBs bind with high specificity to their target RNA without inhibiting its function. Proper design of the MBs is essential to obtain RNA-MB association at the temperature of the cell cytosol. MBs designed with other purposes in mind (such as PCR experiments) associate with their targets at high temperatures, rendering them unsuitable for live-cell imaging. The methodology described in this chapter allows the study of RNA transport to different regions of neurons and may be combined with the tagging of proteins of interest to measure co-transport of the protein and the RNA to different cellular regions. Copyright © 2013 Elsevier Inc. All rights reserved.
Can natural and virtual environments be used to promote improved human health and wellbeing?
Depledge, M H; Stone, R J; Bird, W J
2011-06-01
Exposure of individuals to natural environments, such as forests and coastlines, can promote stress reduction and assist in mental recovery following intensive cognitive activities. Settings as simple as hospital window views onto garden-like scenes can also be influential in reducing patients' postoperative recovery periods and analgesic requirements. This paper reviews the evidence supporting the exploitation of these restorative natural environments in future healthcare strategies. The paper also describes early research addressing the development of multisensory, computer-generated restorative environments for the benefit of patients with a variety of psychologically related conditions (including depression, attention deficit disorder, pain, and sleep deficit), who may be unable to access and experience real natural environments, such as those in hospices, military rehabilitation centers, and long-term care facilities. The Table of Contents art is a virtual reconstruction of Wembury Bay, in the southwest of the UK, based on imported Digital Terrain Elevation Data (DTED) to provide the topography and a high-resolution aerial image to provide a template for the location of 3D building and vegetation models, rock features, and pathways. The 3D environment is rendered using the Unity 3 Game Development Tool and includes spatial sound effects (waves, wind, birdsong, etc.), physics-based features (such as early morning sea mist), time-of-day cycles, and real-time weather changes. The Village Church of St. Werburgh can also be seen in this image.
Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants
NASA Technical Reports Server (NTRS)
Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.
1996-01-01
Progress and results in the development of an integrated air quality modeling, monitoring, fault detection, and isolation system are presented. The focus was on the development of distributed models of air contaminant transport, the study of air quality monitoring techniques based on the transport process model and on-line contaminant concentration measurements, and sensor placement. Different approaches to the modeling of spacecraft air contamination are discussed, and a three-dimensional distributed-parameter air contaminant dispersion model applicable to both laminar and turbulent transport is proposed. A two-dimensional approximation of the full-scale transport model is also proposed, based on spatially averaging the three-dimensional model over the least important space coordinate. A computer implementation of the transport model is considered, and a detailed development of the two- and three-dimensional models, illustrated by contaminant transport simulation results, is presented. The use of the well-established Kalman filtering approach is suggested as a method for generating on-line contaminant concentration estimates based on both real-time measurements and the model of the contaminant transport process. It is shown that the high computational requirements of the traditional Kalman filter can render its real-time implementation difficult for a high-dimensional transport model, and a novel implicit Kalman filtering algorithm is proposed which is shown to lead to an order of magnitude faster computer implementation in the case of air quality monitoring.
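The monitoring step reduces to a standard Kalman predict/update cycle on the discretized transport model. A dense textbook version is sketched below with conventional matrix names; it is precisely the covariance propagation in this sketch whose cost motivates the paper's implicit formulation:

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    """One cycle of air-quality state estimation: predict contaminant
    concentrations x with the transport model A, then correct them with
    on-line sensor readings z taken at locations encoded in H."""
    x = A @ x                        # predict state
    P = A @ P @ A.T + Q              # predict covariance (the costly part)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # correct with the measurement residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For a state of n grid-cell concentrations, the dense covariance update grows roughly as n cubed per step, which is what makes the traditional filter hard to run in real time for a fine transport grid.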
Multimodality Instrument for Tissue Characterization
NASA Technical Reports Server (NTRS)
Mah, Robert W. (Inventor); Andrews, Russell J. (Inventor)
2000-01-01
A system with a multimodality instrument for tissue identification, comprising a computer-controlled, motor-driven heuristic probe with a multisensory tip, is discussed. For neurosurgical applications, the instrument is mounted on a stereotactic frame so that the probe penetrates the brain in a precisely controlled fashion. The resistance of the brain tissue being penetrated is continually monitored by a miniaturized strain gauge attached to the probe tip. Other modality sensors may be mounted near the probe tip to provide real-time tissue characterizations and the ability to detect the proximity of blood vessels, thus eliminating errors normally associated with registration of pre-operative scans, tissue swelling, elastic tissue deformation, human judgement, etc., and rendering surgical procedures safer, more accurate, and more efficient. A neural network program adaptively learns the information on resistance and other characteristic features of normal brain tissue during the surgery and provides near real-time modeling. A fuzzy logic interface to the neural network program incorporates expert medical knowledge in the learning process. Identification of abnormal brain tissue is determined by the detection of change and comparison with previously learned models of abnormal brain tissues. The operation of the instrument is controlled through a user-friendly graphical interface. Patient data is presented in a 3D stereographics display. Acoustic feedback of selected information may optionally be provided. Upon detection of close proximity to blood vessels or abnormal brain tissue, the computer-controlled motor immediately stops probe penetration.
Aoki, Eduardo Massaharu; Cortes, Arthur Rodriguez Gonzalez; Arita, Emiko Saito
2015-01-01
The aim of the current technical report was to introduce a computed tomographic (CT) application for mobile devices as a diagnostic tool for analyzing CT images. An iPad and an iPhone (Apple, Cupertino, CA) were used to navigate through multiplanar reconstructions of cone beam CT scans, using an application derived from the OsiriX CT software. Tools and advantages of this method were recorded. In addition, images rendered on the iPad were manipulated during dental implant placement and grafting procedures to follow up and confirm the implant digital planning in real time. The study population consisted of 10 patients. In all cases, it was possible to use image manipulation tools, such as changing contrast and brightness, zooming, rotating, panning, performing both linear and area measurements, and analyzing gray-scale values of a region of interest. Furthermore, it was possible to use the OsiriX application in the dental clinic where the study was conducted to follow up the analyzed implant placement and grafting procedures at the chairside. The current findings suggest that technological and practical methods to visualize radiographic images are invaluable resources to improve training, teaching, networking, and the performance of real-time follow-up of oral and maxillofacial surgical procedures. This article discusses the advantages and disadvantages of introducing this new technology into the clinical routine.
40 CFR 164.91 - Accelerated decision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... decision. (a) General. The Administrative Law Judge, in his discretion, may at any time render an accelerated decision in favor of Respondent as to all or any portion of the proceeding, including dismissal... matter of law; or (8) Such other and further reasons as are just. (b) Effect. A decision rendered under...
Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge
2008-01-01
This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Toolkit) software. Implementation of the developed techniques consists of a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes, with the use of a PHANToM device, in a bone model derived from real CT scan data.
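Two of the described steps are simple to sketch with standard libraries. The spherical drill tip and the toy volume below are assumptions, and marching cubes is invoked through scikit-image rather than through the authors' implementation:

```python
import numpy as np
from skimage import measure

def drill(bone, center, radius):
    """Material removal: Boolean subtraction of a spherical tool tip
    from the voxelized bone volume (1 = bone, 0 = removed/air)."""
    zz, yy, xx = np.indices(bone.shape)
    tool = ((xx - center[0]) ** 2 + (yy - center[1]) ** 2 +
            (zz - center[2]) ** 2) <= radius ** 2
    bone[tool] = 0
    return bone

bone = np.ones((32, 32, 32), dtype=np.uint8)        # toy bone block
bone = drill(bone, center=(16, 16, 4), radius=6)    # one drilling step
# extract a renderable polygonal surface from the updated volume
verts, faces, normals, values = measure.marching_cubes(bone.astype(float), 0.5)
```

In the actual system the extracted polygons would be handed to VTK for display, with the quadtree-based adaptive subdivision limiting how much of the volume must be re-polygonized after each drilling step.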
Human-scale interaction for virtual model displays: a clear case for real tools
NASA Astrophysics Data System (ADS)
Williams, George C.; McDowall, Ian E.; Bolas, Mark T.
1998-04-01
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch- sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
Modeling Images of Natural 3D Surfaces: Overview and Potential Applications
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre; Kuehnel, Frank; Stutz, John
2004-01-01
Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.
Chromium: A Stream-Processing Framework for Interactive Rendering on Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, G.; Houston, M.; Ng, Y.-R.
2002-01-11
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
A weighted optimization approach to time-of-flight sensor fusion.
Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger
2014-01-01
Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution based on the combination of depth and texture sources. Unlike other texture-guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our time-of-flight super resolution approach for depth scenery capture.
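The optimization view can be illustrated with a single texture-derived weight; the paper combines three (depth edges, sensor noise, temporal consistency). The Jacobi-style solver and every constant below are assumptions rather than the authors' formulation:

```python
import numpy as np

def upscale_depth(d_low, texture, scale, iters=200, lam=0.2):
    """Weighted-energy depth upscaling: a data term pins pixels where a
    time-of-flight sample exists; a texture-weighted smoothness term
    propagates depth across the remaining pixels."""
    h, w = texture.shape
    depth = np.kron(d_low, np.ones((scale, scale)))[:h, :w]   # nearest init
    has_sample = np.zeros((h, w), dtype=bool)
    has_sample[::scale, ::scale] = True
    sample = depth.copy()

    gy, gx = np.gradient(texture)
    wgt = np.exp(-5.0 * np.hypot(gx, gy))   # low weight across texture edges

    nbrs = [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    for _ in range(iters):                  # Jacobi iterations on the energy
        num = sum(np.roll(depth, s, a) * np.roll(wgt, s, a) for s, a in nbrs)
        den = sum(np.roll(wgt, s, a) for s, a in nbrs)
        smooth = num / np.maximum(den, 1e-8)
        # blend the data term with the weighted neighbourhood average
        depth = np.where(has_sample,
                         (lam * sample + smooth) / (lam + 1.0), smooth)
    return depth

rng = np.random.default_rng(6)
d_high = upscale_depth(rng.random((16, 16)), rng.random((64, 64)), scale=4)
```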
Potential Applicability of Just-In-Time Inventory Management Within the Navy.
1995-12-01
Naval Postgraduate School, Monterey, CA, 1992. 7. Heizer & Render, PRODUCTION AND OPERATIONS MANAGEMENT (3rd ed.), Massachusetts, Simon & Schuster, Inc. ... applicable end item. If the failure of an item would render the end item inoperable, the item is assigned a Military Essentiality Code (MEC) of "1".
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides realtime views of various station and shuttle configurations.
Future Evolution of Virtual Worlds as Communication Environments
NASA Astrophysics Data System (ADS)
Prisco, Giulio
Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.
Mor, Orna; Gozlan, Yael; Wax, Marina; Mileguir, Fernando; Rakovsky, Avia; Noy, Bina; Mendelson, Ella; Levy, Itzchak
2015-11-01
HIV-1 RNA monitoring, both before and during antiretroviral therapy, is an integral part of HIV management worldwide. Measurements of HIV-1 viral loads are expected to assess the copy numbers of all common HIV-1 subtypes accurately and to be equally sensitive at different viral loads. In this study, we compared for the first time the performance of the NucliSens v2.0, RealTime HIV-1, Aptima HIV-1 Quant Dx, and Xpert HIV-1 viral load assays. Plasma samples (n = 404) were selected on the basis of their NucliSens v2.0 viral load results and HIV-1 subtypes. Concordance, linear regression, and Bland-Altman plots were assessed, and mixed-model analysis was utilized to compare the analytical performance of the assays for different HIV-1 subtypes and for low and high HIV-1 copy numbers. Overall, high concordance (>83.89%), high correlation values (Pearson r values of >0.89), and good agreement were observed among all assays, although the Xpert and Aptima assays, which provided the most similar outputs (estimated mean viral loads of 2.67 log copies/ml [95% confidence interval [CI], 2.50 to 2.84 log copies/ml] and 2.68 log copies/ml [95% CI, 2.49 to 2.86 log copies/ml], respectively), correlated best with the RealTime assay (89.8% concordance, with Pearson r values of 0.97 to 0.98). These three assays exhibited greater precision than the NucliSens v2.0 assay. All assays were equally sensitive for subtype B and AG/G samples and for samples with viral loads of 1.60 to 3.00 log copies/ml. The NucliSens v2.0 assay underestimated A1 samples and those with viral loads of >3.00 log copies/ml. The RealTime assay tended to underquantify subtype C (compared to the Xpert and Aptima assays) and subtype A1 samples. The Xpert and Aptima assays were equally efficient for detection of all subtypes and viral loads, which renders these new assays most suitable for clinical HIV laboratories. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Styszko, Katarzyna; Kupiec, Krzysztof
2016-10-01
In this study the diffusion coefficients of isoproturon, diuron and cybutryn in acrylate and silicone resin-based renders were determined. The diffusion coefficients were determined by measuring concentrations of biocides in the liquid phase after contact with the renders for specific time intervals. The mathematical solution of the transient diffusion equation for an infinite plate contacted on one side with a limited volume of water was used to calculate the diffusion coefficients. The diffusion coefficients through the acrylate render were 8.10·10⁻⁹ m² s⁻¹ for isoproturon, 1.96·10⁻⁹ m² s⁻¹ for diuron and 1.53·10⁻⁹ m² s⁻¹ for cybutryn. The results for the silicone render were lower by one order of magnitude. Compounds with a high diffusion coefficient for one polymer had likewise high values for the other polymer. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Neural Correlates of Humor Creativity
Amir, Ori; Biederman, Irving
2016-01-01
Unlike passive humor appreciation, the neural correlates of real-time humor creation have been unexplored. As a case study for creativity, humor generation uniquely affords a reliable assessment of a creative product’s quality with a clear and relatively rapid beginning and end, rendering it amenable to neuroimaging that has the potential for reflecting individual differences in expertise. Professional and amateur “improv” comedians and controls viewed New Yorker cartoon drawings while being scanned. For each drawing, they were instructed to generate either a humorous or a mundane caption. Greater comedic experience was associated with decreased activation in the striatum and medial prefrontal cortex (mPFC), but increased activation in temporal association regions (TMP). Less experienced comedians manifested greater activation of mPFC, reflecting their deliberate search through TMP association space. Professionals, by contrast, tend to reap the fruits of their spontaneous associations with reduced reliance on top-down guided search. PMID:27932965
Electromagnetic Thermography Nondestructive Evaluation: Physics-based Modeling and Pattern Mining
Gao, Bin; Woo, Wai Lok; Tian, Gui Yun
2016-01-01
The electromagnetic mechanisms of Joule heating and thermal conduction in conductive material characterization broaden the scope for implementation in real thermography-based nondestructive testing and evaluation (NDT&E) systems by imparting sensitivity and conformability and by allowing fast imaging-based detection, which is necessary for efficiency. The issue of automatic material evaluation has not been fully addressed by researchers, and it marks a crucial first step in analyzing the structural health of the material, which in turn sheds light on understanding the mechanisms that produce defects. In this study, we bridge the gap between the physics world and the mathematical modeling world. We develop a physics-mathematical modeling and mining route in the spatial-, time-, frequency-, and sparse-pattern domains. This is a significant step towards realizing deeper insight into electromagnetic thermography (EMT) and automatic defect identification. This renders EMT a promising candidate for highly efficient and yet flexible NDT&E. PMID:27158061
Looking within the zebrafish to understand the tuberculous granuloma.
Ramakrishnan, Lalita
2013-01-01
Tuberculosis is characterized by the formation of complex immune cell aggregates called granulomas, which for nearly a century have been viewed as critical host-beneficial structures to restrict bacterial growth and spread. A different view has now emerged from real-time visualization of granuloma formation and its consequences in the optically transparent and genetically tractable zebrafish larva. Pathogenic mycobacteria have developed mechanisms to use host granulomas for their expansion and dissemination, at least during the innate phases of infection. Host processes that are intended to be beneficial, namely the death of infected macrophages and their subsequent phagocytosis by macrophages newly recruited to the growing granuloma, are harnessed by mycobacteria for their own benefit. Mycobacteria can also render the granuloma a safe haven in the more advanced stages of infection. An understanding of the host and bacterial pathways involved in tuberculous granuloma formation may suggest new ways to combat mycobacterial infection.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
Campagne, F; Weinstein, H
1999-01-01
An algorithmic method for drawing residue-based schematic diagrams of proteins on a 2D page is presented and illustrated. The method allows the creation of rendering engines dedicated to a given family of sequences, or fold. The initial implementation provides an engine that can produce a 2D diagram representing secondary structure for any transmembrane protein sequence. We present the details of the strategy for automating the drawing of these diagrams. The most important part of this strategy is the development of an algorithm for laying out the residues of a loop that connects two arbitrary points of a 2D plane. As implemented, this algorithm is suitable for real-time modification of the loop layout. This work is of interest for the representation and analysis of data from (1) protein databases, (2) mutagenesis results, or (3) various kinds of protein context-dependent annotations or data.
Analysis of spectra using correlation functions
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Norton, Robert H.
1988-01-01
A novel method is presented for the quantitative analysis of spectra based on the properties of the cross correlation between a real spectrum and either a numerical synthesis or laboratory simulation. A new goodness-of-fit criterion called the heteromorphic coefficient H is proposed that has the property of being zero when a fit is achieved and varying smoothly through zero as the iteration proceeds, providing a powerful tool for automatic or near-automatic analysis. It is also shown that H can be rendered substantially noise-immune, permitting the analysis of very weak spectra well below the apparent noise level and, as a byproduct, providing Doppler shift and radial velocity information with excellent precision. The technique is in regular use in the Atmospheric Trace Molecule Spectroscopy (ATMOS) project and operates in an interactive, realtime computing environment with turn-around times of a few seconds or less.
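The heteromorphic coefficient H itself is defined in the paper; as a minimal sketch of the cross-correlation machinery it builds on, the snippet below correlates an observed spectrum with a synthetic template, reads a shift (usable as Doppler or radial-velocity information in pixel units) off the correlation peak, and computes a signed peak-asymmetry statistic that, like H, passes through zero at a good fit. The asymmetry formula and window width are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def correlation_fit(observed, template, half_window=5):
    """Cross-correlation analysis of a spectrum against a synthetic template.

    Returns the lag of maximum correlation and a signed asymmetry of the
    correlation peak: zero when the left/right wings balance, i.e. when
    the template matches the observation well (illustration only; the
    paper's H coefficient is defined differently)."""
    obs = (observed - observed.mean()) / observed.std()
    tpl = (template - template.mean()) / template.std()
    corr = np.correlate(obs, tpl, mode="full")
    lags = np.arange(-len(tpl) + 1, len(obs))
    k = corr.argmax()
    left = corr[max(k - half_window, 0):k].sum()
    right = corr[k + 1:k + 1 + half_window].sum()
    asym = (right - left) / corr[k]   # smoothly crosses zero at a fit
    return lags[k], asym
```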
Implementation of a high-speed face recognition system that uses an optical parallel correlator.
Watanabe, Eriko; Kodate, Kashiko
2005-02-10
We implement a fully automatic fast face recognition system by using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. The binary real-only matched filter is devised for the sake of face recognition, and the system is optimized by the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected by the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we acquired low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.
Fractional-order in a macroeconomic dynamic model
NASA Astrophysics Data System (ADS)
David, S. A.; Quintino, D. D.; Soliani, J.
2013-10-01
In this paper, we applied the Riemann-Liouville approach in order to carry out numerical simulations of a set of equations that represent a fractional-order macroeconomic dynamic model. It is a generalization of a dynamic model recently reported in the literature. The aforementioned equations have been simulated for several cases involving integer and non-integer order analysis, with different values of the fractional order. The time histories and the phase diagrams have been plotted to visualize the effect of the fractional-order approach. The new contribution of this work arises from the fact that the macroeconomic dynamic model proposed here involves the public sector deficit equation, which renders the model more realistic and complete when compared with the ones encountered in the literature. The results reveal that the fractional-order macroeconomic model can exhibit reasonable behavior for real macroeconomic systems and might offer greater insights towards the understanding of these complex dynamic systems.
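As a sketch of how such simulations are commonly carried out, the snippet below integrates a system D^α x = f(x) with the explicit Grünwald-Letnikov scheme, the standard discretization of the Riemann-Liouville derivative. The model, step size, and right-hand side are placeholders; the paper's macroeconomic equations (including the public sector deficit equation) are not reproduced here.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights c_j = (-1)^j C(alpha, j),
    computed by the standard recurrence."""
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def solve_fractional(f, x0, alpha, t_end, h):
    """Integrate D^alpha x = f(x) with the explicit Grünwald-Letnikov
    scheme: x_k = h^alpha f(x_{k-1}) - sum_{j=1..k} c_j x_{k-j}.
    For alpha = 1 this reduces to the forward Euler method."""
    n = int(t_end / h)
    x = np.zeros((n + 1, len(x0)))
    x[0] = x0
    c = gl_weights(alpha, n + 1)
    for k in range(1, n + 1):
        # memory term: all past states weighted by the GL coefficients
        mem = np.tensordot(c[1:k + 1], x[k - 1::-1][:k], axes=(0, 0))
        x[k] = h ** alpha * f(x[k - 1]) - mem
    return x

# e.g. fractional relaxation: solve_fractional(lambda x: -x,
#                                              np.array([1.0]), 0.9, 5.0, 0.01)
```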
Systems and methods for interactive virtual reality process control and simulation
Daniel, Jr., William E.; Whitney, Michael A.
2001-01-01
A system for visualizing, controlling and managing information includes a data analysis unit for interpreting and classifying raw data using analytical techniques. A data flow coordination unit routes data from its source to other components within the system. A data preparation unit handles the graphical preparation of the data and a data rendering unit presents the data in a three-dimensional interactive environment where the user can observe, interact with, and interpret the data. A user can view the information on various levels, from a high overall process level view, to a view illustrating linkage between variables, to view the hard data itself, or to view results of an analysis of the data. The system allows a user to monitor a physical process in real-time and further allows the user to manage and control the information in a manner not previously possible.
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser compared to native applications, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations, and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy.
Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-07-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.
Multi-channel medical imaging system
Frangioni, John V
2013-12-31
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
Web GIS in practice VIII: HTML5 and the canvas element for interactive online mapping
2010-01-01
HTML5 is being developed as the next major revision of HTML (Hypertext Markup Language), the core markup language of the World Wide Web. It aims at reducing the need for proprietary, plug-in-based rich Internet application (RIA) technologies such as Adobe Flash. The canvas element is part of HTML5 and is used to draw graphics using scripting (e.g., JavaScript). This paper introduces Cartagen, an open-source, vector-based, client-side framework for rendering plug-in-free, offline-capable, interactive maps in native HTML5 on a wide range of Web browsers and mobile phones. Cartagen was developed at MIT Media Lab's Design Ecology group. Potential applications of the technology as an enabler for participatory online mapping include mapping real-time air pollution, citizen reporting, and disaster response, among many other possibilities. PMID:20199681
Neilson, Peter D; Neilson, Megan D
2005-09-01
Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
Multi-channel medical imaging system
Frangioni, John V.
2016-05-03
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
The Neural Correlates of Humor Creativity.
Amir, Ori; Biederman, Irving
2016-01-01
Unlike passive humor appreciation, the neural correlates of real-time humor creation have been unexplored. As a case study for creativity, humor generation uniquely affords a reliable assessment of a creative product's quality with a clear and relatively rapid beginning and end, rendering it amenable to neuroimaging that has the potential for reflecting individual differences in expertise. Professional and amateur "improv" comedians and controls viewed New Yorker cartoon drawings while being scanned. For each drawing, they were instructed to generate either a humorous or a mundane caption. Greater comedic experience was associated with decreased activation in the striatum and medial prefrontal cortex (mPFC), but increased activation in temporal association regions (TMP). Less experienced comedians manifested greater activation of mPFC, reflecting their deliberate search through TMP association space. Professionals, by contrast, tend to reap the fruits of their spontaneous associations with reduced reliance on top-down guided search.
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained; during this process, the sum of absolute differences (SAD) is chosen as the similarity measure function. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision depth map requirements and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is high. On average, the method achieves satisfactory image quality: the SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353 dB and the image histogram similarity reaches 93.77%.
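The depth step above relies on SAD matching; a minimal brute-force block-matching sketch of that idea follows (the window size, disparity range, and lack of any pyramidal acceleration are illustrative simplifications, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=32, win=9):
    """Dense disparity from a rectified grayscale pair by minimizing the
    sum of absolute differences (SAD) over a square window, scanning
    candidate shifts along the horizontal epipolar line."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    best = np.full(left.shape, np.inf)
    disp = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # candidate match
        cost = uniform_filter(np.abs(left - shifted), size=win)
        better = cost < best                         # keep lowest SAD
        disp[better], best[better] = d, cost[better]
    return disp
```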
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic, supporting various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.
NASA Astrophysics Data System (ADS)
Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena
2014-03-01
In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language by displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
2D to 3D conversion implemented in different hardware
NASA Astrophysics Data System (ADS)
Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli
2015-02-01
Conversion of available 2D data for release as 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation but remains a very difficult problem for rendering novel images precisely. Different approaches to DM reconstruction exist, among them manual and semiautomatic methods that can produce high-quality DMs, but they are time consuming and computationally expensive. In this paper, several hardware implementations of designed frameworks for automatic 3D color video generation based on 2D real video sequences are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane, DM estimation using stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally anaglyph 3D scene generation. The technique has been implemented on a TMS320DM648 DSP, in Matlab's Simulink on a Windows 7 PC, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The time values needed, mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
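The final anaglyph-generation block of such a pipeline is simple enough to sketch directly; a minimal red-cyan anaglyph under the usual channel-split assumption:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left view, green and blue
    from the right view. Inputs are HxWx3 uint8 arrays of a rectified
    stereo pair (or a frame and its synthesized second view)."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```

Viewed through red-cyan glasses, each eye then receives its intended view, producing the binocular disparity that creates the 3D effect.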
Implementation of a virtual laryngoscope system using efficient reconstruction algorithms.
Luo, Shouhua; Yan, Yuling
2009-08-01
Conventional fiberoptic laryngoscopy may cause discomfort to the patient, and in some cases it can lead to side effects that include perforation, infection and hemorrhage. Virtual laryngoscopy (VL) can overcome these problems, and it may further lower the risk of operation failures. Very few virtual endoscope (VE) based investigations of the larynx have been described in the literature. CT data sets from a healthy subject were used for the VL studies. An algorithm of preprocessing and region-growing for 3-D image segmentation was developed. An octree-based approach is applied in our VL system, which facilitates rapid construction of iso-surfaces. Locating techniques are used for fast rendering and navigation (fly-through). Our VL visualization system provides real-time and efficient fly-through navigation. The virtual camera can be arranged so that it moves along the airway in either direction. Snapshots were taken during fly-throughs. The system can automatically adjust the direction of the virtual camera and prevent collisions between the camera and the wall of the airway. A virtual laryngoscope (VL) system using the OpenGL (Open Graphics Library) platform for interactive rendering and 3D visualization of the laryngeal framework and upper airway is established. OpenGL is supported on major operating systems and works with every major windowing system. The VL system runs on regular PC workstations and was successfully tested and evaluated using CT data from a normal subject.
VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.
Yu, Bowen; Silva, Claudio T
2017-01-01
Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
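To make the subset flow model concrete, here is a toy sketch (not VisFlow's actual API, which is a web application): every edge carries a set of row indices into one original table, so any rendering property assigned downstream remains unambiguously attached to traceable input rows.

```python
class SubsetNode:
    """A node in a subset-flow diagram: it receives a set of row indices
    of one original table and emits a subset of them, so downstream
    visual properties can always be traced back to input rows.
    (Toy illustration of the model only.)"""
    def __init__(self, table):
        self.table = table

    def filter(self, rows, predicate):
        # Emit the subset of incoming rows satisfying the predicate.
        return {r for r in rows if predicate(self.table[r])}

table = [{"mpg": 31}, {"mpg": 18}, {"mpg": 24}]
node = SubsetNode(table)
selected = node.filter(set(range(len(table))), lambda row: row["mpg"] > 20)
print(sorted(selected))  # rows 0 and 2 survive, still indices into `table`
```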
Impact of an in situ laboratory on physician expectancy.
Brulé, Romain; Sarazin, Marianne; Tayeb, Nicole; Roubille, Martine; Szymanowicz, Anton
2018-01-01
Biological examinations are essential to clinicians' medical care. The aim of this study was to assess clinicians' expectations in healthcare facilities and their perception of medical biology under different types of organization. We performed a prospective transversal study by electronic questionnaire conducted among 242 practitioners in four healthcare facilities. The aspects explored were: quality, reliability, rendering time of examination results, and biology platform support. Analyses were conducted after weighting the sample. Sixty-one clinicians responded (25.2% [19.7-30.7]). The rendering time of examinations is the main criterion mentioned, with requirements ranging from less than one hour in case of emergency (81.5% [71.8-91.2] of the answers) to less than 72 hours for specialized examinations (81.5% [71.8-91.2] of the answers). Better collaboration with biologists is expected by clinicians (54.7% [50.9-58.5]). Satisfaction with the biology platform support and the rendering time of emergency results was significantly (p<0.005) lower in facilities without an on-site laboratory. In conclusion, although medical biology performance is generally satisfactory within medical facilities, it is nonetheless affected when the laboratory is not on site. The rendering time of examinations, which depends on the biology platform support functions and the proximity of the laboratory, remains the main criterion. Clinician-biologist collaboration, which increases the medico-economic efficiency of patient healthcare, appears to be an essential criterion in a structural conception of medical biology.
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point, and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of localization errors is clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
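One standard way to realize the measurement phase described above is a least-squares rigid alignment of the tracked sensor positions to the calibrated phantom geometry, with the residuals serving as displacement errors. The Kabsch-based sketch below illustrates that construction; it is not necessarily the authors' exact error metric.

```python
import numpy as np

def localization_errors(measured, reference):
    """Rigidly align measured EM-sensor positions (Nx3) to the known
    phantom geometry (Nx3) with the Kabsch algorithm, then return the
    per-sensor residual displacement errors."""
    mc = measured - measured.mean(axis=0)       # remove translation
    rc = reference - reference.mean(axis=0)
    u, _, vt = np.linalg.svd(mc.T @ rc)         # cross-covariance SVD
    d = np.sign(np.linalg.det(u @ vt))          # guard against reflections
    rot = (u @ np.diag([1.0, 1.0, d]) @ vt).T   # optimal rotation
    aligned = mc @ rot.T
    return np.linalg.norm(aligned - rc, axis=1) # residuals = errors
```

Thresholding these residuals, as the operator-provided thresholds suggest, yields the clustered confidence zones for display.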
Rouabah, K; Varoquaux, A; Caporossi, J M; Louis, G; Jacquier, A; Bartoli, J M; Moulin, G; Vidal, V
2016-11-01
The purpose of this study was to assess the feasibility and utility of image fusion (Easy-TIPS) obtained from pre-procedure CT angiography and per-procedure real-time fluoroscopy for portal vein puncture during transjugular intrahepatic portosystemic shunt (TIPS) placement. Eighteen patients (15 men, 3 women) with a mean age of 63 years (range: 48-81 years; median age, 65 years) were included in the study. All patients underwent TIPS placement by two groups of radiologists (one group with <3 years of experience and one with ≥3 years) using fusion imaging obtained from three-dimensional computed tomography angiography of the portal vein and real-time fluoroscopic images of the portal vein. Image fusion was used to guide the portal vein puncture during TIPS placement. At the end of the procedure, the interventional radiologists evaluated the utility of fusion imaging for portal vein puncture during TIPS placement. Mismatch between three-dimensional computed tomography angiography and real-time fluoroscopic images of the portal vein on image fusion was quantitatively analyzed. Post-treatment CT time, number of puncture attempts, total radiation exposure and radiation from the retrograde portography were also recorded. Image fusion was considered useful for portal vein puncture in 13/18 TIPS procedures (72%). The mean post-treatment time to obtain fusion images was 16.4 minutes. The 3D volume-rendered CT angiography images were strictly superimposed on direct portography in 10/18 procedures (56%). The mean mismatch value was 0.69 cm in height and 0.28 cm laterally. A mean of 4.6 portal vein puncture attempts was made. Eight patients required fewer than three attempts. The mean radiation dose from retrograde portography was 421.2 dGy·cm², corresponding to a mean additional exposure of 19%. Fusion imaging from pre-procedural CT angiography is feasible, safe and makes portal puncture easier during TIPS placement. Copyright © 2016 Editions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.
Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny
2016-11-01
Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation as the complexity of anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering ability of four different haptic hand controllers was analyzed and compared to find the best commercial haptic hand controller. A test-rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback under different drilling conditions. This method can be used to assess vibration rendering ability in VR simulation systems and to select ideal haptic devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Randell, Rebecca; Ruddle, Roy A; Thomas, Rhys G; Mello-Thoms, Claudia; Treanor, Darren
2014-10-01
Digital pathology promises a number of efficiency benefits in surgical pathology, yet the longer time required to review a virtual slide compared with a glass slide currently represents a significant barrier to the routine use of digital pathology. We aimed to create a novel workstation that enables pathologists to view a case as quickly as on the conventional microscope. The Leeds Virtual Microscope (LVM) was evaluated using a mixed factorial experimental design. Twelve consultant pathologists took part, each viewing one long cancer case (12-25 slides) on the LVM and one on a conventional microscope. Total time taken and diagnostic confidence were similar for the microscope and LVM, as was the mean slide viewing time. On the LVM, participants spent a significantly greater proportion of the total task time viewing slides and revisited slides more often. The unique design of the LVM, enabling real-time rendering of virtual slides while providing users with a quick and intuitive way to navigate within and between slides, makes use of digital pathology in routine practice a realistic possibility. With further practice with the system, diagnostic efficiency on the LVM is likely to increase further. Copyright © 2014 Elsevier Inc. All rights reserved.
Soft bilateral filtering volumetric shadows using cube shadow maps
Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang
2017-01-01
Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real-time while preserving crisp boundaries. This research presents a new technique for generating high quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling in calculating the ray march. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points used in evaluating light scattering and then introducing bilateral interpolation to improve the volumetric shadows, which significantly removes the inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, which shows its potential for interactive applications. PMID:28632740
New service interface for River Forecasting Center derived quantitative precipitation estimates
Blodgett, David L.
2013-01-01
For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult for scientists outside the climatic sciences to work with. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
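A canonical VPython program of the kind described above, a purely computational loop with no explicit graphics code, where the Visual module renders the scene concurrently (written here against the current `vpython` package; the 2001-era module was imported as `visual`, but the program structure is the same):

```python
from vpython import sphere, box, vector, rate, color

# Create 3D objects; the Visual thread renders them many times per second
# and handles mouse zoom/rotate on its own.
floor = box(pos=vector(0, -5, 0), size=vector(12, 0.2, 12))
ball = sphere(pos=vector(0, 5, 0), radius=0.5, color=color.red)
velocity = vector(0, 0, 0)
g, dt = vector(0, -9.8, 0), 0.01

while True:
    rate(100)                      # limit the loop to 100 updates per second
    velocity = velocity + g * dt   # purely computational physics update
    ball.pos = ball.pos + velocity * dt
    if ball.pos.y < floor.pos.y + ball.radius:
        velocity.y = -velocity.y   # elastic bounce off the floor
```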
Alignment and position visualization methods for the biomedical imaging and therapy (BMIT) MRT lift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bree, Michael, E-mail: michael.bree@lightsource.ca; Miller, Denise; Kerr, Graham
The Microbeam Radiation Therapy (MRT) Lift is an eight stage positioning and scanning system at the Canadian Light Source’s BMIT Facility. Alignment of the sample with the beam using the MRT Lift is a time consuming and challenging task. The BMIT Group has developed a Python-based MRT Lift positioning and control program that uses a combination of computational and iterative methods to independently adjust the sample’s X, Y, Z, pitch and roll positions. The program offers “1-Click” alignment of the sample to the beam. Use of a wireframe visualization technique enables even minute movements to be illustrated. Proposed movements and the resulting MRT Lift position can be manually verified before being applied. Optional integration with the SolidWorks modelling platform allows high quality renderings of the MRT Lift in its current or proposed position to be displayed in real time. Human factors principles are incorporated into the program with the objective of delivering easy to use controls for this complex device.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
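A minimal illustration of the rate-rescaling idea on a toy stiff network (A ⇌ B fast and quasi-equilibrated, B → C slow and rate-determining); the network, rate constants, and scale factor are assumptions for illustration, and the paper's statistical criteria and likelihood-ratio estimators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_events, scale_fast=100.0):
    """Kinetic Monte Carlo (Gillespie) simulation with rate-constant
    rescaling: the fast, quasi-equilibrated pair's rate constants are
    divided by a common factor, relieving stiffness while leaving the
    slow, rate-determining step (and hence the net flux) nearly intact."""
    k = np.array([1e4 / scale_fast,   # A -> B (rescaled)
                  1e4 / scale_fast,   # B -> A (rescaled)
                  1.0])               # B -> C (rate determining, unscaled)
    x = np.array([100, 0, 0])         # counts of A, B, C
    stoich = np.array([[-1, 1, 0], [1, -1, 0], [0, -1, 1]])
    t = 0.0
    for _ in range(n_events):
        a = k * np.array([x[0], x[1], x[1]])   # propensities
        a_tot = a.sum()
        if a_tot == 0:
            break                              # all of A converted to C
        t += rng.exponential(1.0 / a_tot)      # exponential waiting time
        j = rng.choice(3, p=a / a_tot)         # pick the next reaction
        x += stoich[j]
    return t, x

print(simulate(10000))
```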
Enhancements to VTK enabling Scientific Visualization in Immersive Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish
Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.
Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.
Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael
2014-09-01
In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
A Reduced Order Model for Whole-Chip Thermal Analysis of Microfluidic Lab-on-a-Chip Systems
Wang, Yi; Song, Hongjun; Pant, Kapil
2013-01-01
This paper presents a Krylov subspace projection-based Reduced Order Model (ROM) for whole microfluidic chip thermal analysis, including conjugate heat transfer. Two key steps in the reduced order modeling procedure are described in detail, including (1) the acquisition of a 3D full-scale computational model in the state-space form to capture the dynamic thermal behavior of the entire microfluidic chip; and (2) the model order reduction using the Block Arnoldi algorithm to markedly lower the dimension of the full-scale model. Case studies using practically relevant thermal microfluidic chip are undertaken to establish the capability and to evaluate the computational performance of the reduced order modeling technique. The ROM is compared against the full-scale model and exhibits good agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) and over three orders-of-magnitude acceleration in computational speed. The salient model reusability and real-time simulation capability renders it amenable for operational optimization and in-line thermal control and management of microfluidic systems and devices. PMID:24443647
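The Block Arnoldi reduction step can be sketched compactly in dense linear algebra (a moment-matching projection at s = 0; the dense inverse below stands in for the sparse LU solve a real whole-chip model would use, and the one-pass orthogonalization is a simplification of production implementations):

```python
import numpy as np

def block_arnoldi_rom(A, B, n_blocks):
    """Model order reduction for x' = A x + B u: build an orthonormal
    basis V of the block Krylov subspace span{A^-1 B, A^-2 B, ...} with
    a block Arnoldi iteration, then project the state-space model onto V."""
    lu = np.linalg.inv(A)            # stand-in for a sparse LU solve
    V0, _ = np.linalg.qr(lu @ B)     # first orthonormal block
    basis = [V0]
    for _ in range(n_blocks - 1):
        W = lu @ basis[-1]           # next Krylov block
        for Q in basis:              # orthogonalize against earlier blocks
            W -= Q @ (Q.T @ W)
        Q, _ = np.linalg.qr(W)
        basis.append(Q)
    V = np.hstack(basis)
    return V.T @ A @ V, V.T @ B, V   # reduced (A_r, B_r) and the basis
```

The reduced pair (VᵀAV, VᵀB) matches the leading moments of the full model's transfer function at s = 0, which is what allows a drastically smaller system to reproduce the spatiotemporal thermal response at a fraction of the cost.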
Gustavell, Tina; Langius-Eklöf, Ann; Wengström, Yvonne; Segersvärd, Ralf; Sundberg, Kay
2018-03-27
Patients who have undergone pancreaticoduodenectomy because of pancreatic cancer experience distressing symptoms and unmet supportive care needs after discharge. To meet these needs, we have developed a mobile health app (Interaktor) for daily assessment of symptoms and access to self-care advice that includes a risk assessment model for alerts with real-time interactions with professionals. The study aim was to develop and test a version of the Interaktor app adapted for patients who have undergone pancreaticoduodenectomy. The app was developed and tested for feasibility in 6 patients during 4 weeks. One nurse monitored and responded to alerts. Logged data from the app were collected, and all participants were interviewed about their experiences. Adherence to reporting daily was 84%. Alerts were generated in 41% of the reports. The patients felt reassured and cared for and received support for symptom management. The app was easy to use, had relevant content, and had few technical problems, although suggestions for improvement were given. The daily reporting of symptoms and having access to a nurse in real time in the case of an alarming symptom seem to enhance symptom management and render a feeling of security in patients. Some modifications of the app are needed before use in a larger sample. Daily reporting of symptoms after pancreaticoduodenectomy enhances symptom management, self-care, and participation without being a burden to patients, indicating that mobile health can be used in clinical practice by patients with poor prognosis who experience severe symptoms.
NASA Astrophysics Data System (ADS)
Yu, K. C.; Champlin, D. M.; Goldsworth, D. A.; Raynolds, R. G.; Dechesne, M.
2011-09-01
Digital Earth visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. At the Denver Museum of Nature & Science (DMNS), we have used such visualization technologies, including real-time virtual reality software running in the immersive digital "fulldome" Gates Planetarium, to impact the community through topical policy presentations. DMNS public lectures have covered regional issues like water resources, as well as global topics such as earthquakes, tsunamis, and resource depletion. The Gates Planetarium allows an audience to have an immersive experience, similar to virtual reality "CAVE" environments found in academia, that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend dynamically changing geospatial datasets in an exciting and engaging fashion. Surveys and interviews show that these talks are effective in heightening visitor interest in the subjects weeks or months after the presentation. Many visitors take additional steps to learn more, while one was so inspired that she actively worked to bring the same programming to her children's school. These preliminary findings suggest that fulldome real-time visualizations can have a substantial long-term impact on an audience's engagement and interest in science topics.
SYNAISTHISI: an IoT-powered smart visitor management and cognitive recommendations system
NASA Astrophysics Data System (ADS)
Thanos, Giorgos Konstandinos; Karafylli, Christina; Karafylli, Maria; Zacharakis, Dimitris; Papadimitriou, Apostolis; Dimitros, Kostantinos; Kanellopoulou, Konstantina; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.
2016-05-01
Location-based and navigation services are much needed to help visitors and audiences of big events, complex buildings, shopping malls, airports and large companies. However, the lack of GPS and proper mapping indoors usually renders location-based applications and services useless or simply not applicable in such environments. SYNAISTHISI introduces a mobile application for smartphones which offers navigation capabilities outside and inside buildings and through multiple floor levels. The application comes together with a suite of helpful services, including personalized recommendations, visit/event management and a search functionality for navigating to a specific location, event or person. As users find their way towards their destination, NFC-enabled checkpoints and Bluetooth beacons assist them, while offering re-routing, check-in/out capabilities and useful information about ongoing meetings and nearby events. The application is supported by a back-end GIS system which can provide a broad and clear view to event organizers, campus managers and field personnel for purposes of event logistics, safety and security. The SYNAISTHISI system offers several competitive advantages, including (a) seamless navigation as users move between outdoor and indoor areas and different floor levels, using innovative routing algorithms; (b) connection to, and support from, an IoT platform for localization and real-time information feedback; (c) dynamic personalized recommendations based on user profile, location and real-time information provided by the IoT platform; and (d) indoor localization without the need for expensive infrastructure and installations.
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, A. C. M.; Fonseca, Jaime C.; Lima, Estevão.; Vilaça, João. L.
2014-03-01
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory, by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle onto a preoperative volume, the patient, the 3D image data and the tracker system were registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and the minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed accurately until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
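Aligning tracker, patient and image spaces from a small set of corresponding fiducial points, as described above, is classically done with paired-point rigid registration. The sketch below shows the generic Kabsch/SVD solution to that subproblem; it is a textbook technique under stated assumptions, not the i3DPuncture implementation.

```python
# Hedged sketch of paired-point rigid registration (Kabsch algorithm):
# find the rotation R and translation t that best align two point sets.
import numpy as np

def rigid_register(P, Q):
    """Minimize sum ||R @ P_i + t - Q_i||^2 for paired 3D points.

    P, Q : (N, 3) arrays of corresponding points (e.g., N = 7 fiducials).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t
```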
Interactive investigations into planetary interiors
NASA Astrophysics Data System (ADS)
Rose, I.
2015-12-01
Many processes in Earth science are difficult to observe or visualize due to the large timescales and length scales over which they operate. The dynamics of planetary mantles are particularly challenging, as we cannot even look at the rocks involved. As a result, much teaching material on mantle dynamics relies on static images and cartoons, many of which are decades old. Recent improvements in computing power and technology (largely driven by game and web development) have allowed for advances in real-time physics simulations and visualizations, but these have been slow to affect Earth science education. Here I demonstrate a teaching tool for mantle convection and seismology which solves the equations for conservation of mass, momentum, and energy in real time, allowing users to make changes to the simulation and immediately see the effects. Users can ask and answer questions about what happens when they add heat in one place, take it away from another place, or increase the temperature at the base of the mantle. They can also pause the simulation and, while it is paused, create and visualize seismic waves traveling through the mantle. These features allow for investigations into, and discussions about, plate tectonics, earthquakes, hot spot volcanism, and planetary cooling. The simulation is rendered to the screen using OpenGL, and is cross-platform. It can be run as a native application for maximum performance, but it can also be embedded in a web browser for easy deployment and portability.
Jayapandian, Catherine P.; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D.; Zhang, Guo-Qiang; Sahoo, Satya S.
2013-01-01
Epilepsy is the most common serious neurological disorder affecting 50–60 million persons worldwide. Multi-modal electrophysiological data, such as electroencephalography (EEG) and electrocardiography (EKG), are central to effective patient care and clinical research in epilepsy. Electrophysiological data is an example of clinical “big data” consisting of more than 100 multi-channel signals with recordings from each patient generating 5–10GB of data. Current approaches to store and analyze signal data using standalone tools, such as Nihon Kohden neurology software, are inadequate to meet the growing volume of data and the need for supporting multi-center collaborative studies with real-time and interactive access. We introduce the Cloudwave platform in this paper, which features a Web-based intuitive signal analysis interface integrated with a Hadoop-based data processing module implemented on clinical data stored in a “private cloud”. Cloudwave has been developed as part of the National Institute of Neurological Disorders and Stroke (NINDS) funded multi-center Prevention and Risk Identification of SUDEP Mortality (PRISM) project. The Cloudwave visualization interface provides real-time rendering of multi-modal signals with “montages” for EEG feature characterization over 2TB of patient data generated at the Case University Hospital Epilepsy Monitoring Unit. Results from performance evaluation of the Cloudwave Hadoop data processing module demonstrate one order of magnitude improvement in performance over 77GB of patient data. (Cloudwave project: http://prism.case.edu/prism/index.php/Cloudwave) PMID:24551370
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
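The modal-decomposition reduction compared above can be illustrated for a generic linear thermal model. The sketch below keeps only the k slowest-decaying eigenmodes of x' = A x + B u; the matrices are placeholders, not the paper's bioheat model, and a production version would keep complex-conjugate mode pairs together to obtain a real-valued reduced model.

```python
# Sketch of modal-decomposition model order reduction for x' = A x + B u,
# y = C x: truncate to the k most significant (slowest) eigenmodes.
import numpy as np

def modal_reduce(A, B, C, k):
    w, V = np.linalg.eig(A)                   # eigenmodes of the thermal system
    idx = np.argsort(w.real)[::-1][:k]        # slowest-decaying modes first
    # Note: conjugate pairs should be selected together for a real ROM.
    Vk = V[:, idx]
    Wk = np.linalg.pinv(Vk)                   # left (inverse) modal transform
    return Wk @ A @ Vk, Wk @ B, C @ Vk        # reduced (Ar, Br, Cr)
```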
THE BUDGET, THE FAMILY PHYSICIAN AND THE PATIENT: A DIFFERENT APPROACH.
Keidar, Ilan
2014-10-01
Modern medicine offers better health and longer life expectancy, yet consumes huge budgets. The Israeli Health Insurance Law (IHIL) regulates the delivery of health services to all of Israel's residents through Health Maintenance Organizations (HMOs). These organizations face constant struggles with their budgets and have to reduce expenses as much as possible, without harming the level of health care rendered. In the constant necessity to restrain expenses are embedded difficulties that might cause unrest in the Public Health System (PHS). At the tip of the public health spear are the Family Physicians (FPs), who must supply the best possible medicine to their patients while confronting budgetary constraints that have the potential to hinder the level of medicine rendered; at times they must also mediate between conflicting interests: their patients' wish to receive the best available health measures, the FP's own medical beliefs, the HMO's directives, and the FP's wish to retain the number of their (content) patients. One of the World Health Organization's (WHO) concepts, shared by Israel, is that soul and body are inseparable and must be addressed simultaneously in the process of healing. Real life at Family Physicians' clinics shows that, despite the Israeli physicians' very high professional level and the Israeli Health System's (IHS) high efficiency, such a process does not always take place, owing to budgetary constraints, workload and various other reasons; on the personal level, there are patients who feel that the "system" has treated them incorrectly and in an unsatisfactory manner. Unsatisfied and restless patients might cause undesirable consequences for the "system": losing faith in one's FP and HMO, which might lead to reduced cooperation in the healing process, amplification of lawsuits, and patients leaving their FPs and their HMOs. Addressing the patient's soul and body as an inseparable unit would help harness the patient's will and mental resources to the healing process. A content and satisfied patient tends to argue less with his or her FP, to be more satisfied with the rendered health services, to be more grateful for the provided services, to be healthier (as some would argue), and to be more motivated and committed to the healing process; thus all parties benefit emotionally, physically and from the budgetary point of view. This article offers a simple yet very effective way to make HMO members feel that they are not alone at their sensitive time of illness, and that behind the FP's desk sits not only a professional figure but also a human being who is there to support their "souls" as well, thereby harnessing patients to the healing process and reducing their possible discontent with the HMO's ways of rendering services, including, at times, cases in which the required or desired medical measure is not received. The correct use of Interpersonal Communication (IPC) by the FP encompasses tremendous opportunities to achieve the desired connection between soul and body during the healing process, and offers many personal, budgetary and other benefits to the patient, the FP, the HMO and the health system at large. Its use is almost costless, its contribution to the health system is huge and, last but not least, it can easily be learned and adopted by FPs in their daily work with their patients.
Video-Game-Like Engine for Depicting Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Upchurch, Paul R.
2009-01-01
GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OsX, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.
Lamb wave detection of limpet mines on ship hulls.
Bingham, Jill; Hinders, Mark; Friedman, Adam
2009-12-01
This paper describes the use of ultrasonic guided waves for identifying the mass loading due to underwater limpet mines on ship hulls. The Dynamic Wavelet Fingerprint Technique (DWFT) is used to render the guided wave mode information in two-dimensional binary images, because the waveform features of interest are too subtle to identify in the time domain. The use of wavelets allows both time and scale features from the original signals to be retained, and image processing can be used to automatically extract features that correspond to the arrival times of the guided wave modes. For further understanding of how the guided wave modes propagate through real structures, a parallel-processing 3D elastic wave simulation is developed using the elastodynamic finite integration technique (EFIT). This full-field technique models situations that are too complex for analytical solutions, such as built-up 3D structures. The simulations have produced informative visualizations of the guided wave modes in the structures, as well as mimicking directly the output from sensors placed in the simulation space for direct comparison to experiments. Results from both drydock and in-water experiments with dummy mines are also shown.
A novel modular ANN architecture for efficient monitoring of gases/odours in real-time
NASA Astrophysics Data System (ADS)
Mishra, A.; Rajput, N. S.
2018-04-01
Data pre-processing is widely used for enhanced classification of gases. However, it suppresses the concentration variances of different gas samples. The classical solution of using a single artificial neural network (ANN) architecture is also inefficient and renders degraded quantification. In this paper, a novel modular ANN design has been proposed to provide an efficient and scalable solution in real time. Here, two separate ANN blocks, viz. a classifier block and a quantifier block, have been used to provide efficient and scalable gas monitoring in real time. The classifier ANN consists of two stages. In the first stage, Net 1-NDSRT has been trained to transform raw sensor responses into corresponding virtual multi-sensor responses using the normalized difference sensor response transformation (NDSRT). These responses are fed to the second stage (i.e., Net 2-classifier). The Net 2-classifier has been trained to classify gas samples into their respective classes. Further, the quantifier block has parallel ANN modules, multiplexed to quantify each gas. Therefore, the classifier ANN decides the class and the quantifier ANN decides the exact quantity of the gas/odour present in the respective sample of that class.
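The shape of this pipeline (transform, then classify, then dispatch to a per-class quantifier) can be sketched as follows. The abstract does not give the NDSRT formula, so the pairwise normalized difference used below is an assumption for illustration only, as are all function names.

```python
# Hedged illustration of the modular pipeline: feature transform ->
# classifier -> per-class quantifier. The NDSRT form is an assumption.
import numpy as np

def ndsrt(raw):
    """Map raw responses (n_sensors,) to pairwise normalized differences,
    a roughly concentration-invariant feature (assumed NDSRT form)."""
    r = np.asarray(raw, dtype=float)
    i, j = np.triu_indices(len(r), k=1)
    return (r[i] - r[j]) / (r[i] + r[j] + 1e-12)

def monitor(raw, classifier, quantifiers):
    """classifier: features -> class id; quantifiers[c]: features -> amount."""
    x = ndsrt(raw)
    c = classifier(x)             # stage 2: which gas/odour class?
    return c, quantifiers[c](x)   # multiplexed module: how much of it?
```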
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is one of the most flexible and effective approaches to inspect large volumetric data such as medical and biological images. In conventional volume rendering, it is often time consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the variables of an illumination model to different structures manually, and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering on the basis of topology, to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of an input volumetric data set. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize the illuminance perception differences of structures, a two-phase topology-aware illuminance perception contrast model is proposed, based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.
LED Light Characteristics for Surgical Shadowless Lamps and Surgical Loupes
Kinugawa, Yoshitaka; Nobae, Yuichi; Suzuki, Toshihiro; Tanaka, Yoshiyuki; Toda, Ikuko; Tsubota, Kazuo
2015-01-01
Background: Blue light has more energy than longer-wavelength light and can penetrate the eye to reach the retina. When surgeons use magnifying loupes under intensive surgical shadowless lamps for a better view of the surgical field, the total luminance is about 200 times brighter than that of typical office lighting. In this study, the effects of 2 types of shadowless lamps were compared. Moreover, the effect of various eyeglasses, which support magnifying loupes, on both the light energy and color rendering was considered. Methods: Measurements were made on 3 variables: light transmittance, light intensity, and color rendering. Results: Under shadowless lamps, the light energy increased with low-magnification loupes and decreased with high-magnification loupes. Filtering eyeglasses reduced the energy, especially in conditions where the low-magnification loupe was used. The best color-rendering index values were obtained with computer eyeglasses under conventional light-emitting diode shadowless lamps, and with no glass or lightly yellow-tinted lenses under less-blue light-emitting diodes. Conclusions: Microsurgeons are exposed to strong lighting throughout their career, and proper color rendering must be considered for easier recognition. Light toxicity and loss of color rendering can be reduced with an appropriate combination of shadowless lamps and colored eyeglasses. PMID:26893987
A Low-Cost and Lightweight 3D Interactive Real Estate-Purposed Indoor Virtual Reality Application
NASA Astrophysics Data System (ADS)
Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.
2017-11-01
Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing a property through 2D plans or renders, the visualized interior architecture of an on-sale unbuilt property is demonstrated beforehand, so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we have created a real-estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real estate-purposed VR application, and that it satisfied the expectations of property buyers.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies to constant dispersions of the loop-gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well known that the LTI stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
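The adaptation rule described above (raise the loop gain on large filtered tracking error, lower it when flex/slosh excitation is detected, and clamp it to a stability-preserving range) can be sketched conceptually. All gains, thresholds and filters below are hypothetical illustrations, not the flight values or the published adaptive law.

```python
# Conceptual sketch of a bounded loop-gain adaptation rule in the spirit of
# AAC; every numeric parameter here is an assumption for illustration.
def update_loop_gain(k, err_filt, flex_energy, dt,
                     k_min=0.8, k_max=1.5, up_rate=0.2, down_rate=0.5,
                     err_thresh=1.0, flex_thresh=0.1):
    if flex_energy > flex_thresh:        # flex/slosh modes excited: back off
        k -= down_rate * dt
    elif abs(err_filt) > err_thresh:     # large tracking error: tighten loop
        k += up_rate * dt
    # Range limit: the bound motivated by the LTV stability question above.
    return min(max(k, k_min), k_max)
```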
Attempted suicide: an ethological perspective.
Goldney, R D
1980-01-01
Attempted suicides are complex phenomena. Examination of the observed and described behavior permits an analogy to be drawn between them and the biological concept of "conservation withdrawal." Such an analogy takes cognizance of innate responses, individual differences and changing socio-cultural standards. This formulation more readily allows a non-judgemental approach to the patient, with acceptance of both the appeal and the wish-to-die components. In addition, it offers a different view of the nosological debate, rendering the differences more apparent than real.
The "erotic transference": some technical and countertransferential difficulties.
Book, H E
1995-01-01
This paper highlights dynamics that may interfere with the therapist's identifying and addressing the erotic transference: (1) deficient training; (2) theoretical orientations that devalue the transference while espousing a "real" relationship including self-disclosure; (3) countertransference responses to the erotic transference; and (4) clinical errors of focusing on the manifest erotic transference while overlooking significant but latent pre-oedipal, oedipal, aggressive, or selfobject issues. Inattention to these dynamics may render the therapist vulnerable to sexual acting out with his patient.
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and of the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates reasonable-to-optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and of the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
Remote volume rendering pipeline for mHealth applications
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald
2014-03-01
We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
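The interaction loop of such a thin-client architecture (client sends a small interaction event; server renders and streams back an encoded frame) can be sketched as below. The renderer and encoder are injected as callables because they stand in for a GPU volume renderer and a low-latency hardware H.264 encoder; this shows only the protocol shape, not the paper's system.

```python
# Sketch of a thin-client render/stream loop, assuming render_frame and
# encode_h264 are supplied (hypothetical stand-ins for GPU renderer and
# hardware encoder).
import socket

def serve(volume, render_frame, encode_h264, host="0.0.0.0", port=9000):
    """render_frame(volume, event) -> image; encode_h264(image) -> bytes."""
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    with conn:
        while True:
            event = conn.recv(256)         # small client message: camera, transfer function
            if not event:
                break                      # client disconnected
            image = render_frame(volume, event)
            packet = encode_h264(image)
            # Length-prefixed framing so the client can split the stream.
            conn.sendall(len(packet).to_bytes(4, "big") + packet)
```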
Multifunctional millimeter-wave radar system for helicopter safety
NASA Astrophysics Data System (ADS)
Goshi, Darren S.; Case, Timothy J.; McKitterick, John B.; Bui, Long Q.
2012-06-01
A multi-featured sensor solution has been developed that enhances the operational safety and functionality of small airborne platforms, representing an invaluable stride toward enabling higher-risk tactical missions. This paper demonstrates results from a recently developed multi-functional sensor system that integrates a high-performance millimeter-wave radar front end, an evidence-grid-based integration processing scheme, and incorporation into a 3D Synthetic Vision System (SVS) display. The front-end architecture consists of a W-band real-beam scanning radar that generates a high-resolution real-time radar map and operates with an adaptable antenna architecture, currently configured with an interferometric capability for target height estimation. The raw sensor data is further processed within an evidence-grid-based integration functionality that results in high-resolution maps of the region surrounding the platform. Lastly, the accumulated radar results are displayed in a fully rendered 3D SVS environment integrated with local database information to provide the best representation of the surrounding environment. The integrated system concept will be discussed and initial results from an experimental flight test of this developmental system will be presented. Specifically, the forward-looking operation of the system demonstrates the system's ability to produce high-precision terrain mapping with obstacle detection and avoidance capability, showcasing the system's versatility in a true operational environment.
Visualizing 3D data obtained from microscopy on the Internet.
Pittet, J J; Henn, C; Engel, A; Heymann, J B
1999-01-01
The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be replaced by interactive three-dimensional displays. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cubes approach, allowing interactive isosurfacing. A second node performs three-dimensional (3D) texture-based volume rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, which aims to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
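A present-day equivalent of the isosurfacing step is readily sketched with scikit-image's marching cubes; this is the same underlying technique the node implements, shown here on a synthetic volume standing in for a microscopy data set (not the paper's VRML code).

```python
# Sketch: extract a triangulated isosurface from a 3D scalar field with
# marching cubes. Requires numpy and scikit-image.
import numpy as np
from skimage import measure

# Synthetic 3D density field standing in for a microscopy volume.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2) * 8)

# Triangulated isosurface at a chosen density threshold.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```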
PACS-based interface for 3D anatomical structure visualization and surgical planning
NASA Astrophysics Data System (ADS)
Koehl, Christophe; Soler, Luc; Marescaux, Jacques
2002-05-01
The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT scans or MRI. This software provides 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.
NASA Technical Reports Server (NTRS)
Apodaca, Tony; Porter, Tom
1989-01-01
The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.
Zhang, Yaxin; He, Xin; Zeng, Guangming; Chen, Tan; Zhou, Zeyu; Wang, Hongtao; Lu, Wenjing
2015-11-01
The photodegradation of pentachlorophenol (PCP) in a surfactant-containing (single and mixed) complex system using graphene-TiO2 (GT) as catalyst was investigated. The objective was to better understand the behavior of surfactants in a GT catalysis system for its possible use in remediation technology for soil contaminated by hydrophobic organic compounds (HOCs). In the single-surfactant system, surfactant molecules aggregated on GT via hydrogen bonding and electrostatic forces; in the mixed-surfactant system, nonideal mixing between nonionic and anionic surfactants rendered the GT surface covered with mixed admicelles. Both effects helped incorporate PCP molecules into surfactant aggregates on the catalyst surface. Hence, the targeted pollutants were rendered easily available to photo-yielded oxidative radicals, and the photodegradation efficiency was significantly enhanced. Finally, real soil washing-photocatalysis trials proved that anionic-nonionic mixed-surfactant soil washing coupled with graphene-TiO2 photocatalysis can be a promising technology for HOC-polluted soil remediation.
Computer 3D site model generation based on aerial images
NASA Astrophysics Data System (ADS)
Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.
1997-07-01
The technology for 3D model design of real-world scenes and their photorealistic rendering are current topics of investigation. Such technology is attractive for a vast variety of applications: military mission planning, crew training, civil engineering, architecture and virtual reality entertainment, to name just a few. 3D photorealistic models of urban areas are often discussed now as an upgrade to existing 2D geographic information systems. The possibility of generating site models with small details depends on two main factors: the available source dataset and computer power resources. In this paper a PC-based technology is presented, so that scenes of medium resolution (scale of 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale of 1:14000) and true-color on-ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.
ERIC Educational Resources Information Center
Sauer, Jeff; Murphy, Sam
1997-01-01
NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes, rather than sharing space and workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…
Interactive Web-based Visualization of Atomic Position-time Series Data
NASA Astrophysics Data System (ADS)
Thapa, S.; Karki, B. B.
2017-12-01
Extracting and interpreting the information contained in large sets of time-varying three-dimensional positional data for the constituent atoms of a simulated material is a challenging task. We have recently implemented a web-based visualization system to analyze position-time series data extracted from local or remote hosts. It involves a pre-processing step for data reduction, in which uninteresting parts of the data are skipped uniformly (at the full atomic configuration level) or non-uniformly (at the atomic species level or individual atom level). An atomic configuration snapshot is rendered using the ball-stick representation and can be animated by rendering successive configurations. The entire atomic dynamics can be captured as trajectories by rendering the atomic positions at all time steps together as points. The trajectories can be manipulated at both the species and atomic levels, so that we can focus on one or more trajectories of interest, and they can also be superimposed with the instantaneous atomic structure. The implementation was done using WebGL and Three.js for graphical rendering, HTML5 and Javascript for the GUI, and Elasticsearch and JSON for data storage and retrieval within the Grails Framework. We have applied our visualization system to simulation datasets for proton-bearing forsterite (Mg2SiO4), an abundant mineral of Earth's upper mantle. Visualization reveals that protons (hydrogen ions) incorporated as interstitials are much more mobile than protons substituting at the host Mg and Si cation sites. The proton diffusion appears to be anisotropic, with high mobility along the x-direction and only limited, discrete jumps in the other two directions.
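The two data-reduction modes mentioned above (uniform skipping at the configuration level, and non-uniform, per-atom skipping) can be sketched in a few lines. The array layout and function names are illustrative assumptions, not the system's actual pre-processing code.

```python
# Sketch of trajectory data reduction for position-time series, assuming
# positions has shape (n_steps, n_atoms, 3).
import numpy as np

def reduce_uniform(positions, stride):
    """Keep every stride-th full atomic configuration."""
    return positions[::stride]

def reduce_per_atom(positions, keep_atoms, stride_map):
    """Keep selected atoms, each with its own temporal stride."""
    out = {}
    for a in keep_atoms:
        out[a] = positions[::stride_map.get(a, 1), a]  # (n_kept, 3) trajectory
    return out
```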
Assessment of synthetic image fidelity
NASA Astrophysics Data System (ADS)
Mitchell, Kevin D.; Moorhead, Ian R.; Gilmore, Marilyn A.; Watson, Graham H.; Thomson, Mitch; Yates, T.; Troscianko, Tomasz; Tolhurst, David J.
2000-07-01
Computer generated imagery is increasingly used for a wide variety of purposes, ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery is dependent on the anticipated use; for example, when used for camouflage design it must be physically correct spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor and the required frame rate. Rendering of natural outdoor scenes is particularly demanding, because of the statistical variation in materials and illumination, atmospheric effects and the complex geometric structures of objects such as trees. The accuracy of simulated imagery has tended to be assessed subjectively in the past. First- and second-order statistics do not capture many of the essential characteristics of natural scenes. Direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery, and of comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural networks, independent component analysis (ICA), higher-order statistics and models of human contrast perception. This paper will present an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.
Surgical operation using lighting goggle composed of white LED arrays
NASA Astrophysics Data System (ADS)
Shimada, Junichi; Kawakami, Yoichi; Fujita, Shigeo
2001-12-01
Everywhere in the world, the highest quality and quantity of lighting is required during surgical operations. However, because surgical approaches vary in type and angle, common ceiling-mounted halogen surgical lighting systems cannot always provide an adequate amount of light, as the surgeons' heads hinder the illumination from reaching the operative field. Here, we present a newly designed surgical lighting system composed of white LEDs equipped on both sides of goggles, which directs the lighting beams to the gazing point. With this system, surgeons need only wear light plastic goggles with high-quality LEDs made by Nichia. In fact, we succeeded in the first internal shunt operation in the left forearm using the surgical LED lighting system on 11 September 2000. The electrical power for the system was supplied by a lithium-ion battery for 2 hours. Since the white LEDs used were composed of InGaN blue emitters and YAG yellow phosphors, the color rendering property was not sufficient in the reddish colors. Therefore, in the next approach, it is very important to develop the spectral distribution of white LEDs to render the inherent colors of raw flesh such as skin, blood, fat tissue and internal organs. To improve the color rendering of red colors, some adjustments should be made in the fluorescent layers. Goggle design is also very important for white LEDs to break into the real practical market.
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-01-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
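The core S-NDF idea (for every screen pixel, accumulate a distribution of the surface normals that project into it, so shading can be re-evaluated later without re-rendering the particles) can be sketched as a per-pixel histogram over a spherical binning. The binning scheme and resolutions below are assumptions for illustration, not the paper's GPU implementation.

```python
# Hedged sketch of screen-space NDF accumulation: per-pixel histograms of
# unit surface normals, binned by spherical angles.
import numpy as np

def accumulate_sndf(pixel_ids, normals, n_pixels, n_theta=8, n_phi=16):
    """pixel_ids: (N,) pixel index per sample; normals: (N, 3) unit normals."""
    theta = np.arccos(np.clip(normals[:, 2], -1, 1))          # polar angle
    phi = np.arctan2(normals[:, 1], normals[:, 0]) + np.pi    # azimuth in [0, 2pi]
    tb = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pb = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    sndf = np.zeros((n_pixels, n_theta * n_phi))
    np.add.at(sndf, (pixel_ids, tb * n_phi + pb), 1.0)        # histogram per pixel
    # Normalize to a distribution; shading can later integrate any BRDF
    # against these histograms without touching the particle data.
    return sndf / np.maximum(sndf.sum(axis=1, keepdims=True), 1)
```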
NASA Astrophysics Data System (ADS)
Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.
2010-12-01
The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and the client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified to present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40Hz to strong motion @ 200Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyberinfrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
Augmented reality in laparoscopic surgical oncology.
Nicolau, Stéphane; Soler, Luc; Mutter, Didier; Marescaux, Jacques
2011-09-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty, since depth perception is usually dramatically reduced, the field of view is limited and the sense of touch is transmitted by an instrument. These drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can increase the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image without the need for a pre-processing step thanks to volume rendering. But better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. These systems also show the currently limited interactivity due to soft-organ movements and interaction between surgical instruments and organs. Although current automatic AR systems show the feasibility of such an approach, they still rely on specific and expensive equipment which is not available in clinical routine. Moreover, they are not robust enough, due to the high complexity of developing a real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that it will become a standard requirement for future computer-assisted surgical oncology. In this article, we will explain the concept of AR and its principles. Then, we will review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we will discuss the future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated in the operating room. Copyright © 2011 Elsevier Ltd. All rights reserved.
Changing the color of textiles with realistic visual rendering
NASA Astrophysics Data System (ADS)
Hébert, Mathieu; Henckens, Lambert; Barbier, Justine; Leboulleux, Lucie; Page, Marine; Roujas, Lucie; Cazier, Anthony
2015-03-01
Fast and easy preview of a fabric without having to produce samples would be very profitable for textile designers, but remains a technological challenge. As a first step towards this objective, we study the possibility of making images of a real sample and changing the colors of its yarns virtually while preserving the shine and shadow texture. We consider two types of fabrics: Jacquard weave fabrics made of polyester warp and weft yarns of different colors, and satin ribbons made of polyester and metallic yarns. For the Jacquard fabric, we take a color picture with a scanner of a sample in which the yarns have contrasted colors, threshold this image in order to distinguish the pixels corresponding to each yarn, and accordingly modify their hue and chroma values. This method is simple to operate but does not enable simulation of the angle-dependent shine. A second method, tested on the satin ribbon made of black polyester and achromatic metallic yarns, is based on polarized imaging. We analyze the polarization state of the reflected light, which is different for dielectric and metallic materials illuminated by polarized light. We then add a fixed color value to the pixels representing the polyester yarns and modify the hue and chroma of the pixels representing the metallic yarns. This was performed for many incident angles of light, in order to render the twinkling effect displayed by these ribbons. We verified on a few samples that the simulated previews reproduce real pictures with visually acceptable accuracy.
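The first method (threshold a scan into per-yarn masks, then shift hue and chroma while leaving the lightness channel untouched) is easy to sketch. The HSV decomposition below is one plausible realization, not the authors' exact color pipeline; keeping the value channel intact is what preserves the shine and shadow texture.

```python
# Sketch: recolor one yarn's pixels while preserving shine/shadow, using
# scikit-image's HSV conversion (hue and saturation are in [0, 1]).
import numpy as np
from skimage import color

def recolor_yarn(rgb, mask, new_hue, new_sat):
    """rgb: (H, W, 3) float image in [0, 1]; mask: (H, W) bool, one yarn."""
    hsv = color.rgb2hsv(rgb)
    hsv[..., 0][mask] = new_hue      # replace hue of the yarn's pixels
    hsv[..., 1][mask] = new_sat      # replace chroma/saturation
    # hsv[..., 2] (value) is left untouched: shine and shadows survive.
    return color.hsv2rgb(hsv)
```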
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time-series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation, such that the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part; (ii) apply an ICA algorithm based on diagonalization of fourth-order cumulants to decompose the new complex data set in (i); and (iii) recognize dominant non-stationary patterns as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm. Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86(7), 477-497, doi:10.1007/s00190-011-0532-5.
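Step (i) of CICA, constructing the complex data set via a Hilbert transformation, can be sketched directly with SciPy's analytic signal. The cumulant-based complex ICA of steps (ii)-(iii) is not reproduced here; the toy series below is purely illustrative.

```python
# Sketch of CICA step (i): build complex data whose real part is the
# centered observation and whose imaginary part carries its quadrature
# (rate-of-variability) component, via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def to_complex(X):
    """X: (n_time, n_series) centered observations -> complex analytic data."""
    return hilbert(X, axis=0)      # real part: data; imaginary: Hilbert transform

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
X = np.column_stack([np.sin(2 * np.pi * t), rng.standard_normal(t.size)])
Z = to_complex(X - X.mean(axis=0))
print(Z.shape, Z.dtype)            # (1000, 2) complex128
```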
Karst spring real time monitoring to identify the groundwater circulation in the feeding aquifer
NASA Astrophysics Data System (ADS)
Sappa, Giuseppe; Ferranti, Flavia; De Filippi, Francesco M.; Cardilo, Giulia
2017-04-01
About one quarter of the world's population is largely or entirely dependent on groundwater from karst systems. However, karst aquifers have specific hydraulic and hydrogeological characteristics that render them highly vulnerable to pollution from human activities. The intrinsic vulnerability of a karst aquifer takes into account the inherent geological, hydrological and hydrogeological characteristics of an area; however, it is independent of the nature of contaminants. Groundwater circulation is usually very rapid, responding directly to high rainfall input. As a consequence, in case of contamination, these systems are characterized by limited attenuation processes in the unsaturated zone. The analysis of karst spring responses to rainfall events, at the catchment scale, is one promising approach to groundwater flow characterization. Karst springs are, in fact, an important source of information for understanding the circulation characteristics of such complex systems. The karst Pertuso Spring, located in the Upper Valley of the Aniene River, is the main outlet of a large aquifer that is one of the most important water resources in the southeastern part of the Latium Region, Central Italy, used for drinking, agricultural and hydroelectric supplies. This paper deals with Pertuso Spring data collected from December 2014 to March 2016 using a multiparametric probe that directly interfaces with a data logger for real-time recording of hourly data. This instrument simultaneously measures up to 6 parameters (pH, groundwater level, temperature, electric conductivity, redox and dissolved oxygen). In particular, water temperature, electrical conductivity (EC) and rainfall data from meteorological stations have been studied to identify the groundwater circulation in the aquifer feeding Pertuso Spring. The study of the lag time between peak rainfall and the peaks of temperature and EC makes it possible to highlight the seasonal vulnerability of this aquifer.
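A minimal sketch of one way to estimate the lag between rainfall and the spring's electrical-conductivity response from equally spaced hourly records, using cross-correlation; the variable names, hourly spacing, and synthetic example are assumptions, since the paper does not specify its lag-estimation procedure:

```python
import numpy as np

def peak_lag_hours(rain, ec):
    """rain, ec: equally spaced hourly series (1-D np.ndarray).
    Returns the lag (in hours) at which EC best correlates with rain."""
    r = (rain - rain.mean()) / rain.std()
    e = (ec - ec.mean()) / ec.std()
    xc = np.correlate(e, r, mode="full")        # full cross-correlation
    lags = np.arange(-len(r) + 1, len(r))
    pos = lags >= 0                             # EC can only respond after rain
    return lags[pos][np.argmax(xc[pos])]

# Synthetic check: EC echoes a rainfall pulse train 72 hours later.
rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 1.0, 5000)
ec = np.roll(rain, 72) + 0.1 * rng.standard_normal(5000)
print(peak_lag_hours(rain, ec))                 # ~72
```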
Skarecky, Douglas; Yu, Hon; Linehan, Jennifer; Morales, Blanca; Su, Min-Ying; Fwu, Peter; Ahlering, Thomas
2017-10-01
To study the combination of thermal magnetic resonance imaging (MRI) and novel hypothermic cooling, via an endorectal cooling balloon (ECB), to assess the effective dispersion and temperature drop in pelvic tissue, with the aim of potentially reducing the inflammatory cascade in surgical applications. Three male subjects, before undergoing robot-assisted radical prostatectomy, were cooled via an ECB rendered MRI compatible for patient safety before ECB hypothermia. MRI studies were performed using a 3T scanner and included a T2-weighted anatomic scan of the pelvic structures, followed by a temperature mapping scan. The sequence was performed repeatedly during the cooling experiment, while the phase data were collected in real time using an integrated MR-high-intensity focused ultrasound workstation. Pelvic cooling was instituted with a cooling console located outside the MRI magnet room. Pelvic cooling proved feasible: a temperature drop of 20-25 degrees in the ECB was measured in real time after an initial delay of 10-15 seconds for the ECB to cool. The thermal MRI anatomic images of the prostate and neurovascular bundle demonstrate cooling of 10-15 degrees at this interface; cooling also extends ~5 degrees into the prostate itself and disperses into the pelvic region. An MRI-compatible ECB coupled with thermal MRI is a feasible method to assess effective hypothermic diffusion and saturation in pelvic structures. By inference, hypothermia-induced rectal cooling could potentially reduce inflammation, scarring, and fistula in radical prostatectomy, as well as in other urologic tissue procedures such as high-intensity focused ultrasound, external beam radiation therapy, radioactive seed implants, transurethral microwave therapy, and transurethral resection of the prostate. Copyright © 2017 Elsevier Inc. All rights reserved.
Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.
Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N
2018-05-01
Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and the concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen and devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed in an ultraviolet-cured polymer from the computed tomography scanner and laser scanner models only, as the light scanner did not produce virtual models suitable for printing. The computed tomography scanner-derived models required extensive postfabrication modification to fit the existing defects, whereas the laser scanner models fit well within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering, and each technique required hardware at price points ranging from $0 to approximately $100,000. The laser scanner models produced the best-quality parts, with near-perfect fit in the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Vos, Winnok H., E-mail: winnok.devos@uantwerpen.be; Cell Systems and Imaging Research Group, Department of Molecular Biotechnology, Ghent University, Ghent; Beghuin, Didier
As commercial space flights have become feasible and long-term extraterrestrial missions are planned, it is imperative that the impact of space travel and the space environment on human physiology be thoroughly characterized. Scrutinizing the effects of potentially detrimental factors such as ionizing radiation and microgravity at the cellular and tissue level demands adequate visualization technology. Advanced light microscopy (ALM) is the leading tool for non-destructive structural and functional investigation of static as well as dynamic biological systems. In recent years, technological developments and advances in photochemistry and genetic engineering have boosted all aspects of resolution, readout and throughput, rendering ALM ideally suited for biological space research. While various microscopy-based studies have addressed cellular response to space-related environmental stressors, biological endpoints have typically been determined only after the mission, leaving an experimental gap that is prone to bias results. An on-board, real-time microscopical monitoring device can bridge this gap. Breadboards and even fully operational microscope setups have been conceived, but they need to be rendered more compact and versatile. Most importantly, they must allow addressing the impact of gravity, or the lack thereof, on physiologically relevant biological systems in space and in ground-based simulations. In order to delineate the essential functionalities for such a system, we have reviewed the pending questions in space science, the relevant biological model systems, and the state-of-the art in ALM. Based on a rigorous trade-off, in which we recognize the relevance of multi-cellular systems and the cellular microenvironment, we propose a compact, but flexible concept for space-related cell biological research that is based on light sheet microscopy.
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing highly intuitive information in image form, and this advantage can be greatly enhanced by a well-chosen visualization method. The situation is more complicated for volumetric data. Volume data has the advantage of containing 3D spatial information; unfortunately, it cannot be displayed directly. Because images are always displayed in 2D space, visualization is key and creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and carries a high computational burden, so specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic tissue-like texture from the photoacoustic data: rays reflected at the surface were visualized in white, while reflections from deep tissue were visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
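A minimal sketch of a depth-dependent color transfer function of the kind described, mapping a voxel's photoacoustic amplitude and its depth below the skin to an RGBA sample, followed by standard front-to-back compositing along a ray; the endpoint colors and blend weights are illustrative assumptions, not the authors' calibrated function:

```python
import numpy as np

# Illustrative endpoint colors: white near the surface, skin-red at depth.
SURFACE_RGB = np.array([1.0, 1.0, 1.0])
DEEP_RGB    = np.array([0.8, 0.2, 0.2])

def transfer(amplitude, depth_mm, max_depth_mm=10.0):
    """Map a photoacoustic amplitude in [0, 1] and the voxel's depth
    below the skin to an RGBA sample; deeper voxels shift toward red."""
    w = np.clip(depth_mm / max_depth_mm, 0.0, 1.0)
    rgb = (1.0 - w) * SURFACE_RGB + w * DEEP_RGB
    alpha = float(amplitude) ** 2        # opacity grows with signal strength
    return np.append(rgb, alpha)

def composite(samples):
    """Standard front-to-back compositing of RGBA samples along one ray."""
    color, alpha = np.zeros(3), 0.0
    for s in samples:
        color += (1.0 - alpha) * s[3] * s[:3]
        alpha += (1.0 - alpha) * s[3]
        if alpha > 0.99:                 # early ray termination
            break
    return color

# Example: a strong surface echo followed by weaker deep signals.
ray = [transfer(0.9, 0.5), transfer(0.4, 6.0), transfer(0.3, 9.0)]
print(composite(ray))
```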
Vertex shading of the three-dimensional model based on ray-tracing algorithm
NASA Astrophysics Data System (ADS)
Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan
2016-10-01
The ray tracing algorithm is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technique in many industries that rely on three-dimensional (3D) content, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. The rendering quality depends on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved with a quad-tree data structure that adaptively subdivides a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve the rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the model is subdivided finely enough, effects equivalent to pixel shading are obtained. In practice, a compromise can be struck between efficiency and visual quality.
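A minimal sketch of the adaptive subdivision idea: a triangle whose vertex brightnesses differ by more than a tolerance is split 1-to-4 at its edge midpoints, and the test recurses on the children; the shading function, tolerance, and depth limit are illustrative assumptions:

```python
import numpy as np

def subdivide(tri, shade, tol=0.05, depth=0, max_depth=6):
    """tri: (3, 3) array of vertex positions; shade(p) -> scalar brightness.
    Returns triangles fine enough that each one's vertex brightnesses
    agree to within tol (or max_depth is reached)."""
    b = [shade(v) for v in tri]
    if max(b) - min(b) <= tol or depth == max_depth:
        return [tri]
    # Edge midpoints -> four child triangles (1-to-4 split).
    m01, m12, m20 = (tri[0]+tri[1])/2, (tri[1]+tri[2])/2, (tri[2]+tri[0])/2
    children = [np.array([tri[0], m01, m20]),
                np.array([m01, tri[1], m12]),
                np.array([m20, m12, tri[2]]),
                np.array([m01, m12, m20])]
    out = []
    for c in children:
        out.extend(subdivide(c, shade, tol, depth + 1, max_depth))
    return out

# Example: brightness from a point light with inverse-square falloff.
light = np.array([0.2, 0.8, 1.0])
shade = lambda p: 1.0 / (1.0 + np.sum((p - light) ** 2))
tris = subdivide(np.eye(3), shade)   # subdivide the unit triangle
print(len(tris), "triangles after adaptive subdivision")
```

Shading is then evaluated only at vertices and interpolated across the (now small) triangles, which is why adequate subdivision approaches the quality of per-pixel shading.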
Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines
NASA Astrophysics Data System (ADS)
Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.
2017-01-01
Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU 'circular memory' data buffers that enable ready introduction of arbitrary functions into the processing path for 'streams' of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
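A generic sketch of the pattern Bifrost packages up, namely independent pipeline stages connected through bounded buffers so producers and consumers run in parallel; this uses only the Python standard library and is not Bifrost's actual API:

```python
import threading
import queue

def stage(fn, inq, outq):
    """Wrap a black-box function as a pipeline process: read a block,
    transform it, pass it on. None is the end-of-stream sentinel."""
    def run():
        while True:
            block = inq.get()
            if block is None:
                if outq is not None:
                    outq.put(None)       # propagate shutdown downstream
                return
            result = fn(block)
            if outq is not None:
                outq.put(result)
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

# Bounded queues play the role of circular buffers between stages.
raw, detrended = queue.Queue(maxsize=8), queue.Queue(maxsize=8)

t1 = stage(lambda blk: [x - sum(blk)/len(blk) for x in blk], raw, detrended)
t2 = stage(lambda blk: print("peak:", max(blk)), detrended, None)

for i in range(3):                       # feed three data blocks
    raw.put([float(i + j) for j in range(16)])
raw.put(None)                            # signal end of stream
t1.join(); t2.join()
```

The bounded queues give the same back-pressure behavior as ring buffers: a fast producer blocks instead of exhausting memory, and stages can be added or swapped without touching each other's code.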
Phase-detected Brillouin optical correlation-domain reflectometry
NASA Astrophysics Data System (ADS)
Mizuno, Yosuke; Hayashi, Neisei; Fukuda, Hideyuki; Nakamura, Kentaro
2018-05-01
Optical fiber sensing techniques based on Brillouin scattering have been extensively studied for structural health monitoring owing to their capability of distributed strain and temperature measurement. Although a higher signal-to-noise ratio (leading to high spatial resolution and high-speed measurement) is generally obtained for two-end-access systems, they reduce the degree of freedom in embedding the sensors into structures, and render the measurement no longer feasible when extremely high loss or breakage occurs at a point of the sensing fiber. To overcome these drawbacks, a one-end-access sensing technique called Brillouin optical correlation-domain reflectometry (BOCDR) has been developed. BOCDR offers high spatial resolution and cost efficiency, but its conventional configuration suffers from relatively low-speed operation. In this paper, we review the recently developed high-speed configurations of BOCDR, including phase-detected BOCDR, with which we demonstrate real-time distributed measurement by tracking a propagating mechanical wave. We also demonstrate breakage detection with a wide strain dynamic range.
Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data
NASA Astrophysics Data System (ADS)
Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai
2017-04-01
With the lunar exploration activities carried out by the Chang'E-1, Chang'E-2 and Chang'E-3 lunar probes, a large amount of lunar data has been obtained, including topographical and image data covering the whole Moon, as well as panoramic image data of the area close to the Chang'E-3 landing point. In this paper, we construct an immersive virtual Moon system based on the acquired lunar exploration data by using advanced stereoscopic visualization technology, which will help scholars carry out research on lunar topography, assist further lunar science exploration, and facilitate lunar science outreach to the public. We focus on building the lunar stereoscopic visualization system as a combination of software and hardware, using binocular stereoscopic display technology, a real-time rendering algorithm for massive terrain data, and panorama-based virtual scene construction to achieve an immersive virtual tour of the whole Moon and of the local moonscape at the Chang'E-3 landing point.
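A minimal sketch of the binocular-display step: each eye gets a horizontally offset camera and an off-axis (asymmetric) projection frustum, the standard construction for producing the parallax a stereoscopic terrain view needs; all parameter values below are illustrative:

```python
import numpy as np

def off_axis_frustum(eye_offset, near=0.1, far=1e4, fov_y=np.deg2rad(60),
                     aspect=16/9, screen_dist=2.0):
    """Asymmetric OpenGL-style projection matrix for one eye, shifted by
    eye_offset (half the interocular distance; + right eye, - left eye).
    screen_dist is the zero-parallax (convergence) plane distance."""
    top = near * np.tan(fov_y / 2)
    bottom, half_w = -top, top * aspect
    shift = eye_offset * near / screen_dist   # shear toward the screen plane
    left, right = -half_w - shift, half_w - shift
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

iod = 0.065                               # interocular distance, scene units
proj_left  = off_axis_frustum(-iod / 2)   # render the scene once per eye,
proj_right = off_axis_frustum(+iod / 2)   # translating the camera by +/- iod/2
```

Objects at screen_dist land at zero parallax; nearer terrain pops out and farther terrain recedes, which is what creates the immersive depth impression.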
RNA splicing regulated by RBFOX1 is essential for cardiac function in zebrafish.
Frese, Karen S; Meder, Benjamin; Keller, Andreas; Just, Steffen; Haas, Jan; Vogel, Britta; Fischer, Simon; Backes, Christina; Matzas, Mark; Köhler, Doreen; Benes, Vladimir; Katus, Hugo A; Rottbauer, Wolfgang
2015-08-15
Alternative splicing is one of the major mechanisms through which the proteomic and functional diversity of eukaryotes is achieved. However, the complex nature of the splicing machinery, its associated splicing regulators and the functional implications of alternatively spliced transcripts are only poorly understood. Here, we investigated the functional role of the splicing regulator rbfox1 in vivo using the zebrafish as a model system. We found that loss of rbfox1 led to progressive cardiac contractile dysfunction and heart failure. By using deep-transcriptome sequencing and quantitative real-time PCR, we show that depletion of rbfox1 in zebrafish results in an altered isoform expression of several crucial target genes, such as actn3a and hug. This study underlines that tightly regulated splicing is necessary for unconstrained cardiac function and renders the splicing regulator rbfox1 an interesting target for investigation in human heart failure and cardiomyopathy. © 2015. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan
2017-08-01
The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns in massive origin-destination data. Firstly, we visually query the movement database for data within certain time windows. Secondly, we conduct interactive clustering, allowing users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g., distance threshold); the agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experimental results show the origin/destination hotspots and their spatial interaction patterns, and demonstrate the effectiveness of the proposed approach.
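A minimal sketch of the interactive-clustering step, applying agglomerative hierarchical clustering with a user-adjustable distance threshold to standardized origin-destination features; the synthetic data, feature scaling, and threshold value are illustrative, since the paper does not fix them:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row: origin x/y, destination x/y, trip distance, duration
# (synthetic stand-in for records queried from the movement database).
rng = np.random.default_rng(1)
trips = np.c_[rng.uniform(0, 10, (500, 4)),      # ox, oy, dx, dy
              rng.uniform(1, 20, 500),           # distance (km)
              rng.uniform(5, 60, 500)]           # duration (min)

# Standardize the user-selected features so no variable dominates.
feats = (trips - trips.mean(0)) / trips.std(0)

Z = linkage(feats, method="average")             # agglomerative hierarchy
labels = fcluster(Z, t=3.0, criterion="distance")  # cut at the threshold t
print(len(np.unique(labels)), "clusters")
```

Because the hierarchy Z is computed once, re-cutting it at a new threshold t is cheap, which is what makes the threshold slider interactive.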
Interfacing modeling suite Physics Of Eclipsing Binaries 2.0 with a Virtual Reality Platform
NASA Astrophysics Data System (ADS)
Harriett, Edward; Conroy, Kyle; Prša, Andrej; Klassner, Frank
2018-01-01
To explore alternate methods for modeling eclipsing binary stars, we build upon PHOEBE's (PHysics Of Eclipsing BinariEs) capabilities in a virtual reality (VR) environment to create an immersive and interactive experience for users. The application used is Vizard, a Python-scripted VR development platform for environments such as the Cave Automatic Virtual Environment (CAVE) and other off-the-shelf VR headsets. Vizard allows all modeling to be precompiled without compromising functionality or usability. The system requires five arguments to be precomputed using PHOEBE's Python front-end: the effective temperature, flux, relative intensity, vertex coordinates, and orbits; the user can opt to expose other PHOEBE features within the simulation as well. Here we present the method for making the data observables accessible in real time. An Oculus Rift will be available for a live showcase of VR rendering of various PHOEBE binary systems, including detached and contact binary stars.
On the Spectrum of the Plenoptic Function.
Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike
2014-02-01
The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.
Quantum information versus black hole physics: deep firewalls from narrow assumptions
NASA Astrophysics Data System (ADS)
Braunstein, Samuel L.; Pirandola, Stefano
2018-07-01
The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes `all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue `Foundations of quantum mechanics and their impact on contemporary society'.
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
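A minimal sketch of a radial signature of the kind described: intensities are sampled along rays cast outward from the image centroid, yielding a compact, rotation-indexed vector that could be fed to a decision tree; the ray count, sample count, and brightness rule are illustrative assumptions:

```python
import numpy as np

def radial_signature(img, n_rays=64, n_samples=32):
    """img: 2D grayscale array. For each of n_rays directions, record the
    distance from the intensity centroid to the outermost bright pixel
    along the ray -- a simple silhouette descriptor."""
    ys, xs = np.nonzero(img > img.mean())          # bright pixels
    cy, cx = ys.mean(), xs.mean()                  # intensity centroid
    max_r = min(img.shape) / 2 - 1
    sig = np.zeros(n_rays)
    for k, th in enumerate(np.linspace(0, 2*np.pi, n_rays, endpoint=False)):
        rs = np.linspace(0, max_r, n_samples)
        py = np.clip((cy + rs*np.sin(th)).astype(int), 0, img.shape[0]-1)
        px = np.clip((cx + rs*np.cos(th)).astype(int), 0, img.shape[1]-1)
        bright = img[py, px] > img.mean()
        sig[k] = rs[bright].max() if bright.any() else 0.0
    return sig / max_r                             # scale-normalized

# Sanity check: a filled disc gives a near-constant signature.
yy, xx = np.mgrid[:128, :128]
disc = ((yy-64)**2 + (xx-64)**2 < 30**2).astype(float)
print(radial_signature(disc).round(2))
```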
Frangioni, John V [Wayland, MA
2012-07-24
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.
Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco
2016-01-20
This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines that can be carried out by non-specialist maintenance operators without degrading system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide plant management with an effective, accurate real-time tool for immediate warning in case of critical events.
Tian, Xiaohe; Wang, Hui; Zhang, Qiong; Zhang, Mingzhu; Zhu, Yingzhong; Chen, Yan; Wu, Jieying; Tian, Yupeng
2018-05-30
Dual-site probes that target distinct subcellular compartments in the cytosol and the nucleus are an attractive approach; however, such probes have scarcely been reported. Herein, a series of small-molecule thiophene pyridium salt derivatives (MitoNuc1-4) possessing water solubility, high quantum yields and two-photon activity were rationally designed, and their structures were crystallographically confirmed. Systematic investigations of their photophysical and biological imaging properties were carried out. It was found that MitoNuc1-4 exhibit two-photon absorption in the near-infrared region, and that the membrane permeability and cationic nature of MitoNuc1 render it capable of double labelling of mitochondria and nucleolus in living cells with superb photo-stability and non-invasiveness. It was also demonstrated that MitoNuc1 can monitor mitochondrial division in living cells in real time and reveal nucleolar ultrastructure under stimulated emission depletion nanoscopy. Copyright © 2017 Elsevier B.V. All rights reserved.
Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M
2013-10-01
Increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive procedure over colonoscopy. Current MRI approaches assess rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
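A minimal sketch of the kind of higher-order statistical features mentioned, computing skewness, kurtosis, and an intensity-entropy measure over an image region at several smoothing scales; the scale choices and histogram binning are illustrative, not the authors' full feature set:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.ndimage import gaussian_filter

def region_features(region, scales=(0, 1, 2)):
    """region: 2D array of MRI intensities from a candidate region.
    Returns per-scale [mean, std, skewness, kurtosis, entropy]."""
    feats = []
    for s in scales:                     # multi-scale via Gaussian smoothing
        r = gaussian_filter(region, sigma=s) if s > 0 else region
        v = r.ravel()
        hist, _ = np.histogram(v, bins=32)
        p = hist[hist > 0] / hist.sum()
        entropy = -np.sum(p * np.log2(p))
        feats.extend([v.mean(), v.std(), skew(v), kurtosis(v), entropy])
    return np.array(feats)

# Example on a synthetic, positively skewed "lesion" patch.
rng = np.random.default_rng(2)
patch = rng.gamma(2.0, 1.0, (32, 32))
print(region_features(patch).round(3))
```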
A Framework for Voxel-Based Global Scale Modeling of Urban Environments
NASA Astrophysics Data System (ADS)
Gehrung, Joachim; Hebel, Marcus; Arens, Michael; Stilla, Uwe
2016-10-01
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but has disadvantages that are readily addressed by volumetric representations, especially with regard to selective data acquisition, change detection and fast-changing environments. This paper therefore proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and a suitable mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are demonstrated on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage and real-time rendering of large urban models are feasible, even with off-the-shelf hardware.
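A minimal sketch of the basic volumetric-import step such a framework builds on: quantizing range measurements into a sparse voxel grid with per-voxel hit counts, where a minimum-hit filter stands in for a data-quality criterion; the voxel size and counting scheme are illustrative simplifications of full occupancy mapping:

```python
import numpy as np
from collections import defaultdict

class SparseVoxelGrid:
    """Dictionary-backed sparse voxel grid: only observed voxels consume
    memory, which is what makes city-scale volumetric models tractable."""
    def __init__(self, voxel_size=0.2):
        self.voxel_size = voxel_size
        self.hits = defaultdict(int)

    def insert_points(self, points):
        """points: (N, 3) array of range measurements in metres."""
        keys = np.floor(points / self.voxel_size).astype(np.int64)
        for k in map(tuple, keys):
            self.hits[k] += 1

    def occupied(self, min_hits=2):
        """Voxel centres supported by at least min_hits measurements
        (a crude data-quality filter against stray returns)."""
        return np.array([(np.array(k) + 0.5) * self.voxel_size
                         for k, n in self.hits.items() if n >= min_hits])

# Example: 10,000 simulated laser returns from a wall-like surface.
rng = np.random.default_rng(3)
pts = np.c_[rng.uniform(0, 20, 10000), rng.uniform(0, 5, 10000),
            rng.normal(2.0, 0.02, 10000)]
grid = SparseVoxelGrid(0.2)
grid.insert_points(pts)
print(len(grid.occupied()), "occupied voxels")
```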