Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H
2014-02-07
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that, in general, optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
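As a rough illustration of the trade-off described above (pipeline startup latency versus resource utilization), the following Python sketch models the total time for rendering a sequence of volumes with the processors split into groups; all timings, the overhead term, and the function itself are assumptions for illustration, not the paper's model.

import math

def pipeline_time(num_steps, io_time, render_time_1proc, num_procs, num_groups,
                  overhead_per_proc=0.05):
    # Time to render one volume on one group (ideal speedup plus a toy overhead term).
    procs = num_procs // num_groups
    t_render = (render_time_1proc / procs) * (1 + overhead_per_proc * (procs - 1))
    if num_groups == 1:
        # No pipelining: each volume must be loaded and then rendered in turn.
        return num_steps * (io_time + t_render)
    # With several groups, loads for one time step overlap rendering of another;
    # steady-state throughput is one volume per max(io, render) interval per group.
    stage = max(io_time, t_render)
    rounds = math.ceil(num_steps / num_groups)
    startup = (num_groups - 1) * io_time            # filling the pipeline
    return startup + rounds * stage + t_render

# Sweep group counts to find the partitioning that balances utilization
# against startup latency (the trade-off discussed in the abstract).
for g in (1, 2, 4, 8, 16):
    print(g, round(pipeline_time(num_steps=64, io_time=2.0,
                                 render_time_1proc=120.0,
                                 num_procs=64, num_groups=g), 1))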
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
Real-time volume rendering of digital medical images on an iOS device
NASA Astrophysics Data System (ADS)
Noon, Christian; Holub, Joseph; Winer, Eliot
2013-03-01
Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still quite a difficult task. This is especially true for 3D volume rendering of digital medical images. Solving it would give medical personnel a powerful tool to diagnose and treat patients and train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
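The core of orthogonal texture slicing is back-to-front alpha blending of 2D slices taken along the axis closest to the view direction. The following is a minimal CPU-side Python sketch of that compositing step (the transfer function and array layout are assumptions; the paper's implementation uses GPU shaders).

import numpy as np

def composite_slices(volume, transfer_fn, axis=2):
    """volume: 3D array of scalars in [0, 1]; transfer_fn: slice -> (rgb, alpha)."""
    # Move the slicing axis to the front, then iterate far-to-near.
    vol = np.moveaxis(volume, axis, 0)
    h, w = vol.shape[1], vol.shape[2]
    image = np.zeros((h, w, 3))
    for slice_ in vol[::-1]:                    # back to front
        rgb, alpha = transfer_fn(slice_)        # per-pixel colour and opacity
        # "Over" blending: new colour covers what is already behind it.
        image = rgb * alpha[..., None] + image * (1.0 - alpha[..., None])
    return image

def simple_tf(s):
    # Grayscale colour, opacity proportional to intensity (assumed transfer function).
    rgb = np.stack([s, s, s], axis=-1)
    return rgb, 0.2 * s

vol = np.random.rand(64, 64, 64)
img = composite_slices(vol, simple_tf)
print(img.shape, float(img.min()), float(img.max()))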
Hybrid rendering of the chest and virtual bronchoscopy [corrected].
Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D
2000-10-30
Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.
An Analysis of Scalable GPU-Based Ray-Guided Volume Rendering
Fogal, Thomas; Schiewe, Alexander; Krüger, Jens
2014-01-01
Volume rendering continues to be a critical method for analyzing large-scale scalar fields, in disciplines as diverse as biomedical engineering and computational fluid dynamics. Commodity desktop hardware has struggled to keep pace with data size increases, challenging modern visualization software to deliver responsive interactions for O(N3) algorithms such as volume rendering. We target the data type common in these domains: regularly-structured data. In this work, we demonstrate that the major limitation of most volume rendering approaches is their inability to switch the data sampling rate (and thus data size) quickly. Using a volume renderer inspired by recent work, we demonstrate that the actual amount of visualizable data for a scene is typically bounded considerably below the memory available on a commodity GPU. Our instrumented renderer is used to investigate design decisions typically swept under the rug in volume rendering literature. The renderer is freely available, with binaries for all major platforms as well as full source code, to encourage reproduction and comparison with future research. PMID:25506079
Resolution-independent surface rendering using programmable graphics hardware
Loop, Charles T.; Blinn, James Frederick
2008-12-16
Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
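The distinction between MIP and volume rendering can be made concrete with a one-ray example: MIP keeps only the brightest voxel along the ray, while compositing accumulates weighted contributions from every voxel. A minimal sketch, with assumed sample values and opacities:

import numpy as np

def mip(samples):
    # Maximum-intensity projection: the single brightest sample wins.
    return float(np.max(samples))

def composite(samples, opacities):
    """Front-to-back emission/absorption compositing along one ray."""
    colour, transparency = 0.0, 1.0
    for s, a in zip(samples, opacities):
        colour += transparency * a * s
        transparency *= (1.0 - a)
        if transparency < 1e-3:      # early ray termination
            break
    return colour

ray = np.array([0.1, 0.3, 0.9, 0.2, 0.8])
print("MIP:", mip(ray))
print("Volume rendered:", composite(ray, opacities=0.3 * ray))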
Real-time photorealistic stereoscopic rendering of fire
NASA Astrophysics Data System (ADS)
Rose, Benjamin M.; McAllister, David F.
2007-02-01
We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static, fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that our method based on billboarding is effective for attaining real-time frame rates. Slicing is used to simulate depth. Texture mapping is used to map 2D images onto polygons, and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization, and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find that software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
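For reference, the strong- and weak-scaling efficiencies used in such case studies can be computed as below; the timings in the example are hypothetical, not results from the paper.

def strong_scaling_efficiency(t1, tp, p):
    # Fixed total problem size: ideal time at p processes is t1 / p.
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    # Problem size grows with p (fixed size per process): ideal time stays at t1.
    return t1 / tp

# Hypothetical timings: 1 process takes 120 s, 64 processes take 2.4 s (strong);
# per-process workload held fixed, baseline 1.2 s vs. 1.5 s at scale (weak).
print(strong_scaling_efficiency(120.0, 2.4, 64))   # ~0.78
print(weak_scaling_efficiency(1.2, 1.5))           # 0.8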
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing light paths through the image plane; it can simulate complicated optical phenomena such as refraction, depth of field, and motion blur. Compared with rasterization, ray tracing can achieve more realistic rendering results, but at a much greater computational cost: even simple scenes can take a long time to render. With improvements in GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly on the fragment shader, mainly comprising surface intersection, importance sampling, and progressive rendering. With the help of the GPU's powerful throughput, it achieves real-time rendering of simple scenes.
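Progressive rendering, as mentioned in the abstract, typically keeps a running average of per-pass samples so the displayed image refines over time. A small Python sketch of that accumulation (the render_pass stand-in and all parameters are assumptions):

import numpy as np

def render_pass(height, width, rng):
    # Stand-in for one fragment-shader pass producing noisy path-traced samples.
    return rng.random((height, width, 3))

def progressive_render(passes, height=64, width=64, seed=0):
    rng = np.random.default_rng(seed)
    accum = np.zeros((height, width, 3))
    for n in range(1, passes + 1):
        sample = render_pass(height, width, rng)
        accum += (sample - accum) / n      # incremental mean, no extra buffer needed
        yield accum                        # image displayed after pass n

for image in progressive_render(passes=8):
    pass
print(float(image.mean()))   # approaches 0.5 as the estimate converges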
HDlive rendering images of the fetal stomach: a preliminary report.
Inubashiri, Eisuke; Abe, Kiyotaka; Watanabe, Yukio; Akutagawa, Noriyuki; Kuroki, Katumaru; Sugawara, Masaki; Maeda, Nobuhiko; Minami, Kunihiro; Nomura, Yasuhiro
2015-01-01
This study aimed to show reconstruction of the fetal stomach using the HDlive rendering mode in ultrasound. Seventeen healthy singleton fetuses at 18-34 weeks' gestational age were observed using the HDlive rendering mode of ultrasound in utero. In all of the fetuses, we identified specific spatial structures, including macroscopic anatomical features (e.g., the pylorus, cardia, fundus, and greater curvature) of the fetal stomach, using the HDlive rendering mode. In particular, HDlive rendering images showed remarkably fine details that appeared as if they were being viewed under an endoscope, with visible rugal folds after 27 weeks' gestational age. Our study suggests that the HDlive rendering mode can be used as an additional method for evaluating the fetal stomach. The HDlive rendering mode shows detailed 3D structural images and anatomically realistic images of the fetal stomach. This technique may be effective in prenatal diagnosis for examining detailed information of fetal organs.
Enhancement method for rendered images of home decoration based on SLIC superpixels
NASA Astrophysics Data System (ADS)
Dai, Yutong; Jiang, Xiaotong
2018-04-01
Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images of home decoration designs rely heavily on renderer parameters and scene lighting, most rendered images in this industry require further optimization afterwards. To reduce workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, these selected areas are merged with the entire image. Experimental results show that the proposed method effectively enhances the rendered images when compared with some existing algorithms. Moreover, the proposed strategy proves adaptable, especially for images with obvious bright parts.
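A plausible sketch of the SLIC-based step, not the authors' code: segment the rendered image into superpixels, pick those whose mean brightness exceeds a threshold, and adjust only those regions before merging them back into the image. The threshold, gain, and enhancement operator below are assumed values.

import numpy as np
from skimage import img_as_float
from skimage.segmentation import slic

def enhance_bright_regions(image, n_segments=200, brightness_thresh=0.75, gain=0.85):
    img = img_as_float(image)
    # SLIC superpixel segmentation of the rendered image.
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=1)
    luminance = img.mean(axis=2)
    out = img.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        if luminance[mask].mean() > brightness_thresh:
            # Compress highlights in over-bright superpixels (assumed enhancement op).
            out[mask] = 1.0 - (1.0 - out[mask]) ** gain
    return out

test = np.clip(np.random.rand(128, 128, 3) + 0.3, 0, 1)
result = enhance_bright_regions(test)
print(result.shape)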
31 CFR 515.548 - Services rendered by Cuba to United States aircraft.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Payment to Cuba of charges for services rendered by Cuba in connection...
31 CFR 515.548 - Services rendered by Cuba to United States aircraft.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Payment to Cuba of charges for services rendered by Cuba in connection...
9 CFR 314.5 - Inedible rendered fats prepared at official establishments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Inedible rendered fats prepared at... PRODUCTS AT OFFICIAL ESTABLISHMENTS § 314.5 Inedible rendered fats prepared at official establishments. Except as provided in § 325.11(b) of this subchapter, rendered animal fat derived from condemned or other...
9 CFR 319.703 - Rendered animal fat or mixture thereof.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Rendered animal fat or mixture thereof... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Fats, Oils, Shortenings § 319.703 Rendered animal fat or mixture thereof. “Rendered Animal Fat,” or any mixture of fats...
9 CFR 314.5 - Inedible rendered fats prepared at official establishments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Inedible rendered fats prepared at... PRODUCTS AT OFFICIAL ESTABLISHMENTS § 314.5 Inedible rendered fats prepared at official establishments. Except as provided in § 325.11(b) of this subchapter, rendered animal fat derived from condemned or other...
9 CFR 314.5 - Inedible rendered fats prepared at official establishments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Inedible rendered fats prepared at... PRODUCTS AT OFFICIAL ESTABLISHMENTS § 314.5 Inedible rendered fats prepared at official establishments. Except as provided in § 325.11(b) of this subchapter, rendered animal fat derived from condemned or other...
9 CFR 319.703 - Rendered animal fat or mixture thereof.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Rendered animal fat or mixture thereof... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Fats, Oils, Shortenings § 319.703 Rendered animal fat or mixture thereof. “Rendered Animal Fat,” or any mixture of fats...
9 CFR 314.5 - Inedible rendered fats prepared at official establishments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Inedible rendered fats prepared at... PRODUCTS AT OFFICIAL ESTABLISHMENTS § 314.5 Inedible rendered fats prepared at official establishments. Except as provided in § 325.11(b) of this subchapter, rendered animal fat derived from condemned or other...
9 CFR 319.703 - Rendered animal fat or mixture thereof.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Rendered animal fat or mixture thereof... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Fats, Oils, Shortenings § 319.703 Rendered animal fat or mixture thereof. “Rendered Animal Fat,” or any mixture of fats...
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic for the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
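Block-based transform coding of the kind described generally applies an orthonormal transform per block, quantizes the coefficients, and reverses both steps at decode time. A toy 2D sketch (the paper's scheme, including its consolidated dequantization and block classification, is more elaborate):

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis, built explicitly so only NumPy is needed.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def encode_block(block, q_step):
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                         # 2D separable transform
    return np.round(coeffs / q_step).astype(np.int16)  # quantize for storage

def decode_block(q_coeffs, q_step):
    D = dct_matrix(q_coeffs.shape[0])
    coeffs = q_coeffs.astype(np.float64) * q_step    # dequantize
    return D.T @ coeffs @ D                          # inverse transform

block = np.random.rand(8, 8)
restored = decode_block(encode_block(block, q_step=0.02), q_step=0.02)
print(float(np.abs(block - restored).max()))         # small reconstruction error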
An improved method of continuous LOD based on fractal theory in terrain rendering
NASA Astrophysics Data System (ADS)
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, 3D terrain rendering has become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper gives an improved method of terrain rendering that extends the traditional continuous level-of-detail technique using fractal theory. With this method, the program need not repeatedly rebuild terrain models of different resolutions in memory; instead, it obtains the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape and increases the speed of real-time 3D terrain rendering.
Foundations for Measuring Volume Rendering Quality
NASA Technical Reports Server (NTRS)
Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
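A minimal example of the kind of difference metrics such a methodology might include; the specific functions below are illustrative, not the paper's suite.

import numpy as np

def difference_report(image_a, image_b):
    """Quantify the difference between two volume-rendered images of equal size."""
    a = np.asarray(image_a, dtype=np.float64)
    b = np.asarray(image_b, dtype=np.float64)
    diff = a - b
    return {
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "max_abs_error": float(np.max(np.abs(diff))),
        "pct_pixels_differing": float(np.mean(np.any(np.abs(diff) > 1e-6, axis=-1)) * 100),
    }

benchmark = np.random.rand(256, 256, 3)                       # stand-in reference image
candidate = benchmark + np.random.normal(scale=0.01, size=benchmark.shape)
print(difference_report(benchmark, candidate))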
A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight
NASA Astrophysics Data System (ADS)
Yao, C.; Peng, G.; Song, Y.; Duan, M.
2017-09-01
The increase in urban 3D model precision and data quantity puts forward higher requirements for real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduling method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and rendering scheduling are performed according to these weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives in order to generate LOD models automatically. Using the algorithm proposed in this paper, a 3D urban planning and management application was developed; practice has shown that the algorithm is efficient and feasible, with the rendering frame rate for both large and small scenes stable at around 25 frames per second.
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Finally, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
Method and system for rendering and interacting with an adaptable computing environment
Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM
2012-06-12
An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.
Andrievskaia, Olga; Tangorra, Erin
2014-12-01
Contamination of rendered animal by-products with central nervous system tissues (CNST) from animals with bovine spongiform encephalopathy is considered one of the vehicles of disease transmission. Removal from the animal feed chain of CNST originating from cattle of a specified age category, species-labeling of rendered meat products, and testing of rendered products for bovine CNST are tasks associated with the epidemiological control of bovine spongiform encephalopathy. A single-step TaqMan real-time reverse transcriptase (RRT) PCR assay was developed and evaluated for specific detection of bovine glial fibrillary acidic protein (GFAP) mRNA, a biomarker of bovine CNST, in rendered animal by-products. An internal amplification control, mammalian β-actin mRNA, was coamplified in the duplex RRT-PCR assay to monitor amplification efficiency, normalize amplification signals, and avoid false-negative results. The functionality of the GFAP mRNA RRT-PCR was assessed through analysis of laboratory-generated binary mixtures of bovine central nervous system (CNS) and muscle tissues treated under various thermal settings imitating industrial conditions. The assay was able to detect as little as 0.05% (wt/wt) bovine brain tissue in binary mixtures heat treated at 110 to 130°C for 20 to 60 min. Further evaluation of the GFAP mRNA RRT-PCR assay involved samples of industrial rendered products of various species origin and composition obtained from commercial sources and rendering plants. Low amounts of bovine GFAP mRNA were detected in several bovine-rendered products, which was in agreement with the declared species composition. An accurate estimation of CNS tissue content in industrial rendered products was complicated by the wide range of temperature and time settings in rendering protocols. Nevertheless, the GFAP mRNA RRT-PCR assay may be considered for bovine CNS tissue detection in rendered products in combination with other available tools (for example, animal age verification) in inspection programs.
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
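The distance-based rendering strategy can be sketched as mapping a depth image onto the simulated 15 x 18 electrode grid, with nearer surfaces brighter and everything beyond the viewing-distance limit dark. The code below is an illustrative reconstruction with assumed parameters, not the authors' implementation.

import numpy as np

def distance_based_rendering(depth_map, grid=(15, 18), max_distance=6.0):
    """Convert a depth map (metres) into per-electrode phosphene brightness."""
    h, w = depth_map.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = depth_map[i * h // gh:(i + 1) * h // gh,
                             j * w // gw:(j + 1) * w // gw]
            d = np.median(cell)                       # robust per-electrode depth
            # Nearer is brighter; beyond the limit the electrode stays off.
            out[i, j] = max(0.0, 1.0 - d / max_distance)
    return out

depth = np.random.uniform(0.5, 10.0, size=(240, 320))   # synthetic scene, metres
print(distance_based_rendering(depth).round(2))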
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
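The central rendering step, stacking the 2D binary slices into a volume and extracting a displayable surface, can be sketched as follows. The paper works in MATLAB; this Python version using marching cubes is an assumed equivalent, not the authors' code.

import numpy as np
from skimage import measure

def surface_from_slices(binary_slices, spacing=(1.0, 1.0, 1.0)):
    """binary_slices: iterable of 2D 0/1 arrays, one per confocal slice."""
    volume = np.stack(binary_slices, axis=0).astype(np.float32)
    # Marching cubes at the 0.5 level recovers the boundary of the segmented object.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5, spacing=spacing)
    return verts, faces, normals

# Synthetic example: a sphere "segmented" into 40 binary slices.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
sphere = ((xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2) < 12 ** 2
verts, faces, normals = surface_from_slices(sphere)   # a 3D mask iterates as 2D slices
print(len(verts), "vertices,", len(faces), "triangles")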
NASA Astrophysics Data System (ADS)
Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui
2018-05-01
A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
Fast algorithm for the rendering of three-dimensional surfaces
NASA Astrophysics Data System (ADS)
Pritt, Mark D.
1994-02-01
It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
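One common way to shade a surface defined by height data is to estimate per-pixel normals from the elevation gradients and apply diffuse (Lambertian) lighting. The sketch below illustrates that general idea; it is not the paper's algorithm.

import numpy as np

def shade_heightfield(height, light=(0.5, 0.5, 0.7), z_scale=1.0):
    """height: 2D array of elevations; returns a 2D array of diffuse shading in [0, 1]."""
    dz_dy, dz_dx = np.gradient(height * z_scale)
    # The surface normal of z = f(x, y) is (-df/dx, -df/dy, 1), normalized per pixel.
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)   # N . L, clamped to [0, 1]

surface = np.fromfunction(lambda y, x: np.sin(x / 30.0) * np.cos(y / 30.0), (500, 500))
img = shade_heightfield(surface, z_scale=20.0)
print(img.shape, float(img.min()), float(img.max()))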
Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model
NASA Astrophysics Data System (ADS)
Wang, Yifan; Li, Weiran; Zhu, Qing
2018-04-01
This paper presents a real-time rendering method, based on the GPU programmable pipeline, for rendering 3D scenes in the ink wash painting style. The method is divided into three main parts. First, the ink properties of the 3D model are derived by calculating its vertex curvature. Then, the ink properties are cached in a paper structure, and an ink dispersion model, defined with reference to the theory of porous media, is used to simulate the dispersion of ink. Finally, the ink properties are converted to pixel color information and rendered to the screen. This method achieves better visual quality than previous methods.
Evaluating progressive-rendering algorithms in appearance design tasks.
Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio
2013-01-01
Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Ogiela, Marek R.
2012-10-01
The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of: regions of perfusion abnormalities, anatomy atlas descriptions of brain tissues, measures of perfusion parameters, and prognosis for infarcted tissues. That information is superimposed onto volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering large volumes on off-the-shelf hardware. This portability of the rendering solution is very important because our framework can be run without expensive dedicated hardware. The other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading off image quality for rendering speed. Such rendered, high-quality visualizations may be further used for intelligent brain perfusion abnormality identification and computer-aided diagnosis of selected types of pathologies.
Quantitative Evaluation of a Planetary Renderer for Terrain Relative Navigation
NASA Astrophysics Data System (ADS)
Amoroso, E.; Jones, H.; Otten, N.; Wettergreen, D.; Whittaker, W.
2016-11-01
A ray-tracing computer renderer tool is presented based on LOLA and LROC elevation models and is quantitatively compared to LRO WAC and NAC images for photometric accuracy. We investigated using rendered images for terrain relative navigation.
The physics of volume rendering
NASA Astrophysics Data System (ADS)
Peters, Thomas
2014-11-01
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
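The link between radiation transfer and volume rendering is the discretized emission/absorption integral along each ray. A minimal sketch of that integration (sample data and step size are arbitrary):

import numpy as np

def integrate_ray(emission, absorption, step):
    """emission, absorption: per-sample source term and extinction coefficient."""
    intensity, transmittance = 0.0, 1.0
    for e, kappa in zip(emission, absorption):
        alpha = 1.0 - np.exp(-kappa * step)     # opacity of one ray segment
        intensity += transmittance * alpha * e  # attenuated emission reaching the eye
        transmittance *= (1.0 - alpha)
    return intensity, transmittance

rng = np.random.default_rng(1)
em, ab = rng.random(200), 0.5 * rng.random(200)
I, T = integrate_ray(em, ab, step=0.05)
print(f"intensity={I:.3f}, remaining transmittance={T:.3f}")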
Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine
NASA Astrophysics Data System (ADS)
Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.
2017-12-01
Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal the variation process and trend of a geographical phenomenon vividly and comprehensively. Dealing with the challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine handling vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing, and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented to solve this problem based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast amounts of dynamic target data. A prototype of the high-performance GIS dynamic objects rendering engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the engine delivers high performance: rendering two-dimensional and three-dimensional dynamic objects is about 20 times faster on the GPU than on the CPU.
Desorption of biocides from renders modified with acrylate and silicone.
Styszko, Katarzyna; Bollmann, Ulla E; Wangler, Timothy P; Bester, Kai
2014-01-01
Biocides are used in the building industry to prevent algal, bacterial and fungal growth on polymeric renders and thus to protect buildings. However, these biocides are leached into the environment. To better understand this leaching, the sorption/desorption of biocides in polymeric renders was assessed. In this study the desorption constants of cybutryn, carbendazim, iodocarb, isoproturon, diuron, dichloro-N-octylisothiazolinone and tebuconazole towards acrylate- and silicone-based renders were assessed at different pH values. At pH 9.5 (porewater) the constants for an acrylate-based render varied between 8 (isoproturon) and 9634 (iodocarb) and 3750 (dichloro-N-octylisothiazolinone), respectively. The values changed drastically with pH value. The results for the silicone-based renders were in a similar range, and usually the compounds with high sorption constants for one polymer also had high values for the other polymer. Comparison of the octanol-water partitioning constants (Kow) with the render/water partitioning constants (Kd) revealed similarities, but no strong correlation. Adding higher amounts of polymer to the render material changed the equilibria for dichloro-N-octylisothiazolinone, tebuconazole, cybutryn, and carbendazim, but not for isoproturon and diuron. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Cache Design Method for Spatial Information Visualization in 3D Real-Time Rendering Engine
NASA Astrophysics Data System (ADS)
Dai, X.; Xiong, H.; Zheng, X.
2012-07-01
A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more obvious as the amount of visualization data grows. Caches are the basis on which a 3D real-time rendering engine can browse smoothly through data that is out of core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered in the engine; the data that is dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; and the data that is eliminated from the previous caches is stored in the elimination cache and is then written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the length limit (128 MB in the experiment), no items are eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: they load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for rendering the next few frames, and load data into the elimination cache when it is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The other thread moves the data in the memory and disk caches according to the adding and deleting lists, creates download requests when data indexed in the adding list cannot be found in either the memory cache or the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described above proved reliable and efficient in our experiment, and data loading time and file I/O time decreased sharply, especially as the rendering data grew larger.
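A highly simplified sketch of the three-level memory cache described above; the class, method names, and eviction policy are assumptions for illustration, and the threading, disk-file management, and download requests are omitted.

from collections import OrderedDict

class TileCache:
    """Toy model: tiles move from the pre-rendering cache into the rendering cache
    when requested, and evicted tiles pass through an elimination cache before
    being handed off to the disk cache."""
    def __init__(self):
        self.rendering = OrderedDict()     # tiles currently being drawn
        self.prerendering = OrderedDict()  # tiles predicted from viewpoint motion
        self.elimination = []              # tiles queued for the disk cache

    def prefetch(self, key, tile):
        self.prerendering[key] = tile

    def request(self, key):
        """Called by the render loop; returns the tile or None (must be downloaded)."""
        if key in self.rendering:
            self.rendering.move_to_end(key)       # mark as recently used
            return self.rendering[key]
        if key in self.prerendering:
            self.rendering[key] = self.prerendering.pop(key)
            return self.rendering[key]
        return None

    def evict_invisible(self, visible_keys):
        for key in [k for k in self.rendering if k not in visible_keys]:
            self.elimination.append((key, self.rendering.pop(key)))

    def flush_to_disk(self, write_fn):
        while self.elimination:
            write_fn(*self.elimination.pop())

cache = TileCache()
cache.prefetch(("tile", 3, 5), b"...tile bytes...")
print(cache.request(("tile", 3, 5)) is not None)
cache.evict_invisible(visible_keys=set())
cache.flush_to_disk(lambda key, tile: print("writing", key))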
1. Photocopy of early 20th century rendering showing aerial view, ...
1. Photocopy of early 20th century rendering showing aerial view, looking south. Rendering owned by the Crawford Auto-Aviation Museum, 10825 East Blvd., Cleveland, Ohio. - Peerless Motor Car Company, East Ninety-third Street & Quincy Avenue, Cleveland, Cuyahoga County, OH
PRISM: An open source framework for the interactive design of GPU volume rendering shaders.
Drouin, Simon; Collins, D Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
Processing-in-Memory Enabled Graphics Processors for 3D Rendering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Chenhao; Song, Shuaiwen; Wang, Jing
2017-02-06
The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory based GPUs for efficient 3D rendering.
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data are larger than the texture memory capacity by reducing the amount of texture data required. This coherence also allows improved speed by rendering flat-shaded polygons instead of textured polygons where appropriate, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which identify coherent regions more accurately than the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
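The color-based error metrics themselves are not spelled out in the abstract. As a hedged illustration of the general idea, the sketch below estimates the coherence of a volume region by mapping its voxels through a transfer function and measuring the maximum color deviation from the region's mean color; such a value could then be compared against a user tolerance when deciding whether a region may be drawn flat-shaded. The transfer function and threshold are placeholders.

```python
import numpy as np

def color_error(region_scalars, transfer_function):
    """Maximum per-channel deviation from the mean color of a region.

    region_scalars    : array of scalar voxel values in the region
    transfer_function : callable mapping scalars to an (N, 4) RGBA array
    Returns a single error value; small values indicate the region is
    coherent enough to render with a constant (flat-shaded) color.
    """
    rgba = transfer_function(np.ravel(region_scalars))
    mean = rgba.mean(axis=0)
    return np.abs(rgba - mean).max()

# Toy transfer function: grayscale ramp with a linear opacity ramp.
tf = lambda s: np.stack([s, s, s, s], axis=1)

region = np.random.default_rng(0).uniform(0.40, 0.45, size=1000)
print(color_error(region, tf))   # small value -> coherent region
```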
Elasticity-based three dimensional ultrasound real-time volume rendering
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.
2009-02-01
Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Their use, however, has been hindered by the lack of real-time visualization methods capable of producing high quality 3D renderings of the target/surface of interest. Volume rendering is a known visualization method, which can display clear surfaces out of the acquired volumetric data, and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetrics and angiography applications, where setting these opacity functions can be done rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
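The exact opacity formulation is not given in the abstract; the following numpy sketch is only one plausible reading of mixing a strain-derived term with a conventional gradient-based term. The blend weight, the strain ramp, and the assumption that low strain (stiff tissue) should map to high opacity are all illustrative choices, not the authors'.

```python
import numpy as np

def mixed_opacity(bmode, strain, w=0.5, soft_strain=0.02):
    """Combine strain-derived and gradient-derived opacities per voxel.

    bmode       : 3-D array of B-mode intensities
    strain      : 3-D array of strain magnitudes from the elasticity volume
    w           : blend weight between the two opacity sources (assumed)
    soft_strain : strain above this value is treated as soft tissue (assumed)
    """
    # Strain-based term: low strain (stiff tissue) -> high opacity.
    opacity_strain = np.clip(1.0 - strain / soft_strain, 0.0, 1.0)

    # Conventional gradient-magnitude term from the B-mode volume.
    gx, gy, gz = np.gradient(bmode.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    opacity_grad = grad_mag / (grad_mag.max() + 1e-9)

    return w * opacity_strain + (1.0 - w) * opacity_grad
```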
49 CFR 178.815 - Stacking test.
Code of Federal Regulations, 2013 CFR
2013-10-01
... deformation, which renders the IBC unsafe for transportation, and no loss of contents. (2) For fiberboard and wooden IBCs, there may be no loss of contents and no permanent deformation, which renders the whole IBC..., which renders the IBC unsafe for transportation, and no loss of contents. (4) For the dynamic...
49 CFR 178.815 - Stacking test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... deformation, which renders the IBC unsafe for transportation, and no loss of contents. (2) For fiberboard and wooden IBCs, there may be no loss of contents and no permanent deformation, which renders the whole IBC..., which renders the IBC unsafe for transportation, and no loss of contents. (4) For the dynamic...
49 CFR 178.815 - Stacking test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... deformation, which renders the IBC unsafe for transportation, and no loss of contents. (2) For fiberboard and wooden IBCs, there may be no loss of contents and no permanent deformation, which renders the whole IBC..., which renders the IBC unsafe for transportation, and no loss of contents. (4) For the dynamic...
49 CFR 178.815 - Stacking test.
Code of Federal Regulations, 2014 CFR
2014-10-01
... deformation, which renders the IBC unsafe for transportation, and no loss of contents. (2) For fiberboard and wooden IBCs, there may be no loss of contents and no permanent deformation, which renders the whole IBC..., which renders the IBC unsafe for transportation, and no loss of contents. (4) For the dynamic...
7 CFR 53.17 - Advance information concerning service rendered.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Advance information concerning service rendered. 53.17... concerning service rendered. Upon request of any applicant, all or any part of the contents of any certificate issued to him under the regulations, or other notification concerning the determination of class...
7 CFR 54.15 - Advance information concerning service rendered.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Advance information concerning service rendered. 54.15... Service § 54.15 Advance information concerning service rendered. Upon request of any applicant, all or any... concerning the determination of class, grade, other quality, or compliance of products for such applicant may...
Multi- and hyperspectral scene modeling
NASA Astrophysics Data System (ADS)
Borel, Christoph C.; Tuttle, Ronald F.
2011-06-01
This paper shows how to use a public domain raytracer, POV-Ray (Persistence Of Vision Raytracer), to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
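The band-by-band scripting idea can be sketched as follows: generate one POV-Ray scene per spectral band with the band's reflectance substituted into a template, then render each band to a separate grayscale image. The reflectance table, file names, and the included canopy geometry are placeholders (the script assumes povray is installed), not material from the paper.

```python
import subprocess

# Hypothetical per-band leaf reflectance values (fraction of incident light).
leaf_reflectance = {"band_0550nm": 0.10, "band_0670nm": 0.05, "band_0860nm": 0.45}

SCENE_TEMPLATE = """
#declare LeafFinish = finish {{ diffuse {reflectance} }}
global_settings {{ radiosity {{ }} }}      // multiple reflections via radiosity
#include "canopy_geometry.inc"             // placeholder canopy description
"""

for band, refl in leaf_reflectance.items():
    scene_file = f"canopy_{band}.pov"
    with open(scene_file, "w") as f:
        f.write(SCENE_TEMPLATE.format(reflectance=refl))
    # Render one image per spectral band using the POV-Ray command line.
    subprocess.run(["povray", f"+I{scene_file}", f"+O{band}.png",
                    "+W512", "+H512"], check=True)
```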
Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp.
Bayr, S; Ojanperä, M; Kaparaju, P; Rintala, J
2014-10-01
In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55°C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH4-N and/or free NH3) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m(3)d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm(3)/kg VS(fed). On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500-680 dm(3)/kg VS(fed)). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult-to-treat industrial waste materials. Copyright © 2014 Elsevier Ltd. All rights reserved.
Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates
NASA Astrophysics Data System (ADS)
Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.
2002-03-01
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects, for example in virtual surgery or 4D ultrasound. It is difficult to render 4D images by conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
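The brick-level coherence test is not spelled out in the abstract; the sketch below illustrates one plausible form, skipping the texture upload when a brick's mean absolute difference against the previously loaded brick falls below a tolerance. The upload_brick_texture call stands in for the OpenGL texture-definition path and, like the tolerance value, is a placeholder.

```python
import numpy as np

def update_bricks(new_volume, cached_bricks, brick=32, tol=2.0):
    """Upload only the bricks of a new time step that changed noticeably.

    new_volume    : (Z, Y, X) uint8 array for the current time step
    cached_bricks : dict mapping brick origin -> previously loaded brick data
    tol           : mean absolute intensity difference below which the brick
                    already resident in texture memory is reused (assumed)
    """
    uploads = 0
    for z in range(0, new_volume.shape[0], brick):
        for y in range(0, new_volume.shape[1], brick):
            for x in range(0, new_volume.shape[2], brick):
                key = (z, y, x)
                data = new_volume[z:z+brick, y:y+brick, x:x+brick]
                old = cached_bricks.get(key)
                if old is None or np.mean(np.abs(data.astype(int) - old.astype(int))) > tol:
                    upload_brick_texture(key, data)   # placeholder for the GL upload
                    cached_bricks[key] = data
                    uploads += 1
    return uploads

def upload_brick_texture(origin, data):
    pass  # stands in for defining/updating a 3-D texture via OpenGL
```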
High-fidelity real-time maritime scene rendering
NASA Astrophysics Data System (ADS)
Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin
2011-06-01
The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
High-quality slab-based intermixing method for fusion rendering of multiple medical objects.
Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil
2016-01-01
The visualization of multiple 3D objects is increasingly required in recent medical applications. Due to heterogeneity in data representation or data configuration, it is difficult to render multiple medical objects efficiently and in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects while preserving real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with a finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality than conventional approaches. In addition, the proposed intermixing scheme provides high-quality results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies show that the proposed method offers the advantages of rendering independence and reusability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.
Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny
2016-11-01
Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation as the complexity of anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering ability of four different haptic hand controllers was analyzed and compared to find the best commercial haptic hand controller. A test rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both the time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback under different drilling conditions. This method can be used to assess the vibration rendering ability of VR simulation systems and to select ideal haptic devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
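As a hedged illustration of the two fidelity measures named above, the snippet below computes magnitude-squared coherence and a normalized peak cross-correlation between an input and a rendered vibration signal using SciPy. The sampling rate, test signals, and the 1 kHz summary band are made up for the example.

```python
import numpy as np
from scipy.signal import coherence

fs = 10_000                                   # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
drill = np.sin(2 * np.pi * 180 * t)           # stand-in for recorded drill vibration
rendered = 0.8 * np.sin(2 * np.pi * 180 * t + 0.3) + 0.05 * np.random.randn(t.size)

# Magnitude-squared coherence as a function of frequency (0..1 per bin).
f, cxy = coherence(drill, rendered, fs=fs, nperseg=1024)
print("mean coherence below 1 kHz:", cxy[f < 1000].mean())

# Normalized peak cross-correlation as a single time-domain fidelity score.
xc = np.correlate(drill - drill.mean(), rendered - rendered.mean(), mode="full")
norm = np.sqrt(np.sum((drill - drill.mean())**2) * np.sum((rendered - rendered.mean())**2))
print("peak normalized cross-correlation:", xc.max() / norm)
```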
49 CFR Schedule F to Subpart B of... - Affiliate Revenue Data for Services Rendered
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 8 2010-10-01 2010-10-01 false Affiliate Revenue Data for Services Rendered F...—Affiliate Revenue Data for Services Rendered [Dollars in thousands] () Greyhound Lines, Inc. () Trailways combined () All study carriers Line No. and Item (a) Calendar year 19__ (b) Calender year 19__ (c) Base...
31 CFR 515.548 - Services rendered by Cuba to United States aircraft.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Specific licenses are issued for payment to Cuba of charges for services...
31 CFR 515.548 - Services rendered by Cuba to United States aircraft.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Specific licenses are issued for payment to Cuba of charges for services...
31 CFR 515.548 - Services rendered by Cuba to United States aircraft.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Specific licenses are issued for payment to Cuba of charges for services...
9 CFR 314.5 - Inedible rendered fats prepared at official establishments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... edible product, either with low grade offal during the rendering or by adding to, and mixing thoroughly... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Inedible rendered fats prepared at official establishments. 314.5 Section 314.5 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE...
NASA Astrophysics Data System (ADS)
Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich
2015-01-01
Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
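As a hedged illustration of the third-order reconstruction such a framework supports, the snippet below evaluates a 1-D Catmull-Rom cubic filter from four neighbouring samples. The actual framework applies such kernels in up to three dimensions with cache-optimized (SoA) layouts, which is not reproduced here.

```python
import numpy as np

def catmull_rom_1d(samples, x):
    """Third-order (cubic) reconstruction of a 1-D sample array at position x.

    samples : 1-D array of uniformly spaced samples
    x       : continuous sample coordinate (0 <= x <= len(samples) - 1)
    """
    i = int(np.floor(x))
    t = x - i
    # Clamp the four-tap neighbourhood at the array borders.
    p0, p1, p2, p3 = (samples[np.clip(i + k, 0, len(samples) - 1)] for k in (-1, 0, 1, 2))
    # Catmull-Rom basis evaluated at fractional offset t.
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

data = np.array([0.0, 1.0, 0.5, 0.8, 0.2])
print(catmull_rom_1d(data, 2.25))   # smoother than first-order (linear) interpolation
```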
Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects
NASA Astrophysics Data System (ADS)
Beddiaf, Ali; Babahenini, Mohamed Chaouki
2018-03-01
Recent interactive rendering approaches aim to produce images efficiently. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly supported or not supported at all). To remedy this issue, in this paper we propose a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no transformation of particles into a grid is required. This feature allows it to handle many particle types (water, bubble, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
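View compensation exploits the known camera geometry rather than estimating motion. A simplified sketch of that idea, assuming a pinhole camera model and a per-pixel depth buffer at the previous viewpoint (neither of which is described in the abstract), reprojects the previously rendered image into the new viewpoint so that only the residual would need to be coded.

```python
import numpy as np

def reproject(prev_image, prev_depth, K, pose_prev, pose_new):
    """Warp a rendered image from the previous viewpoint into the new one.

    prev_image : (H, W) rendered intensities at the previous viewpoint
    prev_depth : (H, W) depth per pixel at the previous viewpoint
    K          : (3, 3) camera intrinsics
    pose_*     : (4, 4) camera-to-world matrices for the two viewpoints
    Returns a prediction of the new rendering; a codec would then encode
    only new_image - prediction.
    """
    H, W = prev_image.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])           # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                                     # camera-space rays
    pts_cam = rays * prev_depth.ravel()                               # back-project
    pts_world = pose_prev @ np.vstack([pts_cam, np.ones(H * W)])      # to world space
    pts_new = np.linalg.inv(pose_new) @ pts_world                     # to new camera
    proj = K @ pts_new[:3]
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)

    prediction = np.zeros_like(prev_image)
    ok = (proj[2] > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    prediction[v2[ok], u2[ok]] = prev_image.ravel()[ok]               # forward splat
    return prediction
```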
MTO-like reference mask modeling for advanced inverse lithography technology patterns
NASA Astrophysics Data System (ADS)
Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min
2017-07-01
Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.
RenderView: physics-based multi- and hyperspectral rendering using measured background panoramics
NASA Astrophysics Data System (ADS)
Talcott, Denise M.; Brown, Wade W.; Thomas, David J.
2003-09-01
As part of the survivability engineering process it is necessary to accurately model and visualize vehicle signatures in multi- or hyperspectral bands of interest. The signature at a given wavelength is a function of the surface optical properties, reflection of the background and, in the thermal region, the emission of thermal radiation. Currently, it is difficult to obtain and utilize background models that are of sufficient fidelity when compared with the vehicle models. In addition, the background models create an additional layer of uncertainty in estimating the vehicle's signature. Therefore, to meet exacting rendering requirements we have developed RenderView, which incorporates the full bidirectional reflectance distribution function (BRDF). Instead of using a modeled background, we have incorporated a measured calibrated background panoramic image to provide the high-fidelity background interaction. Uncertainty in the background signature is reduced to the error in the measurement, which is considerably smaller than the uncertainty inherent in a modeled background. RenderView utilizes a number of different descriptions of the BRDF, including the Sandford-Robertson. In addition, it provides complete conservation of energy with off-axis sampling. A description of RenderView will be presented along with a methodology developed for collecting background panoramics. Examples of the RenderView output and the background panoramics will be presented along with our approach to handling the solar irradiance problem.
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
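Monte Carlo volume renderers of this kind need a way to sample interaction distances in a heterogeneous medium; one common building block is Woodcock (delta) tracking. The sketch below is a generic Python illustration of that technique, not code taken from Exposure Render, and the toy medium and majorant are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def woodcock_track(sigma_t, pos, direction, sigma_max, max_dist):
    """Sample the distance to the next real interaction in a heterogeneous medium.

    sigma_t   : callable returning the extinction coefficient at a 3-D point
    sigma_max : majorant (upper bound) of the extinction coefficient
    Returns the interaction distance, or None if the ray leaves the medium.
    """
    d = 0.0
    while True:
        d -= np.log(1.0 - rng.random()) / sigma_max   # tentative free path
        if d >= max_dist:
            return None                               # escaped the volume
        x = pos + d * direction
        # Accept the tentative collision with probability sigma_t(x)/sigma_max;
        # otherwise it is a fictitious ("null") collision and tracking continues.
        if rng.random() < sigma_t(x) / sigma_max:
            return d

# Toy medium: a dense spherical blob inside a thin background.
sigma = lambda x: 5.0 if np.linalg.norm(x - 0.5) < 0.2 else 0.1
print(woodcock_track(sigma, np.zeros(3), np.array([1.0, 1.0, 1.0]) / np.sqrt(3), 5.0, 2.0))
```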
31 CFR 500.585 - Payments for services rendered by North Korea to United States aircraft authorized.
Code of Federal Regulations, 2010 CFR
2010-07-01
... North Korea to United States aircraft authorized. 500.585 Section 500.585 Money and Finance: Treasury... § 500.585 Payments for services rendered by North Korea to United States aircraft authorized. Payments to North Korea of charges for services rendered by the Government of North Korea in connection with...
26 CFR 20.6001-1 - Persons required to keep records and render statements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... declaration that it is made under penalties of perjury, of facts within his knowledge which the district... thereof. Failure to comply with such a request will render the executor liable to penalties (see section... thereof. Failure on the part of any person to comply with such request will render him liable to penalties...
26 CFR 20.6001-1 - Persons required to keep records and render statements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... declaration that it is made under penalties of perjury, of facts within his knowledge which the district... thereof. Failure to comply with such a request will render the executor liable to penalties (see section... thereof. Failure on the part of any person to comply with such request will render him liable to penalties...
26 CFR 20.6001-1 - Persons required to keep records and render statements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... declaration that it is made under penalties of perjury, of facts within his knowledge which the district... thereof. Failure to comply with such a request will render the executor liable to penalties (see section... thereof. Failure on the part of any person to comply with such request will render him liable to penalties...
26 CFR 20.6001-1 - Persons required to keep records and render statements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... declaration that it is made under penalties of perjury, of facts within his knowledge which the district... thereof. Failure to comply with such a request will render the executor liable to penalties (see section... thereof. Failure on the part of any person to comply with such request will render him liable to penalties...
26 CFR 20.6001-1 - Persons required to keep records and render statements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... declaration that it is made under penalties of perjury, of facts within his knowledge which the district... thereof. Failure to comply with such a request will render the executor liable to penalties (see section... thereof. Failure on the part of any person to comply with such request will render him liable to penalties...
Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayr, S., E-mail: suvi.bayr@jyu.fi; Ojanperä, M.; Kaparaju, P.
Highlights: • Rendering wastes' mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³ d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS(fed). On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VS(fed)). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult to treat industrial waste materials.
Integral image rendering procedure for aberration correction and size measurement.
Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2014-05-20
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
9 CFR 315.1 - Carcasses and parts passed for cooking; rendering into lard or tallow.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Carcasses and parts passed for cooking... PARTS PASSED FOR COOKING § 315.1 Carcasses and parts passed for cooking; rendering into lard or tallow. Carcasses and parts passed for cooking may be rendered into lard in accordance with § 319.702 of this...
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the unambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-05-01
We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture, the basic API, discuss its advantages over previous approaches, present example configurations and usage scenarios as well as scalability results.
NASA Astrophysics Data System (ADS)
Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro
2003-05-01
This paper describes a software-based fast volume rendering (VolR) method on a PC platform by using multimedia instructions, such as SIMD instructions, which are currently available in PCs' CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second. Semi-translucent display is also possible.
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4) a thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal, simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
Remote volume rendering pipeline for mHealth applications
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald
2014-03-01
We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
30. Photocopy of photograph of architectural rendering by office of ...
30. Photocopy of photograph of architectural rendering by office of Clarence H. Johnston, Sr., dated 1929; photograph in Clarence H. Johnston Papers, Northwest Architectural Archives, University of Minnesota; photographer unknown; location of rendering unknown; delineator unknown; THREE-QUARTER VIEW SHOWING WEST SIDE AND SOUTH FRONT; LOOKING NORTHEAST - Northwest Airways Hangar & Administration Building, 590 Bayfield Street, St. Paul Downtown Airport (Holman), Saint Paul, Ramsey County, MN
Archeological Testing Fort Hood: 1994-1995. Volume 2
1996-10-01
Type 3 sediment appears to be dry present, both as discrete lenses which are usually decomposition, which renders it a loose, grayish readily...degrading the quality of the shelters, rendering them increasingly attractive for resource. habitation. However, as noted previously (Abbott 1994; Abbott...651 characteristic renders them subject to additional federal laws (e.g., NAGPRA), it increases the urgency to implement management policies that will
Method of producing hydrogen, and rendering a contaminated biomass inert
Bingham, Dennis N [Idaho Falls, ID; Klingler, Kerry M [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID
2010-02-23
A method for rendering a contaminated biomass inert includes providing a first composition, providing a second composition, reacting the first and second compositions together to form an alkaline hydroxide, providing a contaminated biomass feedstock and reacting the alkaline hydroxide with the contaminated biomass feedstock to render the contaminated biomass feedstock inert and further producing hydrogen gas, and a byproduct that includes the first composition.
Space Object and Light Attribute Rendering (SOLAR) Projection System
2017-05-08
A state-of-the-art planetarium-style projection system called Space Object and Light Attribute Rendering (SOLAR) provides emulation of a variety of close proximity and long range imaging experiments at the University at Buffalo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Patchett, John M; Lo, Li - Ta
2011-01-24
This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 times on our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider using CPU-based rendering solutions when it is appropriate. For example, on remote supercomputers CPU-based rendering can offer a means of viewing data without having to offload the data or geometry onto a GPU-based visualization system. In terms of comparative performance of the CPU and GPU, we believe that further optimizations of the performance of both CPU- and GPU-based rendering are possible. The simulation community is currently confronting this reality as they work to port their simulations to different hardware architectures. What is interesting about CPU rendering of massive datasets is that for the past two decades GPU performance has significantly outperformed CPU-based systems. Based on our advancements, evaluations and explorations we believe that CPU-based rendering has returned as one viable option for the visualization of massive datasets.
NASA Astrophysics Data System (ADS)
Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.
2003-08-01
In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.
Styszko, Katarzyna; Kupiec, Krzysztof
2016-10-01
In this study the diffusion coefficients of isoproturon, diuron and cybutryn in acrylate and silicone resin-based renders were determined. The diffusion coefficients were determined by measuring concentrations of the biocides in the liquid phase after contact with the renders for specific time intervals. The mathematical solution of the transient diffusion equation for an infinite plate contacted on one side with a limited volume of water was used to calculate the diffusion coefficient. The diffusion coefficients through the acrylate render were 8.10·10⁻⁹ m² s⁻¹ for isoproturon, 1.96·10⁻⁹ m² s⁻¹ for diuron and 1.53·10⁻⁹ m² s⁻¹ for cybutryn. The results for the silicone render were lower by one order of magnitude. The compounds with a high diffusion coefficient for one polymer had likewise high values for the other polymer. Copyright © 2016 Elsevier Ltd. All rights reserved.
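The solution referred to above is not reproduced in the abstract. For orientation only, one classical form is Crank's limited-volume solution for a plane sheet; the symbols l, alpha and q_n below are introduced here, not taken from the paper, and the authors' exact boundary conditions may differ.

```latex
% Fractional release M_t/M_inf of biocide from a plane sheet (half-thickness l,
% diffusion coefficient D) into a limited, well-stirred volume of water, where
% \alpha is the effective ratio of solution volume to sheet volume and q_n are
% the positive, non-zero roots of \tan q_n = -\alpha q_n:
\frac{M_t}{M_\infty}
  = 1 - \sum_{n=1}^{\infty}
        \frac{2\alpha(1+\alpha)}{1+\alpha+\alpha^{2}q_n^{2}}
        \exp\!\left(-\frac{D\,q_n^{2}\,t}{l^{2}}\right)
```

In practice, the diffusion coefficient D would then be obtained by fitting the measured release curve to such a series, of which a few terms usually suffice at longer times.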
From the Rendering Equation to Stratified Light Transport Inversion
2010-12-09
iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al., 1991; Gortler et ...currently simply use sparse matrices to represent T, we are also interested in exploring connections with hierarchical and wavelet radiosity as in ...Seidel iterative methods used in radiosity. 2.4 Inverse Light Transport. Previous work on inverse rendering has considered inversion of the direct
A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories
1989-02-01
frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of...generality for rendering curved surfaces, volume data, objects dcscri id with Constructive Solid Geometry, for rendering scenes using the radiosity ...f.aces and for computing a spherical radiosity lighting model (see Section 7.6). Custom Memory Chips \\ 208 bits x 128 pixels - Renderer Board ix p o a
Superoxide and Nitric Oxide Mechanisms in Traumatic Brain Injury and Hemorrhagic Hypotension.
1999-12-01
Traumatic brain injury (TBI) renders the brain vulnerable to secondary ischemia and poor outcome...cerebral blood flow (CBF) and renders the brain vulnerable to secondary ischemia. There is clinical evidence that hypotension contributes to poor...without TBI. These data indicate that even moderate TBI renders the brain sensitive to ischemic injury during relative mild levels of hypotension that
Data Envelopment Analysis to Assess Productivity in the United States Air Force Medical Supply Chain
2011-06-01
represented as the numerator in the following formula, rendering a "utilization" percentage (Heizer & Render, 2008): Actual Output...when "it" is needed (Heizer & Render, 2008). While operating costs are certainly a deciding factor, the overarching importance of these decisions...designed to do at its maximum and what it actually does encouraged further review of this SLEF (Heizer & Render, 2008). The difference between
Defining Acquisition Related Terms
1993-09-01
34 National Contract Management Journal, 23: 25-32 (1990) 34. Heizer, Jay, Barry Render and Ralph M. Stair, Jr. Production and Operations Methods...551. The FAR definition(s) furnish a useful explanation of "service contract(s)" and give a comprehensive description of the types of services rendered...other facilities charge to be paid by the Government will be reduced or eliminated. (3) The utility service supplier refuses to render the desired
The Border Star 85 Survey: Toward an Archeology of Landscapes
1988-12-12
historic properties on that highly active military tire TRU method as implemented) were inadequate for installation. rendering determinations of National...Doña Ana phase settlement, such required only minimal reporting sufficient to render Na- that one could speculate as to how and why variation among...this dependent upon precipitation. In normal or high rainfall sort are complicated, however, by factors that render them years there would be many
1988-10-01
autoregulation, render the cerebral circulation dependent upon systemic circulation exposing brain to ischemic damage or edema in shock or stress...Thus, sharp reductions of arterial pressure, as might occur in hemorrhagic or traumatic shock, will render the cerebral circulation vulnerable to...autoregulated range, rendering local areas of the brain vulnerable to cerebral edema and breakdown of the blood brain barrier. -2- 8. Cerebral blood
IceT users' guide and reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2011-01-01
The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
Combined approach of shell and shear-warp rendering for efficient volume visualization
NASA Astrophysics Data System (ADS)
Falcao, Alexandre X.; Rocha, Leonardo M.; Udupa, Jayaram K.
2003-05-01
In medical imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that SWR is typically about 1.38 times faster than SR on average, but that it requires 2 to 8 times more memory space than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR to allow shear-warp factorization of the viewing matrix, in order to obtain speed gains for SR without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major differences in computational aspects, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1 GHz Pentium-III PC with 512 MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best compromise between speed and storage among these methods. We also show that SWSR improves rendition quality over SR and provides renditions similar to those produced by SWR.
LED Light Characteristics for Surgical Shadowless Lamps and Surgical Loupes
Kinugawa, Yoshitaka; Nobae, Yuichi; Suzuki, Toshihiro; Tanaka, Yoshiyuki; Toda, Ikuko; Tsubota, Kazuo
2015-01-01
Background: Blue light has more energy than longer wavelength light and can penetrate the eye to reach the retina. When surgeons use magnifying loupes under intensive surgical shadowless lamps for better view of the surgical field, the total luminance is about 200 times brighter than that of typical office lighting. In this study, the effects of 2 types of shadowless lamps were compared. Moreover, the effect of various eyeglasses, which support magnifying loupes, on both the light energy and color rendering was considered. Methods: The light intensity and color rendering were measured on 3 variables: light transmittance, light intensity, and color rendering. Results: Under shadowless lamps, the light energy increased with low-magnification loupes and decreased with high-magnification loupes. Filtering eyeglasses reduced the energy, especially in conditions where the low-magnification loupe was used. The best color-rendering index values were obtained with computer eyeglasses under conventional light-emitting diode shadowless lamps and with no glass and with lightly yellow-tinted lenses under less-blue light-emitting diode. Conclusions: Microsurgeons are exposed to strong lighting throughout their career, and proper color rendering must be considered for easier recognition. Light toxicity and loss of color rendering can be reduced with an appropriate combination of shadowless lamps and colored eyeglasses. PMID:26893987
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
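The key step described above is translating the haptic device position into reference-CT space using the displacement field from the breathing model. A minimal sketch of that mapping is shown below, assuming the field stores, at each current-frame voxel, the offset back to the reference frame; the direction convention, names, and spacing handling are illustrative rather than the authors'.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_reference_space(device_pos_mm, displacement, spacing_mm):
    """Map a haptic device position into reference-CT coordinates.

    device_pos_mm : (3,) proxy position (x, y, z, in mm) in the current, deformed frame
    displacement  : (3, Z, Y, X) field giving, at each current-frame voxel, the
                    offset (in mm) back to the reference CT frame (assumed convention)
    spacing_mm    : (3,) voxel spacing (x, y, z) of the displacement grid
    """
    voxel = (np.asarray(device_pos_mm) / np.asarray(spacing_mm))[::-1]  # (z, y, x) index
    offset = np.array([
        map_coordinates(displacement[c], voxel.reshape(3, 1), order=1)[0]  # trilinear lookup
        for c in range(3)
    ])
    return np.asarray(device_pos_mm) + offset
```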
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
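A minimal sketch of the view-dependent LOD selection such a framework relies on: traverse the octree and stop descending once a node's approximate projected size drops below a pixel threshold. The class layout, the screen-space error metric and all constants are illustrative assumptions.

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, size, level, max_level):
        self.center = np.asarray(center, dtype=float)
        self.size = float(size)              # edge length of this cube
        self.level = level
        self.children = []
        if level < max_level:
            h = size / 4.0
            for dz in (-h, h):
                for dy in (-h, h):
                    for dx in (-h, h):
                        self.children.append(
                            OctreeNode(self.center + (dz, dy, dx),
                                       size / 2.0, level + 1, max_level))

def select_lod(node, eye, pixel_threshold, fov_scale):
    """Collect nodes whose projected size is small enough to render directly;
    otherwise descend to the children (view-dependent continuous LOD)."""
    dist = max(np.linalg.norm(node.center - eye), 1e-6)
    projected = fov_scale * node.size / dist         # approximate screen-space size
    if projected <= pixel_threshold or not node.children:
        return [node]
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, eye, pixel_threshold, fov_scale))
    return selected

if __name__ == "__main__":
    root = OctreeNode(center=(0, 0, 0), size=1024.0, level=0, max_level=4)
    nodes = select_lod(root, eye=np.array([0.0, 0.0, 2000.0]),
                       pixel_threshold=64.0, fov_scale=512.0)
    print(len(nodes), "nodes selected, finest level:", max(n.level for n in nodes))
```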
Vermeirssen, Etiënne L M; Campiche, Sophie; Dietschweiler, Conrad; Werner, Inge; Burkhardt, Michael
2018-05-22
To protect house façades from fouling by microorganisms, biocides can be added to a render or paint before it is applied. During driving rain events, these biocides gradually leach out and have the potential to pollute soil or aquatic ecosystems. We studied the leaching behaviour of biocides and toxicity of leachates from renders with either free or encapsulated biocides. Both render types contained equal amounts of terbutryn, 2-octyl-3(2H)-isothiazolinone (OIT) and 4,5-dichloro-2-n-octyl-4-isothiazolino-3-one (DCOIT). Leachate samples were generated over nine immersion cycles according to a European standard and biocides were quantified. The first and ninth samples were tested using bioassays with algae, bacteria and water flea, the first sample with earthworms and springtails. Encapsulation reduced leaching of terbutryn, OIT and DCOIT four-, 17-, and 27-fold. For aquatic organisms, the toxicity of water from render containing encapsulated biocides was always lower than that of render with free biocides. Furthermore, toxicity decreased four- to five-fold over the nine immersion cycles. Inhibition of photosynthesis was the most sensitive endpoint, followed by algal growth rate, bacterial bioluminescence and water flea reproduction. Toxicity to algae was explained by terbutryn and toxicity to bacteria by OIT. None of the samples affected soil organisms. Results demonstrate that combining standardised leaching tests with standardised bioassays is a promising approach to evaluate the ecotoxicity of biocides that leach from façade renders. This article is protected by copyright. All rights reserved.
Seemann, M D; Claussen, C D
2001-06-01
A hybrid rendering method is described which combines a color-coded surface rendering method and a volume rendering method, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy, and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time-consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were applied throughout. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
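The DRR generation at the heart of 2D/3D registration can be illustrated with a very reduced CPU sketch: integrate attenuation along parallel rays through a CT-like volume. This is not the GPU wobbled-splatting or sub-sampled raycasting implementation described above; the parallel-projection geometry and the synthetic volume are simplifying assumptions.

```python
import numpy as np

def drr_parallel(volume, spacing, axis=0):
    """Generate a simple digitally rendered radiograph (DRR) by integrating
    attenuation values along parallel rays down one volume axis and
    converting the line integral to a transmitted fraction."""
    return np.exp(-volume.sum(axis=axis) * spacing)

if __name__ == "__main__":
    # Synthetic CT-like attenuation volume (per-voxel attenuation) with a dense sphere inside.
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    volume = 0.002 * np.ones((64, 64, 64))
    volume[(z - 32) ** 2 + (y - 32) ** 2 + (x - 20) ** 2 < 10 ** 2] = 0.02
    drr = drr_parallel(volume, spacing=1.0)
    print(drr.shape, float(drr.min()), float(drr.max()))
```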
Development of Anthropometric Analogous Headforms. Phase 1.
1994-10-31
shown in figure 5. This surface mesh can then be transformed into polygon faces that are able to be rendered by the AutoCAD rendering tools. Rendering of ... computer-generated surfaces. The material removal techniques require the programming of the tool path of the cutter and in some cases requires specialized ... tooling. Tool path programs are available to transfer the computer-generated surface into actual paths of the cutting tool. In cases where the
The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards
NASA Astrophysics Data System (ADS)
Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.
2015-09-01
The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
A kinesthetic washout filter for force-feedback rendering.
Danieau, Fabien; Lecuyer, Anatole; Guillotel, Philippe; Fleureau, Julien; Mollet, Nicolas; Christie, Marc
2015-01-01
Today haptic feedback can be designed and associated with audiovisual content (haptic-audiovisuals or HAV). Although there are multiple means to create individual haptic effects, the issue of how to properly adapt such effects to force-feedback devices has not been addressed and is mostly a manual endeavor. We propose a new approach for the haptic rendering of HAV, based on a washout filter for force-feedback devices. A body model and an inverse kinematics algorithm simulate the user's kinesthetic perception. Then, the haptic rendering is adapted in order to handle transitions between haptic effects and to optimize the amplitude of effects with respect to the device capabilities. Results of a user study show that this new haptic rendering can successfully improve the HAV experience.
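In motion simulation and haptics, a washout filter is commonly realised as a high-pass filter: transient effects are transmitted while sustained offsets decay, so the device returns toward its workspace centre. The sketch below shows that basic idea only; it omits the body model and inverse kinematics used in the paper, and the cutoff frequency and signal values are illustrative.

```python
import math

class WashoutFilter:
    """First-order high-pass 'washout' applied to a force command: transient
    haptic effects pass through, while sustained offsets decay so the device
    drifts back toward its neutral position."""
    def __init__(self, cutoff_hz, sample_hz):
        self.alpha = 1.0 / (1.0 + 2.0 * math.pi * cutoff_hz / sample_hz)
        self.prev_in = 0.0
        self.prev_out = 0.0

    def step(self, x):
        y = self.alpha * (self.prev_out + x - self.prev_in)
        self.prev_in, self.prev_out = x, y
        return y

if __name__ == "__main__":
    f = WashoutFilter(cutoff_hz=0.5, sample_hz=1000.0)
    # A step force of 2 N: initially transmitted, then washed out over a few seconds.
    outputs = [f.step(2.0) for _ in range(3000)]
    print(round(outputs[0], 3), round(outputs[-1], 3))
```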
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
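One of the simplest haptic rendering schemes for 3D objects is a penalty-based point-probe model: the reaction force is proportional to the penetration depth along the surface normal. The sketch below illustrates this basic idea for a sphere; it is not the specific algorithms reported above, and the stiffness value is illustrative.

```python
import numpy as np

def sphere_penalty_force(probe_pos, center, radius, stiffness):
    """Return the reaction force for a point probe against a rigid sphere
    using a simple penalty model: F = stiffness * penetration * surface_normal."""
    offset = np.asarray(probe_pos, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)                  # no contact (or degenerate case)
    normal = offset / dist                  # outward surface normal
    penetration = radius - dist
    return stiffness * penetration * normal

if __name__ == "__main__":
    # Probe 2 mm inside a 30 mm sphere, stiffness 0.5 N/mm.
    f = sphere_penalty_force([0.0, 0.0, 28.0], [0.0, 0.0, 0.0], 30.0, 0.5)
    print(f)   # ~[0, 0, 1.0] N, pushing the probe back out of the surface
```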
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
Gong, C; Jiang, X; Wang, J
2017-10-01
Workers' boots are considered one of the re-contamination routes of Salmonella for rendered meals in the rendering-processing environment. This study was conducted to evaluate the efficacy of a bacteriophage cocktail for reducing Salmonella on workers' boots and ultimately for preventing Salmonella re-contamination of rendered meals. Under laboratory conditions, biofilms of Salmonella Typhimurium avirulent strain 8243 formed on rubber templates or boots were treated with a bacteriophage cocktail of 6 strains (ca. 9 log PFU/mL) for 6 h at room temperature. Bacteriophage treatments combined with sodium hypochlorite (400 ppm) or 30-second brush scrubbing also were investigated for a synergistic effect on reducing Salmonella biofilms. Sodium magnesium (SM) buffer and sodium hypochlorite (400 ppm) were used as controls. To reduce indigenous Salmonella on workers' boots, a field study was conducted to apply a bacteriophage cocktail and other combined treatments 3 times within one wk in a rendering-processing environment. Prior to and after bacteriophage treatments, Salmonella populations on the soles of rubber boots were swabbed and enumerated on XLT-4, Miller-Mallinson or CHROMagar™ plates. Under laboratory conditions, Salmonella biofilms formed on rubber templates and boots were reduced by 95.1 to 99.999% and 91.5 to 99.2%, respectively. In a rendering-processing environment (ave. temperature: 19.3°C; ave. relative humidity: 48%), indigenous Salmonella populations on workers' boots were reduced by 84.2, 92.9, and 93.2% after being treated with bacteriophages alone, bacteriophages + sodium hypochlorite, and bacteriophages + scrubbing for one wk, respectively. Our results demonstrated the effectiveness of bacteriophage treatments in reducing Salmonella contamination on the boots in both laboratory and the rendering-processing environment. © 2017 Poultry Science Association Inc.
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon, and the technique represents a step toward computer-aided surgery, an area likely to progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
Three-dimensional rendering in medicine: some common misconceptions
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.
2001-05-01
As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of the issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following most commonly held conceptions which we believe (and shall demonstrate) are not correct: (1) SR equated to thresholding. (2) VR considered not requiring segmentation. (3) VR considered to achieve higher resolution than SR. (4) SR/VR considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentations that are far superior to thresholding. All VR techniques (except the straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally on the segmentation techniques as well as on the rendering method. There are fast software-based rendering methods that give a performance on PCs similar to or exceeding that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.
Headquarters Air Force Material Command Customer Relationship Study
2006-03-01
principles states, there are a “critical few and trivial many (Heizer and Render, 2004:453).” The idea is to draw attention to the critical few or large...low frequency. They represent about 55% of the customer base and only 5% of the annual frequency (Heizer and Render, 2004:453). The 66...customer,” AIIM E-Doc Magazine; July/August 2003, 17, 4. Heizer, Jay and Barry Render. “Principles of Operations Management,” 5th edition, New
Maintenance Enterprise Resource Planning: Information Value Among Supply Chain Elements
2014-04-30
is the Economic Order Quantity (EOQ) model, Production Order Quantity Cost, and Quantity Discount Model (Heizer & Render, 2007, pp. 489–490)...demand for another item. Following an aircraft, the items to assemble the aircraft are dependent demand (Heizer & Render, 2007, pp. 562–563). MERP...6), 947–950. doi:10.1287/opre.38.6.947 Heizer, J., & Render, B. (2007). Principles of Operations Management (7th ed., p. 684). Upper Saddle River
Draft Environmental Impact Report/Environmental Impact Statement, Bel Marin Keys Unit 5.
1982-09-01
generation render it a major indirect source of emissions. The 1979 Bay Area Air Quality Plan contains actions and policies designed to result in the...Base would render not only the immediate environs unacceptable in terms of housing but large portions of Novato as well. The Noise Element of the...of toxic or other deleterious effects on aquatic biota, wildlife or waterfowl, or which render any of these unfit for human consumption either at
Green Infrastructure Checklists and Renderings
Materials and checklists for Denver, CO, for reviewing development project plans for green infrastructure components, along with best practices for inspecting and maintaining installed green infrastructure. Also includes renderings of streetscape projects.
An Incremental Weighted Least Squares Approach to Surface Light Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
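The incremental aspect can be sketched by keeping weighted normal-equation accumulators per node and updating them as each radiance sample arrives, so the model can be previewed at any time. The degree-2 basis, the weighting and the names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

class IncrementalWLSNode:
    """Incrementally fitted weighted least squares model of exitant radiance
    as a low-degree polynomial of the (local) outgoing direction."""
    def __init__(self, n_basis=6):
        self.AtA = np.zeros((n_basis, n_basis))
        self.Atb = np.zeros(n_basis)

    @staticmethod
    def basis(direction):
        x, y, z = direction
        return np.array([1.0, x, y, x * x, x * y, y * y])   # degree-2 in (x, y)

    def add_sample(self, direction, radiance, weight=1.0):
        a = self.basis(direction)
        self.AtA += weight * np.outer(a, a)      # accumulate normal equations
        self.Atb += weight * radiance * a

    def evaluate(self, direction):
        # Small ridge term keeps the solve stable before many samples arrive.
        coeffs = np.linalg.solve(self.AtA + 1e-6 * np.eye(len(self.Atb)), self.Atb)
        return float(self.basis(direction) @ coeffs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    node = IncrementalWLSNode()
    for _ in range(200):                          # samples arrive one at a time
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        d[2] = abs(d[2])                          # keep directions in the upper hemisphere
        node.add_sample(d, radiance=0.2 + 0.8 * d[2])
    print(round(node.evaluate(np.array([0.0, 0.0, 1.0])), 3))
```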
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is a flexible and effective approach for inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time-consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of illumination-model variables to different structures manually and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering on the basis of topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize illuminance perception differences of structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.
Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.
Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker
2018-06-01
The objective was to create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined, and example environments rendered with the proposed software are described and analysed. With the proposed software, a tool for the simulation of VAEs is now available.
Parametric model of the scala tympani for haptic-rendered cochlear implantation.
Todd, Catherine; Naghdy, Fazel
2005-01-01
A parametric model of the human scala tympani has been designed for use in a haptic-rendered computer simulation of cochlear implant surgery. It will be the first surgical simulator of this kind. A geometric model of the scala tympani has been derived from measured data for this purpose. The model is compared with two existing descriptions of the cochlear spiral. A first approximation of the basilar membrane is also produced. The structures are imported into a force-rendering software application for system development.
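For illustration only, a cochlea-like centreline can be generated from a simple parametric spiral whose radius decays with angle while its height rises. The constants below are placeholders and do not reproduce the measured data or the two published spiral descriptions referenced above.

```python
import numpy as np

def scala_tympani_centerline(n_points=200, turns=2.5, r0=5.0, decay=0.12, pitch=0.9):
    """Illustrative parametric centerline of a cochlea-like spiral: the radius
    shrinks exponentially with angle while the height rises linearly."""
    theta = np.linspace(0.0, 2.0 * np.pi * turns, n_points)
    r = r0 * np.exp(-decay * theta)           # mm, decaying spiral radius
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    z = pitch * theta / (2.0 * np.pi)         # mm risen per turn
    return np.column_stack([x, y, z])

if __name__ == "__main__":
    pts = scala_tympani_centerline()
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    print(pts.shape, "approx. centerline length (mm):", round(length, 1))
```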
Direct volumetric rendering based on point primitives in OpenGL.
da Rosa, André Luiz Miranda; de Almeida Souza, Ilana; Yuuji Hira, Adilson; Zuffo, Marcelo Knörich
2006-01-01
The aim of this project is to present a software-based rendering algorithm for acquired volumetric data. The algorithm was implemented in the Java language using the LWJGL graphical library, allowing volume rendering in software and thus avoiding the need for dedicated graphics boards for 3D reconstruction. The algorithm creates a model in OpenGL using point primitives, where each voxel becomes a point whose color is taken from the corresponding pixel position in the source images.
The Cerrito Site (AR-4): A Piedra Lumbre Phase Settlement at Abiquiu Reservoir,
1979-11-01
entirely or partially with well-polished slip and which were not painted. Differential firing and smudging techniques render the vessels red, black, gray ... collapsed. The single carbon-14 date is rendered suspicious because of the Suess effect (Samon et al. 1974). However, the influence of this effect in the ... dates of A.D. 1107 and A.D. 792. The 2 oldest readings from the lithic areas (specimens 5621 and 5632) are rendered suspicious when computed with the
1996-03-01
VII-7 VIII-1 Computer generated rendering of flood detention dam ... VIII-3 VIII-2 American River Watershed Project Schedule ... shows a plan view of the dam and plate 19 shows the dam in section and profile. Figure VIII-1 is a computer-generated rendering of the dam. Table VIII-1...
2015-03-26
universal definition” (Evans & Lindsay, 1996). Heizer and Render (2010) argue that several definitions of this term are user-based, meaning that quality...for example, really good ice cream has high butterfat levels.” (Heizer & Render, 2010). Garvin, in his Competing in Eight Dimensions of Quality...Montgomery, 2005). As for definition purposes, the concept adopted by this research was provided by Heizer and Render (2010), for whom Statistical Process
Application of volume rendering technique (VRT) for musculoskeletal imaging.
Darecki, Rafał
2002-10-30
A review of the applications of the volume rendering technique in musculoskeletal three-dimensional imaging from CT data. General features, potential, and indications for applying the method are presented.
40 CFR 164.91 - Accelerated decision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... decision. (a) General. The Administrative Law Judge, in his discretion, may at any time render an... matter of law; or (8) Such other and further reasons as are just. (b) Effect. A decision rendered under...
Experiencing "Macbeth": From Text Rendering to Multicultural Performance.
ERIC Educational Resources Information Center
Reisin, Gail
1993-01-01
Shows how one teacher used innovative methods in teaching William Shakespeare's "Macbeth." Outlines student assignments including text renderings, rewriting a scene from the play, and creating a multicultural scrapbook for the play. (HB)
1. PHOTOCOPY OF RENDERING OF PSFS BUILDING BY D.E. SUTTON. ...
1. PHOTOCOPY OF RENDERING OF PSFS BUILDING BY D.E. SUTTON. Date possibly 1929 or 1930, when construction started. - Philadelphia Saving Fund Society, Twelfth & Market Streets, Philadelphia, Philadelphia County, PA
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
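The compositing step can be sketched with the standard "over" operator applied to premultiplied RGBA subimages in back-to-front order (the order that, as noted above, can be determined a priori). The snippet below is a generic illustration, not the CM-5 implementation.

```python
import numpy as np

def composite_back_to_front(subimages):
    """Composite RGBA subimages (premultiplied alpha) in back-to-front order
    using the 'over' operator: result = front + (1 - alpha_front) * behind."""
    result = np.zeros_like(subimages[0])
    for img in subimages:                       # first element is the farthest
        alpha = img[..., 3:4]
        result = img + (1.0 - alpha) * result
    return result

if __name__ == "__main__":
    h, w = 4, 4
    far = np.zeros((h, w, 4)); far[..., 0] = 0.5; far[..., 3] = 0.5    # semi-opaque red
    near = np.zeros((h, w, 4)); near[..., 2] = 0.3; near[..., 3] = 0.3  # faint blue in front
    out = composite_back_to_front([far, near])
    print(out[0, 0])    # premultiplied RGBA of the composited pixel
```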
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high-performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Standardized rendering from IR surveillance motion imagery
NASA Astrophysics Data System (ADS)
Prokoski, F. J.
2014-06-01
Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations similar to police artist sketches for faces in surveillance imagery collected from proximate locations and times to a crime under investigation. Near-real-time generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as to not divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance, and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.
Herrera, Marcos
2010-08-01
The Freudian expression Vorstellungsrepräsentanz (Freud, 1915b, 1915c), which is rendered in the Standard Edition as ideational representative, is commonly translated in Spanish as representante-representativo and in French as représentant-représentation, among other renderings. An interdisciplinary conceptual inquiry, which applies linguistic semantics to the evaluation of the available Spanish and French renderings, concludes that this compound expression should be translated in these languages as representante ideativo and représentant idéatif, respectively, renderings which happen to correspond to Strachey's translation into English in the SE. In contrast to most Spanish and French translations, this proposal conforms to the semantic principle of compositionality. On the one hand, it provides a suitable translation of the two parts of the compound. Thus it renders Vorstellung as idea, with the classical meaning of image or mental representation, which can be traced back to Hume's empiricist philosophy, and it renders Repräsentanz as representative, with the meaning of delegate. On the other hand, its linguistic form preserves the attributive meaning relationship which exists between both concepts in the original German expression. Against the background of these semantic considerations, a theoretical question concerning Freudian metapsychology is discussed: the drive has a psychic representative, but is there a (mental) representation of the drive? Copyright © 2010 Institute of Psychoanalysis.
Architecture for high performance stereoscopic game rendering on Android
NASA Astrophysics Data System (ADS)
Flack, Julien; Sanderson, Hugh; Shetty, Sampath
2014-03-01
Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3 there is a lack of GPU independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.
An experiment on the color rendering of different light sources
NASA Astrophysics Data System (ADS)
Fumagalli, Simonetta; Bonanomi, Cristian; Rizzi, Alessandro
2013-02-01
The color rendering index (CRI) of a light source attempts to measure how much the color appearance of objects is preserved when they are illuminated by the given light source. This problem is of great importance for various industrial and scientific fields, such as lighting architecture, design, ergonomics, etc. Usually a light source is specified through the Correlated Color Temperature or CCT. However, two (or more) light sources with the same CCT but different spectral power distributions can exist. Therefore color samples viewed under two light sources with equal CCTs can appear different. Hence the need for a method to assess the quality of a given illuminant in relation to color. Recently CRI has had a renewed interest because of the new LED-based lighting systems. They usually have a rather low color rendering index, but preserve color appearance well and have a pleasant visual appearance (visual appeal). Various attempts to develop a new color rendering index have been made so far, but research toward a better one is still ongoing. This article describes an experiment performed by human observers concerning the appearance preservation of color under some light sources, comparing it with a range of available color rendering indices.
Color rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-02-01
Human perception of material colors depends heavily on the nature of the light sources used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. Based on state-of-the-art colorimetric methods we present a modern approach for calculating color rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: firstly, we use the CIELAB color space, secondly, we apply a Bradford transformation for chromatic adaptation, and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda [1] have shown for the cube model that interreflections can influence the CRI of an illuminant. We analyze how color rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB based renderer for reasons of accuracy of CRI calculations.
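For orientation, a CRI-style score can be sketched as a scaled mean colour difference of test samples under a test versus a reference illuminant. The snippet below is a toy approximation: the Gaussian stand-ins for the colour matching functions, the synthetic reflectances, the plain CIELAB difference and the 4.6 scaling are simplifications of the standard CIE procedure and of the updated method described above.

```python
import numpy as np

WAVELENGTHS = np.arange(400, 701, 10)   # nm

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((WAVELENGTHS - mu) / sigma) ** 2)

# Crude Gaussian stand-ins for the CIE colour matching functions (columns: x, y, z).
CMF = np.stack([gaussian(600, 40) + 0.3 * gaussian(450, 25),
                gaussian(550, 45),
                1.8 * gaussian(450, 25)], axis=1)

def to_lab(reflectance, illuminant):
    """Spectrum -> tristimulus -> (simplified) CIELAB, normalised to the illuminant white."""
    xyz = (CMF * (illuminant * reflectance)[:, None]).sum(axis=0)
    white = (CMF * illuminant[:, None]).sum(axis=0)
    f = np.cbrt(np.clip(xyz / white, 1e-6, None))
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def cri_like_score(test_illum, ref_illum, samples):
    """Simplified colour-rendering score: 100 minus a scaled mean CIELAB
    difference of the samples under the test vs. reference illuminant."""
    des = [np.linalg.norm(to_lab(s, test_illum) - to_lab(s, ref_illum)) for s in samples]
    return 100.0 - 4.6 * float(np.mean(des))

if __name__ == "__main__":
    reference = np.ones_like(WAVELENGTHS, dtype=float)       # flat reference spectrum
    narrow_led = gaussian(450, 15) + gaussian(560, 30)        # spiky LED-like source
    samples = [0.6 * gaussian(mu, 35) + 0.2 for mu in (430, 490, 550, 610, 670)]
    print(round(cri_like_score(narrow_led, reference, samples), 1))
```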
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
Salisbury, C M; Gillespie, R B; Tan, H Z; Barbagli, F; Salisbury, J K
2011-01-01
In this paper, we extend the concept of the contrast sensitivity function - used to evaluate video projectors - to the evaluation of haptic devices. We propose using human observers to determine if vibrations rendered using a given haptic device are accompanied by artifacts detectable to humans. This determination produces a performance measure that carries particular relevance to applications involving texture rendering. For cases in which a device produces detectable artifacts, we have developed a protocol that localizes deficiencies in device design and/or hardware implementation. In this paper, we present results from human vibration detection experiments carried out using three commercial haptic devices and one high performance voice coil motor. We found that all three commercial devices produced perceptible artifacts when rendering vibrations near human detection thresholds. Our protocol allowed us to pinpoint the deficiencies, however, and we were able to show that minor modifications to the haptic hardware were sufficient to make these devices well suited for rendering vibrations, and by extension, the vibratory components of textures. We generalize our findings to provide quantitative design guidelines that ensure the ability of haptic devices to proficiently render the vibratory components of textures.
Enhanced visualization of MR angiogram with modified MIP and 3D image fusion
NASA Astrophysics Data System (ADS)
Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.
1997-05-01
We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram that is overlapped with a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other structures of the brain. The two images are fused after adjustment of the contrast and brightness levels of each image in such a way that both the vasculature and brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resultant image visualizes both the brain structure and vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning for neurosurgery.
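The fusion step can be sketched in two of the ways mentioned above: taking the per-pixel maximum of the two adjusted images, or mapping each image through its own colour table before blending. The arrays and LUT choices below are synthetic placeholders, not the authors' pipeline.

```python
import numpy as np

def fuse_max(mip_rgb, rendered_rgb):
    """Fuse a MIP angiogram with a volume-rendered context image by taking the
    per-pixel, per-channel maximum (both assumed already contrast-adjusted)."""
    return np.maximum(mip_rgb, rendered_rgb)

def fuse_color_tables(mip, rendered):
    """Alternative fusion: map the MIP through a red LUT and the rendered image
    through a grey LUT, then add with clipping."""
    mip_rgb = np.stack([mip, 0.2 * mip, 0.2 * mip], axis=-1)
    ctx_rgb = np.stack([rendered] * 3, axis=-1)
    return np.clip(mip_rgb + 0.7 * ctx_rgb, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mip = rng.random((8, 8))          # stand-in MIP of the vasculature
    ctx = rng.random((8, 8))          # stand-in volume rendering of the brain
    print(fuse_max(np.stack([mip] * 3, -1), np.stack([ctx] * 3, -1)).shape)
    print(fuse_color_tables(mip, ctx).shape)
```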
NASA Technical Reports Server (NTRS)
Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.
2015-01-01
The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.
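A heavily simplified discrete-time sketch of the cascaded structure: an inner loop tracks a desired torque (here a constant standing in for the outer impedance law) while cancelling an estimated disturbance obtained by low-pass filtering the difference between the measured torque and a nominal model's prediction, which is the DOB idea. The first-order plant, the gains and the Q-filter bandwidth are illustrative assumptions, not Valkyrie's actual controller.

```python
import numpy as np

def simulate_dob_torque_loop(steps=4000, dt=0.001):
    """Toy simulation of a disturbance-observer (DOB) based inner torque loop:
    the plant is a first-order lag from commanded to delivered torque plus a
    constant unmodelled disturbance; the DOB low-pass filters the difference
    between measured torque and the nominal model's prediction and cancels it
    from the command. All constants are illustrative."""
    tau_plant = 0.02            # plant time constant [s]
    disturbance = 1.5           # unmodelled torque [Nm]
    q_cut = 2 * np.pi * 20.0    # DOB Q-filter cutoff [rad/s]
    tau_des = 5.0               # desired torque from the outer impedance law

    tau_meas, tau_nom, d_hat = 0.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        cmd = tau_des - d_hat                                          # disturbance cancellation
        tau_meas += dt / tau_plant * (cmd + disturbance - tau_meas)    # "real" plant
        tau_nom += dt / tau_plant * (cmd - tau_nom)                    # nominal model
        d_hat += dt * q_cut * ((tau_meas - tau_nom) - d_hat)           # Q-filtered estimate
        history.append(tau_meas)
    return np.array(history)

if __name__ == "__main__":
    tau = simulate_dob_torque_loop()
    print("steady-state torque ~", round(tau[-1], 3), "Nm (desired 5.0)")
```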
Enhanced Graphics for Extended Scale Range
NASA Technical Reports Server (NTRS)
Hanson, Andrew J.; Chi-Wing Fu, Philip
2012-01-01
Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using the OpenGL, GLUT, and GLUI standard libraries, and NVIDIA GeForce vertex shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
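The repartitioning idea can be illustrated by grouping objects into logarithmically spaced distance bands and rendering the bands far to near, each with its own near/far planes, so every band retains adequate depth precision. The sketch below is a conceptual illustration, not the program's actual partitioning logic.

```python
import math

def partition_by_scale(objects, bands_per_decade=1):
    """Group (name, distance) pairs into logarithmically spaced distance bands so
    each band can be rendered with its own near/far planes, far-to-near, keeping
    depth-buffer precision adequate at every scale."""
    bands = {}
    for name, distance in objects:
        band = math.floor(math.log10(distance) * bands_per_decade)
        bands.setdefault(band, []).append(name)
    # Render order: farthest band first, clearing only the depth buffer between bands.
    for band in sorted(bands, reverse=True):
        near = 10.0 ** (band / bands_per_decade)
        far = 10.0 ** ((band + 1) / bands_per_decade)
        print(f"band {band}: near={near:.1e} far={far:.1e} -> {bands[band]}")

if __name__ == "__main__":
    partition_by_scale([("person", 3.0), ("tree", 4.0e1), ("moon", 3.8e8),
                        ("star", 4.0e16), ("galaxy", 2.4e22)])
```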
11. Photographic copy of architect's rendering, from the pencil tracings ...
11. Photographic copy of architect's rendering, from the pencil tracings in the possession of Potter, Lawson and Pawlowsky, WISCONSIN AVENUE ELEVATION - Manchester's Department Store Building, 2 East Mifflin Street, Madison, Dane County, WI
10. Photographic copy of architect's rendering, from the pencil tracings ...
10. Photographic copy of architect's rendering, from the pencil tracings in the possession of Potter, Lawson and Pawlowsky, MIFFLIN STREET ELEVATION - Manchester's Department Store Building, 2 East Mifflin Street, Madison, Dane County, WI
12. Photographic copy of architect's rendering, from the pencil tracings ...
12. Photographic copy of architect's rendering, from the pencil tracings in the possession of Potter, Lawson and Pawlowsky, TYPICAL FLOOR PLAN - Manchester's Department Store Building, 2 East Mifflin Street, Madison, Dane County, WI
21 CFR 173.275 - Hydrogenated sperm oil.
Code of Federal Regulations, 2014 CFR
2014-04-01
... from rendering the fatty tissue of the sperm whale or is prepared by synthesis of fatty acids and fatty alcohols derived from the sperm whale. The sperm oil obtained by rendering is refined. The oil is...
A Graph Based Interface for Representing Volume Visualization Results
NASA Technical Reports Server (NTRS)
Patten, James M.; Ma, Kwan-Liu
1998-01-01
This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.
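A minimal sketch of such an image graph: nodes are keyed by rendering-parameter sets, and each edge records which parameter changed between successive renders. The structure and field names below are illustrative, not the paper's implementation.

```python
class ImageGraph:
    """Graph of rendered images: each node is a set of rendering parameters, and
    an edge records which parameter(s) changed between two successive renders."""
    def __init__(self):
        self.nodes = {}          # frozenset of parameter items -> image id
        self.edges = []          # (previous image id, new image id, changed parameters)
        self.last = None

    def add_render(self, image_id, **params):
        key = frozenset(params.items())
        self.nodes[key] = image_id
        if self.last is not None:
            changed = dict(key - self.last)          # parameters that differ from the last render
            self.edges.append((self.nodes[self.last], image_id, changed))
        self.last = key

if __name__ == "__main__":
    g = ImageGraph()
    g.add_render("img0", opacity=0.2, colormap="bone", iso=300)
    g.add_render("img1", opacity=0.5, colormap="bone", iso=300)
    g.add_render("img2", opacity=0.5, colormap="hot", iso=300)
    for a, b, changed in g.edges:
        print(a, "->", b, "changed:", changed)
```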
Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.
Friston, Sebastian; Steed, Anthony; Tilbury, Simon; Gaydadjiev, Georgi
2016-04-01
Latency - the delay between a user's action and the response to this action - is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space - but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high and low speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
Scalable Multi-Platform Distribution of Spatial 3d Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity, (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on the server side, and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
49 CFR Schedule F to Subpart B of... - Affiliate Revenue Data for Services Rendered
Code of Federal Regulations, 2011 CFR
2011-10-01
...—Affiliate Revenue Data for Services Rendered [Dollars in thousands] () Greyhound Lines, Inc. () Trailways... shall be prepared for each of the following: 1. Greyhound Lines, Inc. 2. Trailways Combined (study...
49 CFR Schedule F to Subpart B of... - Affiliate Revenue Data for Services Rendered
Code of Federal Regulations, 2012 CFR
2012-10-01
...—Affiliate Revenue Data for Services Rendered [Dollars in thousands] () Greyhound Lines, Inc. () Trailways... shall be prepared for each of the following: 1. Greyhound Lines, Inc. 2. Trailways Combined (study...
49 CFR Schedule F to Subpart B of... - Affiliate Revenue Data for Services Rendered
Code of Federal Regulations, 2013 CFR
2013-10-01
...—Affiliate Revenue Data for Services Rendered [Dollars in thousands] () Greyhound Lines, Inc. () Trailways... shall be prepared for each of the following: 1. Greyhound Lines, Inc. 2. Trailways Combined (study...
49 CFR Schedule F to Subpart B of... - Affiliate Revenue Data for Services Rendered
Code of Federal Regulations, 2014 CFR
2014-10-01
...—Affiliate Revenue Data for Services Rendered [Dollars in thousands] () Greyhound Lines, Inc. () Trailways... shall be prepared for each of the following: 1. Greyhound Lines, Inc. 2. Trailways Combined (study...
9. Photographic copy of architect's rendering, from the pencil tracings ...
9. Photographic copy of architect's rendering, from the pencil tracings in the possession of Potter, Lawson and Pawlowsky, ELEVATIONS AND TWO SECTIONS - Manchester's Department Store Building, 2 East Mifflin Street, Madison, Dane County, WI
Bradetich, Ryan; Dearien, Jason A; Grussling, Barry Jakob; Remaley, Gavin
2013-11-05
The present disclosure provides systems and methods for remote device management. According to various embodiments, a local intelligent electronic device (IED) may be in communication with a remote IED via a limited bandwidth communication link, such as a serial link. The limited bandwidth communication link may not support traditional remote management interfaces. According to one embodiment, a local IED may present an operator with a management interface for a remote IED by rendering locally stored templates. The local IED may render the locally stored templates using sparse data obtained from the remote IED. According to various embodiments, the management interface may be a web client interface and/or an HTML interface. The bandwidth required to present a remote management interface may be significantly reduced by rendering locally stored templates rather than requesting an entire management interface from the remote IED. According to various embodiments, an IED may comprise an encryption transceiver.
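The bandwidth saving comes from keeping the page template local and sending only raw values over the serial link. The sketch below illustrates that split with Python's string.Template; the field names and HTML are hypothetical, not the actual IED firmware or interface.

```python
from string import Template

# Locally stored HTML template: only the sparse values returned by
# fetch_sparse_data() ever cross the limited-bandwidth serial link.
LOCAL_TEMPLATE = Template(
    "<html><body>"
    "<h1>Remote IED $device_id</h1>"
    "<p>Breaker status: $breaker_status</p>"
    "<p>Line current: $line_current_a A</p>"
    "</body></html>"
)

def fetch_sparse_data():
    """Stand-in for a query over the serial link; returns only raw values."""
    return {"device_id": "FEEDER-7", "breaker_status": "CLOSED", "line_current_a": 312}

def render_management_page():
    """Render the management interface locally from the stored template."""
    return LOCAL_TEMPLATE.substitute(fetch_sparse_data())

if __name__ == "__main__":
    print(render_management_page())
    print("bytes over the link:", len(str(fetch_sparse_data()).encode()))
```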
Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared
NASA Technical Reports Server (NTRS)
Smith, J. A.; Ballard, J. R., Jr.
1999-01-01
We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects: first, in computing the distribution of shaded and sunlit facets in the scene and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are encouraging: the rendered images exhibit realistic directional behavior as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.
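Once facet temperatures are known, their band radiance follows from Planck's law integrated over the sensor band, which is the physical quantity a thermal renderer would shade with. The sketch below uses an 8-12 micrometre band, a fixed emissivity and illustrative facet temperatures; it is not the authors' full biophysical model.

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
C = 2.998e8      # speed of light [m/s]
KB = 1.381e-23   # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def band_radiance(temp_k, band=(8e-6, 12e-6), emissivity=0.96, n=200):
    """Radiance integrated over a thermal IR band for a facet at temp_k (simple Riemann sum)."""
    wl = np.linspace(band[0], band[1], n)
    return emissivity * float(np.sum(planck_radiance(wl, temp_k)) * (wl[1] - wl[0]))

if __name__ == "__main__":
    # Illustrative facet temperatures [deg C] for a sunlit leaf, a shaded leaf and soil.
    facets = {"sunlit leaf": 32.0, "shaded leaf": 27.5, "soil": 38.0}
    for name, t_c in facets.items():
        print(f"{name:12s} {band_radiance(t_c + 273.15):.1f} W m^-2 sr^-1")
```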
NASA Technical Reports Server (NTRS)
Apodaca, Tony; Porter, Tom
1989-01-01
The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple-looking images, while photorealistic image synthesis software runs slowly on large, expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism, and there is every reason to believe that high-quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, with which 3-D modeling systems can describe a scene to 3-D rendering systems so that an accurate rendition of that scene can be computed. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.
A concept of volume rendering guided search process to analyze medical data set.
Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro
2008-03-01
This paper first presents a parallel coordinates based parameter control panel (PCP). The PCP is used to control parameters of focal region-based volume rendering (FRVR) during data analysis through a parallel coordinates style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases, and the FRVR parameters are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can guide a search process in the rendition space, helping users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.
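As a rough illustration of the parallel-coordinates style of the PCP, the sketch below draws one polyline per rendition across a few normalized parameter axes. The parameter names, the values, and the use of matplotlib are assumptions for demonstration, not the authors' implementation:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical FRVR parameter sets (rows) over four axes (columns);
    # each polyline links one rendition's parameter values, as in a PCP.
    params = np.array([
        # focal_radius, opacity_scale, gradient_weight, quality_score
        [0.2, 0.8, 0.1, 0.55],
        [0.5, 0.4, 0.6, 0.80],
        [0.9, 0.2, 0.9, 0.35],
    ])
    axes_names = ["focal radius", "opacity", "gradient weight", "score"]

    # Normalize each column to [0, 1] so all axes share a vertical scale.
    lo, hi = params.min(axis=0), params.max(axis=0)
    norm = (params - lo) / np.where(hi - lo == 0, 1, hi - lo)

    x = np.arange(len(axes_names))
    for row in norm:
        plt.plot(x, row, marker="o")           # one polyline per rendition
    for xi in x:
        plt.axvline(xi, color="gray", lw=0.5)  # the parallel axes
    plt.xticks(x, axes_names)
    plt.title("Parallel-coordinates view of rendering parameters (sketch)")
    plt.show()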
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos
2016-04-01
This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.
NASA Astrophysics Data System (ADS)
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
2006-03-01
Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization because of the high computational demand and expense. This study aims to provide physicians with multi-dimensional visualization tools for navigating and manipulating the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. It works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. Because the PET and CT are rendered independently, manipulations can be applied to the individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resultant manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, and volume slicing, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid interpretation and diagnosis.
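A minimal sketch of a spike-shaped, non-linear opacity transfer function in the spirit of the "alpha-spike" described above; the Gaussian form, the center/width parameters, and the normalized intensity range are illustrative assumptions rather than the authors' exact formulation:

    import numpy as np

    def alpha_spike(intensity, center, width, peak_alpha=1.0, base_alpha=0.0):
        """Assign high opacity only near a chosen intensity, suppressing the rest.

        intensity : array of normalized voxel intensities in [0, 1]
        center    : intensity at which the opacity spike is placed
        width     : spike width; smaller values isolate structures more aggressively
        """
        spike = np.exp(-0.5 * ((intensity - center) / width) ** 2)
        return base_alpha + (peak_alpha - base_alpha) * spike

    voxels = np.linspace(0.0, 1.0, 11)
    print(np.round(alpha_spike(voxels, center=0.7, width=0.05), 3))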
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it extended the SE-Workbench suite from OKTAL-SE with functionality to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, and it takes advantage of recent advances in GPU computing techniques. Recent developments concern mainly the realistic, physical rendering of reflections; the rendering of both radiative and thermal shadows; the use of procedural techniques for managing and rendering very large terrains; the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures; and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial, and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests and is based on particle-system mechanisms to model the different components of a flare. Validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
38. Photocopy of ink and wash rendering by N. G. ...
38. Photocopy of ink and wash rendering by N. G. Starkwether in collection of Mr. & Mrs. Richard T. Pratt, Camden REAR ELEVATION OF W. C. PRATT'S COUNTRY SEAT - Camden, Rappahannock River, Port Royal, Caroline County, VA
CubeSat Artist Rendering and NASA M-Cubed/COVE
2012-02-14
The image on the left is an artist rendering of Montana State University Explorer 1 CubeSat; at right is a CubeSat created by the University of Michigan designated the Michigan Multipurpose Mini-satellite, or M-Cubed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS MEAT... derived for a renderer which does not cure cattle hide. If a renderer does cure cattle hide, the following...
2011-03-24
four phases represent an easy way to implement continuous improvement activities. Figure 3. PDCA Cycle (Heizer and Render, 2006)... "Environment: a guide to sustainable product development", McGraw-Hill, 2nd Edition, 2009. Heizer, Jay and Barry Render (2006), "Principles of
2002-01-01
wrappers to other widely used languages, namely TCL/TK, Java, and Python. VTK is very powerful and covers polygonal models and image processing classes and... follows: Large Data Visualization and Rendering; Information Visualization for Beginners; Rendering and Visualization in Parallel Environments
37. Photocopy of ink and wash rendering by N. G. ...
37. Photocopy of ink and wash rendering by N. G. Starkwether in collection of Mr. & Mrs. Richard T. Pratt, Camden SIDE ELEVATION OF ITALIAN VILLA FOR W. C. PRATT, ESQr - Camden, Rappahannock River, Port Royal, Caroline County, VA
Initial Spare Parts of the A400M Aircraft
2012-03-08
inventory. Therefore, a balance has to be sought between inventory cost and customer service (Heizer & Render, 2010:500-501). Nevertheless, spare part... Heizer, Jay H. and Barry Render. Principles of Operations Management. Boston: Pearson Education, 2011. Heuninckx, Baudouin. "Availability
2011-07-01
Figure 17: CAESAR Data. The leftmost image is a color polygon rendering of a subject using 316,691 polygon faces and 161,951 points. The small white dots on the surface of the subject are landmark points.
Characterizing Salmonella Contamination in Two Rendering Processing Plants.
Gong, Chao; Jiang, Xiuping
2017-02-01
A microbiological investigation on Salmonella contamination was conducted in two U.S. rendering plants to investigate the potential cross-contamination of Salmonella in the rendering processing environment. Sampling locations were predetermined at the areas where Salmonella contamination may potentially occur, including raw materials receiving, crax (rendered materials before grinding process) grinding, and finished meal loading-out areas. Salmonella was either enumerated directly on xylose lysine Tergitol 4 agar plates or enriched in Rappaport-Vassiliadis and tetrathionate broths. The presumptive Salmonella isolates were confirmed using CHROMagar plating and latex agglutination testing and then characterized using pulsed-field gel electrophoresis, serotyping, and biofilm-forming determination. Among 108 samples analyzed, 79 (73%) samples were Salmonella positive after enrichment. Selected Salmonella isolates (n = 65) were assigned to 31 unique pulsed-field gel electrophoresis patterns, with 16 Salmonella serotypes, including Typhimurium and Mbandaka, identified as predominant serotypes and 10 Salmonella strains determined as strong biofilm formers. Our results indicated that the raw materials receiving area was the primary source of Salmonella and that the surfaces surrounding crax grinding and finished meal loading-out areas harbor Salmonella in biofilms that may recontaminate the finished meals. The same Salmonella serotypes found in both raw materials receiving and the finished meal loading-out areas suggested a potential of cross-contamination between different areas in the rendering processing environment.
Color-rendering indices in global illumination methods
NASA Astrophysics Data System (ADS)
Geisler-Moroder, David; Dür, Arne
2009-10-01
Human perception of material colors depends heavily on the nature of the light sources that are used for illumination. One and the same object can cause highly different color impressions when lit by a vapor lamp or by daylight, respectively. On the basis of state-of-the-art colorimetric methods, we present a modern approach for the calculation of color-rendering indices (CRI), which were defined by the International Commission on Illumination (CIE) to characterize color reproduction properties of illuminants. We update the standard CIE method in three main points: first, we use the CIELAB color space; second, we apply a linearized Bradford transformation for chromatic adaptation; and finally, we evaluate color differences using the CIEDE2000 total color difference formula. Moreover, within a real-world scene, light incident on a measurement surface is composed of a direct and an indirect part. Neumann and Schanda [Proc. CGIV'06 Conf., Leeds, UK, pp. 283-286 (2006)] have shown for the cube model that diffuse interreflections can influence the CRI of a light source. We analyze how color-rendering indices vary in a real-world scene with mixed direct and indirect illumination and recommend the usage of a spectral rendering engine instead of an RGB-based renderer for reasons of accuracy of CRI calculations.
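The linearized Bradford chromatic adaptation mentioned above can be sketched as a von Kries-style scaling in the Bradford cone space. The matrix is the standard Bradford matrix; the sample value and white points in the example are illustrative, and this sketch omits the rest of the CRI pipeline (CIELAB conversion and CIEDE2000 differences):

    import numpy as np

    # Bradford cone-response matrix (standard values).
    M_BRADFORD = np.array([
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ])

    def bradford_adapt(xyz, white_src, white_dst):
        """Linearized Bradford chromatic adaptation of a tristimulus value.

        xyz, white_src, white_dst : XYZ triples (sample, source white, target white)
        """
        rgb_src = M_BRADFORD @ np.asarray(white_src, float)
        rgb_dst = M_BRADFORD @ np.asarray(white_dst, float)
        scale = np.diag(rgb_dst / rgb_src)                  # von Kries-style gains
        adapt = np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD
        return adapt @ np.asarray(xyz, float)

    # Example: adapt a sample from illuminant A to D65 (illustrative whites).
    print(bradford_adapt([30.0, 25.0, 10.0],
                         white_src=[109.85, 100.0, 35.58],    # CIE A
                         white_dst=[95.047, 100.0, 108.883])) # D65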
A two-metric proposal to specify the color-rendering properties of light sources for retail lighting
NASA Astrophysics Data System (ADS)
Freyssinier, Jean Paul; Rea, Mark
2010-08-01
Lighting plays an important role in supporting retail operations, from attracting customers, to enabling the evaluation of merchandise, to facilitating the completion of the sale. Lighting also contributes to the identity, comfort, and visual quality of a retail store. With the increasing availability and quality of white LEDs, retail lighting specifiers are now considering LED lighting in stores. The color rendering of light sources is a key factor in supporting retail lighting goals and thus influences a light source's acceptance by users and specifiers. However, there is limited information on what consumers' color preferences are, and metrics used to describe the color properties of light sources often are equivocal and fail to predict preference. The color rendering of light sources is described in the industry solely by the color rendering index (CRI), which is only indirectly related to human perception. CRI is intended to characterize the appearance of objects illuminated by the source and is increasingly being challenged because new sources are being developed with increasingly exotic spectral power distributions. This paper discusses how CRI might be augmented to better use it in support of the design objectives for retail merchandising. The proposed guidelines include the use of gamut area index as a complementary metric to CRI for assuring good color rendering.
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
Plenoptic layer-based modeling for image based rendering.
Pearson, James; Brookes, Mike; Dragotti, Pier Luigi
2013-09-01
Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
ProteinShader: illustrative rendering of macromolecules
Weber, Joseph R
2009-01-01
Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space filling or balls and sticks style models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processor unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
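As an illustration of the spherical linear interpolation used for the smoothly rotating tubes and ribbons, here is a generic quaternion slerp sketch in Python (ProteinShader itself is Java/GLSL); the example quaternions are made up and nothing here is taken from the program's source:

    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between two unit quaternions."""
        q0 = q0 / np.linalg.norm(q0)
        q1 = q1 / np.linalg.norm(q1)
        dot = np.dot(q0, q1)
        if dot < 0.0:                 # take the short arc
            q1, dot = -q1, -dot
        if dot > 0.9995:              # nearly parallel: fall back to normalized lerp
            out = q0 + t * (q1 - q0)
            return out / np.linalg.norm(out)
        theta = np.arccos(dot)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    q_start = np.array([1.0, 0.0, 0.0, 0.0])                            # identity
    q_end = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])  # 90 deg about y
    print(slerp(q_start, q_end, 0.5))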
Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.
Kim, Yeongmi; Harders, Matthias; Gassert, Roger
2015-01-01
Delivering distance information about nearby obstacles from sensors embedded in a white cane, in addition to the intrinsic mechanical feedback from the cane, can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods with temporal variation only, spatio-temporal variation and spatial/temporal/intensity variation were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in a high distance identification rate. Further, participants expressed their preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods with spatio-temporal variation and spatial/temporal/intensity variation for the four-finger vibration could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.
How colorful! A feature it is, isn't it?
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz
2015-01-01
A display's color subpixel geometry provides an intriguing opportunity for improving the readability of text. TrueType fonts can be positioned at the precision of subpixel resolution. With such a constraint in mind, how does one need to design font characteristics? On the other hand, display manufacturers try hard to address the color display's dilemma: smaller pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component subsampling may save about 1/3 of color subpixels (and reduce power dissipation). This talk will try to elaborate on the following questions, based on simulation of several different layouts of subpixel matrices: Up to what level are display device constraints compatible with software-specific ideas of rendering text? How much of the color contrast will remain? How can the preferred viewing distance best be considered for readability of text? How much does visual acuity vary at 20/20 vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How linear is human visual contrast perception around the band limit of a display's spatial resolution? How colorful does the rendered text appear on the screen? How much does viewing angle influence the performance of subpixel layouts and color subpixel rendering?
Volumetric depth peeling for medical image display
NASA Astrophysics Data System (ADS)
Borland, David; Clarke, John P.; Fielding, Julia R.; TaylorII, Russell M.
2006-01-01
Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
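A highly simplified, one-ray sketch of the peeling idea, i.e., skipping everything up to and including the first occluding layer before normal front-to-back compositing begins. The occlusion mask, transfer functions, and thresholds are illustrative assumptions and do not reproduce the actual VDP algorithm:

    import numpy as np

    def vdp_composite(samples, occluder, tf_color, tf_alpha):
        """Front-to-back compositing of one ray with a depth-peeling style skip.

        samples  : voxel intensities along the ray (front to back)
        occluder : boolean mask, True where a separate occlusion test (independent
                   of the rendering transfer function) flags an occluding structure
        tf_color, tf_alpha : rendering transfer functions; names are illustrative.
        """
        entered, exited = False, False
        color, alpha = 0.0, 0.0
        for s, occ in zip(samples, occluder):
            if not exited:
                if occ:
                    entered = True            # inside the first occluding layer
                elif entered:
                    exited = True             # came out the far side: start rendering
                if not exited:
                    continue                  # everything up to here is peeled away
            a = tf_alpha(s)
            color += (1.0 - alpha) * a * tf_color(s)
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:
                break
        return color, alpha

    ray = np.linspace(0.0, 1.0, 10)
    occl = ray < 0.35                         # a slab occluding the front of the ray
    print(vdp_composite(ray, occl, tf_color=lambda s: s, tf_alpha=lambda s: 0.3 * s))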
Display gamma is an important factor in Web image viewing
NASA Astrophysics Data System (ADS)
Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon
2001-06-01
We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used had been found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting and disliked images rendered with a gamma value not matching their display's. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
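A minimal sketch of how an image can be pre-compensated for a given display gamma, covering the 1.8 to 3.0 range estimated in the study; the function name and the simple power-law display model are assumptions for illustration:

    import numpy as np

    def render_for_gamma(linear_image, display_gamma):
        """Encode a linear-light image for a display with the given gamma.

        A display applies roughly out = in ** gamma, so we pre-compensate with the
        inverse exponent.
        """
        return np.clip(linear_image, 0.0, 1.0) ** (1.0 / display_gamma)

    ramp = np.linspace(0.0, 1.0, 5)          # a simple linear-light gray ramp
    for g in (1.8, 2.2, 3.0):                # candidate display gammas
        print(f"gamma {g}:", np.round(render_for_gamma(ramp, g), 3))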
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
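The peer-before-disk caching policy described above can be sketched with a small LRU brick cache. The class and the fetch callbacks are hypothetical stand-ins for the cluster's network and I/O layers, not the system's actual code:

    from collections import OrderedDict

    class BrickCache:
        """LRU cache of volume bricks with a peer-before-disk miss policy (sketch)."""

        def __init__(self, capacity, fetch_from_peer, fetch_from_disk):
            self.capacity = capacity
            self.fetch_from_peer = fetch_from_peer    # placeholder for the network layer
            self.fetch_from_disk = fetch_from_disk    # placeholder for local disk I/O
            self.bricks = OrderedDict()               # brick_id -> voxel data

        def get(self, brick_id):
            if brick_id in self.bricks:               # local cache hit
                self.bricks.move_to_end(brick_id)
                return self.bricks[brick_id]
            data = self.fetch_from_peer(brick_id)     # ask other cluster nodes first
            if data is None:
                data = self.fetch_from_disk(brick_id) # last resort: local disk
            self.bricks[brick_id] = data
            if len(self.bricks) > self.capacity:
                self.bricks.popitem(last=False)       # evict least recently used
            return data

    cache = BrickCache(capacity=2,
                       fetch_from_peer=lambda bid: None,            # peers never have it here
                       fetch_from_disk=lambda bid: f"voxels:{bid}")
    print(cache.get("brick_7"), cache.get("brick_7"))               # miss, then hit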
10. Historic photo of rendering of rocket engine test facility ...
10. Historic photo of rendering of rocket engine test facility complex, April 28, 1964. On file at NASA Plumbrook Research Center, Sandusky, Ohio. NASA GRC photo number C-69472. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH
47 CFR 101.1417 - Annual report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual report. Each MVDDS licensee shall file with the Broadband Division of the Wireless... the calendar year; (2) The total hours of transmission service rendered during the calendar year to all subscribers; (3) The total hours of transmission service rendered during the calendar year...
40 CFR 432.100 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AND POULTRY PRODUCTS POINT SOURCE CATEGORY Renderers § 432.100 Applicability. This part applies to discharges of process wastewater resulting from the production of meat meal, dried animal by-product residues (tankage), animal oils, grease and tallow, and in some cases hide curing, by a renderer. ...
Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan
1996-01-01
A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.
Validation of Thermal Lethality against Salmonella enterica in Poultry Offal during Rendering.
Jones-Ibarra, Amie-Marie; Acuff, Gary R; Alvarado, Christine Z; Taylor, T Matthew
2017-09-01
Recent outbreaks of human disease following contact with companion animal foods cross-contaminated with enteric pathogens, such as Salmonella enterica, have resulted in increased concern regarding the microbiological safety of animal foods. Additionally, the U.S. Food and Drug Administration Food Safety Modernization Act and its implementing rules have stipulated the implementation of current good manufacturing practices and food safety preventive controls for livestock and companion animal foods. Animal foods and feeds are sometimes formulated to include thermally rendered animal by-product meals. The objective of this research was to determine the thermal inactivation of S. enterica in poultry offal during rendering at differing temperatures. Raw poultry offal was obtained from a commercial renderer and inoculated with a mixture of Salmonella serovars Senftenberg, Enteritidis, and Gallinarum (an avian pathogen) prior to being subjected to heating at 150, 155, or 160°F (65.5, 68.3, or 71.1°C) for up to 15 min. Following heat application, surviving Salmonella bacteria were enumerated. Mean D-values for the Salmonella cocktail at 150, 155, and 160°F were 0.254 ± 0.045, 0.172 ± 0.012, and 0.086 ± 0.004 min, respectively, indicative of increasing susceptibility to increased application of heat during processing. The mean thermal process constant (z-value) was 21.948 ± 3.87°F. Results indicate that a 7.0-log-cycle inactivation of Salmonella may be obtained from the cumulative lethality encountered during the heating come-up period and subsequent rendering of raw poultry offal at temperatures not less than 150°F. Current poultry rendering procedures are anticipated to be effective for achieving necessary pathogen control when completed under sanitary conditions.
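As a quick check of the reported kinetics, the time needed for the stated 7.0-log reduction follows directly from the published D-values (t = n x D under first-order inactivation); this small calculation ignores the additional come-up-period lethality the authors also credit:

    # Time for a 7-log reduction at each temperature: t = n_log * D (first-order kinetics).
    d_values_min = {150: 0.254, 155: 0.172, 160: 0.086}   # deg F -> D-value (min), from the study
    n_log = 7.0
    for temp_f, d in d_values_min.items():
        print(f"{temp_f} F: {n_log * d:.2f} min for a {n_log:.0f}-log reduction")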
Research on techniques for computer three-dimensional simulation of satellites and night sky
NASA Astrophysics Data System (ADS)
Yan, Guangwei; Hu, Haitao
2007-11-01
To study space attack-defense technology, a simulation of satellites is needed. We design and implement a 3D satellite simulation system in which the satellites are rendered against a night-sky background. The system structure is as follows: one computer simulates the satellite orbits, and the other computers render the 3D simulation scene. To achieve a realistic effect, a three-channel multi-projector display system is constructed. We use MultiGen Creator to construct the satellite and star models and MultiGen Distributed Vega to render the three-channel scene. There are one master and three slaves; the master controls the three slaves, which render the three channels separately. To obtain the satellites' positions and attitudes, the master communicates with the satellite orbit simulator over TCP/IP. It then calculates the observer's position, the satellites' positions, and the positions of the moon and the sun, and transmits the data to the slaves. To obtain a smooth orbit for the target satellites, an orbit prediction method is used. Because the target satellite data packets and the attack satellite data packets cannot stay synchronized on the network, a target-satellite dithering phenomenon occurs when the scene is rendered; to resolve this, an anti-dithering algorithm is designed. To render the night-sky background, a file storing the stars' positions and brightness data is used. According to its brightness, each star is classified into a magnitude, the star model is scaled according to the magnitude, and all the stars are distributed on a celestial sphere. Experiments show that the whole system runs correctly and the frame rate can reach 30 Hz. The system can be used in space attack-defense simulation.
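The magnitude-based star scaling mentioned above can be sketched with the standard Pogson brightness relation; the specific mapping from brightness to model scale is an illustrative assumption, not the paper's formula:

    def relative_brightness(magnitude, reference_magnitude=0.0):
        """Pogson relation: every 5 magnitudes is a factor of 100 in flux."""
        return 10.0 ** (-0.4 * (magnitude - reference_magnitude))

    def star_model_scale(magnitude, base_scale=1.0):
        """Map brightness to a model scale factor (illustrative mapping only)."""
        return base_scale * relative_brightness(magnitude) ** 0.25

    for m in (0, 1, 3, 6):
        print(f"magnitude {m}: brightness {relative_brightness(m):.3f}, "
              f"scale {star_model_scale(m):.3f}")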
36. Photocopy of detail of ink and wash rendering by ...
36. Photocopy of detail of ink and wash rendering by N. G. Starkwether in collection of Mr. & Mrs. Richard T. Pratt, Camden ELEVATIONS OF ITALIAN VILLA FOR WILLIAM C. PRATT - CAMDEN PLACE - DRIVE FRONT - Camden, Rappahannock River, Port Royal, Caroline County, VA
35. Photocopy of detail of ink and wash rendering by ...
35. Photocopy of detail of ink and wash rendering by N. G. Starkwether in collection of Mr. & Mrs. Richard T. Pratt, Camden ELEVATIONS OF ITALIAN VILLA FOR WILLIAM C. PRATT - CAMDEN PLACE - RIVER FRONT - Camden, Rappahannock River, Port Royal, Caroline County, VA
11. Historic photo of cutaway rendering of rocket engine test ...
11. Historic photo of cutaway rendering of rocket engine test facility complex, June 11, 1965. On file at NASA Plumbrook Research Center, Sandusky, Ohio. NASA GRC photo number C-74433. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH
49 CFR 511.78 - Prohibited communications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the date of issuance of a complaint and ending upon final NHTSA action in the matter. (b) Definitions. (1) “Decision-maker” means those NHTSA personnel who render decisions in adjudicative proceedings under this part, or who advise officials who render such decisions, including: (i) The Administrator...
7 CFR 54.1016 - Advance information concerning service rendered.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Advance information concerning service rendered. 54..., Processing, and Packaging of Livestock and Poultry Products § 54.1016 Advance information concerning service... applicant under the regulations, or other notification concerning the determination of compliance of...
Three-Dimensional Reconstruction of Thoracic Structures: Based on Chinese Visible Human
Luo, Na; Tan, Liwen; Fang, Binji; Li, Ying; Xie, Bing; Liu, Kaijun; Chu, Chun; Li, Min
2013-01-01
We established a three-dimensional digitized visible model of human thoracic structures to provide morphological data for imaging diagnosis and for thoracic and cardiovascular surgery. With Photoshop software, the contour lines of the lungs and mediastinal structures, including the heart, aorta and its branches, azygos vein, superior vena cava, inferior vena cava, thymus, esophagus, diaphragm, phrenic nerve, vagus nerve, sympathetic trunk, thoracic vertebrae, sternum, thoracic duct, and so forth, were segmented from the Chinese Visible Human (CVH)-1 data set. The contour data set of the segmented thoracic structures was imported into Amira software, and 3D thorax models were reconstructed via surface rendering and volume rendering. The surface-rendered and volume-rendered models of the thoracic organs can be displayed together clearly and accurately. The result provides a learning tool for interpreting human thoracic anatomy and for virtual thoracic and cardiovascular surgery for medical students and junior surgeons. PMID:24369489
Vertex shading of the three-dimensional model based on ray-tracing algorithm
NASA Astrophysics Data System (ADS)
Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan
2016-10-01
Ray tracing is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technique in many industries that work with three-dimensional (3D) content, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. The rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve the rendering efficiency. Moreover, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, effects equivalent to per-pixel shading can be obtained. In practice, a compromise can be made between efficiency and effectiveness.
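A small sketch of brightness-driven adaptive triangle subdivision in the spirit described above: a triangle is split into four children until the brightness difference across its vertices falls below a threshold or a depth limit is reached. The shade function, threshold, and depth limit are illustrative assumptions:

    def midpoint(p, q):
        return tuple((a + b) / 2.0 for a, b in zip(p, q))

    def subdivide(tri, shade, threshold, depth=0, max_depth=6):
        """Recursively split a triangle while its vertex brightness differs too much.

        tri       : three vertex positions
        shade     : function returning the brightness at a vertex (stands in for
                    the per-vertex ray-tracing step)
        threshold : maximum allowed brightness difference across the triangle
        """
        values = [shade(v) for v in tri]
        if depth >= max_depth or max(values) - min(values) <= threshold:
            return [tri]                          # flat enough: keep as one triangle
        a, b, c = tri
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        tris = []
        for child in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
            tris.extend(subdivide(child, shade, threshold, depth + 1, max_depth))
        return tris

    # Example with a toy shading function (brightness falls off along x).
    tris = subdivide(((0, 0, 0), (1, 0, 0), (0, 1, 0)),
                     shade=lambda v: 1.0 - v[0], threshold=0.2)
    print(len(tris), "triangles after adaptive subdivision")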
Radiometric spectral and band rendering of targets using anisotropic BRDFs and measured backgrounds
NASA Astrophysics Data System (ADS)
Hilgers, John W.; Hoffman, Jeffrey A.; Reynolds, William R.; Jafolla, James C.
2000-07-01
Achievement of ultra-high fidelity signature modeling of targets requires a significant level of complexity for all of the components required in the rendering process. Specifically, the reflectance of the surface must be described using the bidirectional reflectance distribution function (BRDF). In addition, the spatial representation of the background must be high fidelity. A methodology and corresponding model for spectral and band rendering of targets using both isotropic and anisotropic BRDFs is presented. In addition, a set of tools is described for generating theoretical anisotropic BRDFs and for reducing the data required to describe an anisotropic BRDF by 5 orders of magnitude. The methodology is hybrid, using a spectrally measured panorama of the background mapped to a large hemisphere. Both radiosity and ray-tracing approaches are incorporated simultaneously for a robust solution. In the thermal domain the spectral emission is also included in the solution. Rendering examples using several BRDFs are presented.
A laparoscopy-based method for BRDF estimation from in vivo human liver.
Nunes, A L P; Maciel, A; Cavazzola, L T; Walter, M
2017-01-01
While improved visual realism is known to enhance training effectiveness in virtual surgery simulators, advances in realistic rendering for these simulators are slower than for similar simulations of man-made scenes. One of the main reasons for this is that in vivo data is hard to gather and process. In this paper, we propose the analysis of videolaparoscopy data to compute the Bidirectional Reflectance Distribution Function (BRDF) of living organs as an input to physically based rendering algorithms. From the interplay between light and organic matter recorded in video images, we propose the definition of a process capable of establishing the BRDF for inside-the-body organic surfaces. We present a case study around the liver with patient-specific rendering under global illumination. Results show that despite the limited range of motion allowed within the body, the computed BRDF provides high coverage of the sampled regions and produces plausible renderings. Copyright © 2016 Elsevier B.V. All rights reserved.
Space-time light field rendering.
Wang, Huamin; Sun, Mingxuan; Yang, Ruigang
2007-01-01
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.
Abrahamsson, Erik; Plotkin, Steven S
2009-09-01
Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows for user-defined settings and textures, and runs on either Windows or Linux platforms.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor in achieving high-performance haptic rendering, and it imposes a stringent real-time requirement on the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computational cost of the haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. A high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system using the Delta Haptic Device.
Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B
2007-05-01
The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.
Acoustic Holographic Rendering with Two-dimensional Metamaterial-based Passive Phased Array
Xie, Yangbo; Shen, Chen; Wang, Wenqi; Li, Junfei; Suo, Dingjie; Popa, Bogdan-Ioan; Jing, Yun; Cummer, Steven A.
2016-01-01
Acoustic holographic rendering, in complete analogy with optical holography, is useful for various applications, ranging from multi-focal lensing and multiplexed sensing to synthesizing complex three-dimensional sound fields. Conventional approaches rely on a large number of active transducers and phase shifting circuits. In this paper we show that by using passive metamaterials as subwavelength pixels, holographic rendering can be achieved without cumbersome circuitry and with only a single transducer, thus significantly reducing system complexity. Such metamaterial-based holograms can serve as versatile platforms for various advanced acoustic wave manipulation and signal modulation, leading to new possibilities in acoustic sensing, energy deposition and medical diagnostic imaging. PMID:27739472
[Registration and 3D rendering of serial tissue section images].
Liu, Zhexing; Jiang, Guiping; Dong, Wu; Zhang, Yu; Xie, Xiaomian; Hao, Liwei; Wang, Zhiyuan; Li, Shuxiang
2002-12-01
Reconstructing 3D images from serial tissue section images is an important morphological research method, and registration of the serial images is a key step in 3D reconstruction. First, an introduction to the segmentation-counting registration algorithm, which is based on the joint histogram, is presented. After thresholding of the two images to be registered, the criterion function is defined as a count within a specific region of the joint histogram, which greatly speeds up the alignment process. The method is then used for the serial tissue image matching task and lays a solid foundation for 3D rendering. Finally, preliminary surface rendering results are presented.
7 CFR 54.1016 - Advance information concerning service rendered.
Code of Federal Regulations, 2014 CFR
2014-01-01
... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...
7 CFR 54.1016 - Advance information concerning service rendered.
Code of Federal Regulations, 2013 CFR
2013-01-01
... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...
7 CFR 54.1016 - Advance information concerning service rendered.
Code of Federal Regulations, 2011 CFR
2011-01-01
... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...
7 CFR 54.1016 - Advance information concerning service rendered.
Code of Federal Regulations, 2012 CFR
2012-01-01
... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...
49 CFR 178.980 - Stacking test.
Code of Federal Regulations, 2013 CFR
2013-10-01
... for transportation and no loss of contents. (2) For fiberboard or wooden Large Packagings, there may be no loss of contents and no permanent deformation that renders the whole Large Packaging, including... deterioration which renders the Large Packaging unsafe for transportation and no loss of contents. (4) For the...
49 CFR 178.980 - Stacking test.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Packaging unsafe for transportation and no loss of contents. (2) For flexible Large Packagings, there may be no deterioration which renders the Large Packaging unsafe for transportation and no loss of contents... required load, there is no permanent deformation to the Large Packaging which renders the whole Large...
49 CFR 178.980 - Stacking test.
Code of Federal Regulations, 2014 CFR
2014-10-01
... for transportation and no loss of contents. (2) For fiberboard or wooden Large Packagings, there may be no loss of contents and no permanent deformation that renders the whole Large Packaging, including... deterioration which renders the Large Packaging unsafe for transportation and no loss of contents. (4) For the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an individual. Voice... analysis, whether or not an opinion on honesty or dishonesty is specifically rendered. (2) The term lie... tests commonly referred to as “honesty” or “paper and pencil” tests, machine-scored or otherwise; and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an individual. Voice... analysis, whether or not an opinion on honesty or dishonesty is specifically rendered. (2) The term lie... tests commonly referred to as “honesty” or “paper and pencil” tests, machine-scored or otherwise; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-07
... association. Two of the commenters expressed concerns about farm animal welfare and general dissatisfaction... DEPARTMENT OF AGRICULTURE Animal and Plant Health Inspection Service 9 CFR Part 71 [Docket No... Rendering Establishments AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Final rule...
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance, by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), and this enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual or numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also had an improved performance when compared to the conventional method of down-sampling of the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
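The adaptive binning step can be sketched with a simple one-dimensional k-means over voxel intensities; the cluster count, the toy volume, and the plain Lloyd iteration below are illustrative assumptions and omit the GPU/MRT histogram computation described above:

    import numpy as np

    def adaptive_bins(intensities, k, iters=20, seed=0):
        """Group voxel intensities into k adaptive bins with 1-D k-means (sketch).

        Returns the bin (cluster) index of every voxel and the bin centers; this
        stands in for the clustering step of the AB-VH.
        """
        rng = np.random.default_rng(seed)
        data = np.asarray(intensities, float).ravel()
        centers = rng.choice(data, size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = data[labels == j].mean()
        return labels, centers

    # Example: a synthetic volume with a wide intensity range collapsed into 8 bins.
    volume = np.random.default_rng(1).normal(loc=[0, 500, 2000], scale=50,
                                             size=(1000, 3)).ravel()
    labels, centers = adaptive_bins(volume, k=8)
    print("bin centers:", np.round(np.sort(centers), 1))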
Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering
Stone, John E.; Sherman, William R.; Schulten, Klaus
2016-01-01
Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization; however, round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, which are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
Kahrs, Lüder Alexander; Labadie, Robert Frederick
2013-01-01
Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering using CT and/or MRI helps in understanding spatial relationships, but such renderings suffer from nonrealistic depictions, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render them in realistic color, could overcome this limitation and be a very effective teaching tool. With recent availability of specialized public-domain software, volume rendering of true-color, histological data sets is now possible. We present both the feasibility of this approach and step-by-step instructions for processing publicly available data sets (the Visible Female Human and the Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods in virtual exploration of the complex anatomy of the temporal bone. In our exploration of the data sets, the Visible Ear appeared more natural than the Visible Human. We provide directions for easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-education in the spatial relationships of anatomical structures inside the human temporal bone and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation.
Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I
NASA Astrophysics Data System (ADS)
Gonthier, David L.; Veron, Harry
1998-04-01
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing fast, high-quality graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare is used with the Righteous 3D graphics board from Orchid Technologies, with an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.
Bio-inspired color image enhancement
NASA Astrophysics Data System (ADS)
Meylan, Laurence; Susstrunk, Sabine
2004-06-01
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique. Either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
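To make the idea of a single filter on the luminance channel concrete, here is a minimal Python sketch of a single-scale, Retinex-like local adaptation using NumPy and SciPy; the fixed Gaussian width and the normalization are assumptions of this sketch, whereas the paper derives its parameters from the image itself.

```python
# Minimal sketch of Retinex-inspired local tone mapping on the luminance channel.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(rgb, sigma=30.0, eps=1e-6):
    """rgb: float array in [0, 1], shape (H, W, 3)."""
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    surround = gaussian_filter(lum, sigma)            # local adaptation level (the single filter)
    new_lum = np.log1p(lum) - np.log1p(surround)      # single-scale Retinex-style ratio in log space
    new_lum = (new_lum - new_lum.min()) / (new_lum.max() - new_lum.min() + eps)
    ratio = new_lum / (lum + eps)                     # rescale the colour channels by the luminance change
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

image = np.random.rand(480, 640, 3)                   # placeholder for a captured HDR-like image
enhanced = enhance(image)
```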
Code of Federal Regulations, 2010 CFR
2010-07-01
... Government of the FRY (S&M) to aircraft authorized; aircraft and maritime safety. 586.515 Section 586.515... services rendered by the Government of the FRY (S&M) to aircraft authorized; aircraft and maritime safety... maritime traffic in international waters. ...
Code of Federal Regulations, 2010 CFR
2010-04-01
... service rendered to the customers of a taxpayer who is not in the trade or business of rendering... defined in paragraph (a) of this section) with respect to which the taxpayer establishes, to the satisfaction of the Commissioner or his delegate, that the resulting removal of any such barrier conforms a...
LOD-Sprite Technique for Accelerated Terrain Rendering
1999-01-01
includes limited parallax, is possible. Another category samples the full plenoptic function, resulting in 3D, 4D or even 5D image sprites [13, 10... Plenoptic modeling: An image-based rendering system. Computer Graphics (Proc. SIGGRAPH '95), pages 39–46, 1995. [19] P. Rademacher and G. Bishop
Computational Video for Collaborative Applications
2003-03-01
Plenoptic Modeling: An Image-Based Rendering System.” SIGGRAPH 95, 39-46. [18] McMillan, L. An Image-Based Approach to Three-Dimensional Computer... Plenoptic modeling and rendering from image sequences taken by hand-held camera. Proc. DAGM 99, pages 94–101. [8] Y. Horry, K. Anjyo, and K. Arai
Method of making nanostructured glass-ceramic waste forms
Gao, Huizhen; Wang, Yifeng; Rodriguez, Mark A.; Bencoe, Denise N.
2012-12-18
A method of rendering hazardous materials less dangerous comprising trapping the hazardous material in nanopores of a nanoporous composite material, reacting the trapped hazardous material to render it less volatile/soluble, sealing the trapped hazardous material, and vitrifying the nanoporous material containing the less volatile/soluble hazardous material.
38 CFR 17.125 - Where to file claims.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: 38 U.S.C. 7304) (b) For services rendered in the Philippines. Claims for the expenses of care or services rendered in the Republic of the Philippines should be filed with the Department of Veterans Affairs Outpatient Clinic (358/00), 2201 Roxas Blvd., Pasay City, 1300, Republic of the Philippines. (c...
38 CFR 17.125 - Where to file claims.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: 38 U.S.C. 7304) (b) For services rendered in the Philippines. Claims for the expenses of care or services rendered in the Republic of the Philippines should be filed with the Department of Veterans Affairs Outpatient Clinic (358/00), 2201 Roxas Blvd., Pasay City, 1300, Republic of the Philippines. (c...
38 CFR 17.125 - Where to file claims.
Code of Federal Regulations, 2012 CFR
2012-07-01
...: 38 U.S.C. 7304) (b) For services rendered in the Philippines. Claims for the expenses of care or services rendered in the Republic of the Philippines should be filed with the Department of Veterans Affairs Outpatient Clinic (358/00), 2201 Roxas Blvd., Pasay City, 1300, Republic of the Philippines. (c...
38 CFR 17.125 - Where to file claims.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: 38 U.S.C. 7304) (b) For services rendered in the Philippines. Claims for the expenses of care or services rendered in the Republic of the Philippines should be filed with the Department of Veterans Affairs Outpatient Clinic (358/00), 2201 Roxas Blvd., Pasay City, 1300, Republic of the Philippines. (c...
38 CFR 17.125 - Where to file claims.
Code of Federal Regulations, 2011 CFR
2011-07-01
...: 38 U.S.C. 7304) (b) For services rendered in the Philippines. Claims for the expenses of care or services rendered in the Republic of the Philippines should be filed with the Department of Veterans Affairs Outpatient Clinic (358/00), 2201 Roxas Blvd., Pasay City, 1300, Republic of the Philippines. (c...
27 CFR 19.996 - Withdrawal of spirits.
Code of Federal Regulations, 2010 CFR
2010-04-01
... alcohol fuel plant, they must be rendered unfit for beverage use as provided in this subpart. Spirits rendered unfit for beverage use (fuel alcohol) may be withdrawn free of tax from plant premises exclusively... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Withdrawal of spirits. 19...
40 CFR 164.91 - Accelerated decision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... decision. (a) General. The Administrative Law Judge, in his discretion, may at any time render an accelerated decision in favor of Respondent as to all or any portion of the proceeding, including dismissal... matter of law; or (8) Such other and further reasons as are just. (b) Effect. A decision rendered under...
9 CFR 319.703 - Rendered animal fat or mixture thereof.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Rendered animal fat or mixture thereof. 319.703 Section 319.703 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Fats, Oils, Shortenings...
9 CFR 319.703 - Rendered animal fat or mixture thereof.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Rendered animal fat or mixture thereof. 319.703 Section 319.703 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Fats, Oils, Shortenings...
69. PHOTOCOPY OF RENDERING OF PROPOSED OPEN VALLEY TREATMENT OF ...
69. PHOTOCOPY OF RENDERING OF PROPOSED OPEN VALLEY TREATMENT OF P STREET BEND, FROM U.S. CONGRESS. HOUSE. REPORT OF THE ROCK CREEK AND POTOMAC PARKWAY COMMISSION, 1916. HOUSE DOC. No. 1114, 64th CONG. 1st SESS. - Rock Creek & Potomac Parkway, Washington, District of Columbia, DC
A Comparative Study between U.S. and Brazilian Acquisition Regulations and Practices
2011-03-01
to describe continuous improvement efforts (Render and Heizer, 2008). Caddick and Dale (1998) on their paper ‘The impact of quality management on...2002). Outsourcing in Edinburgh and the Lothians. European Journal of Purchasing and Supply Chain Management 8 (2) 83-95. Render, Barry; Heizer, Jay
Method of making nanostructured glass-ceramic waste forms
Gao, Huizhen; Wang, Yifeng; Rodriguez, Mark A.; Bencoe, Denise N.
2014-07-08
A waste form for and a method of rendering hazardous materials less dangerous is disclosed that includes fixing the hazardous material in nanopores of a nanoporous material, reacting the trapped hazardous material to render it less volatile/soluble, and vitrifying the nanoporous material containing the less volatile/soluble hazardous material.
GRACE-FO Spacecraft (Artist's Rendering)
2018-04-25
Artist's rendering of the twin spacecraft of the Gravity Recovery and Climate Experiment Follow-On (GRACE-FO) mission, scheduled to launch in May, 2018. GRACE-FO will track the evolution of Earth's water cycle by monitoring changes in the distribution of mass on Earth. https://photojournal.jpl.nasa.gov/catalog/PIA22431
5 CFR 5501.106 - Outside employment and other outside activities.
Code of Federal Regulations, 2011 CFR
2011-01-01
....C. 205, or from providing uncompensated advice or counsel to such person; or (C) Giving testimony... services by an employee, including the rendering of advice or consultation, which requires advanced... services means the provision of personal services by an employee, including the rendering of advice or...
Synthesis of Virtual Environments for Aircraft Community Noise Impact Studies
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Sullivan, Brenda M.
2005-01-01
A new capability has been developed for the creation of virtual environments for the study of aircraft community noise. It is applicable for use with both recorded and synthesized aircraft noise. When using synthesized noise, a three-stage process is adopted involving non-real-time prediction and synthesis stages followed by a real-time rendering stage. Included in the prediction-based source noise synthesis are temporal variations associated with changes in operational state, and low frequency fluctuations that are present under all operating conditions. Included in the rendering stage are the effects of spreading loss, absolute delay, atmospheric absorption, ground reflections, and binaural filtering. Results of prediction, synthesis and rendering stages are presented.
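A minimal sketch of some of the rendering-stage effects listed above (spherical spreading loss, absolute delay, and a simple frequency-independent atmospheric absorption) is given below in Python with NumPy; the coefficient values are placeholders and ground reflections and binaural filtering are omitted, so this is not the authors' implementation.

```python
# Minimal sketch of propagation effects applied to a synthesized source signal.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def render_propagation(signal, fs, distance_m, alpha_db_per_km=5.0, ref_distance_m=1.0):
    delay_samples = int(round(distance_m / SPEED_OF_SOUND * fs))   # absolute propagation delay
    spreading = ref_distance_m / max(distance_m, ref_distance_m)   # 1/r spherical spreading loss
    absorption = 10 ** (-alpha_db_per_km * (distance_m / 1000.0) / 20.0)  # placeholder absorption
    out = np.zeros(len(signal) + delay_samples)
    out[delay_samples:] = signal * spreading * absorption
    return out

# Example: a 1 s, 200 Hz tone heard from 500 m away.
fs = 8000
t = np.arange(fs) / fs
received = render_propagation(np.sin(2 * np.pi * 200 * t), fs, distance_m=500.0)
```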
Paul, G
2008-09-01
Extreme rates of premature death prior to the advent of modern medicine, very low rates of premature death in First World nations with low rates of prayer, and the least flawed of a large series of clinical trials indicate that remote prayer is not efficacious in treating illness. Mass contamination of sample cohorts renders such clinical studies inherently ineffectual. The required supernatural and paranormal mechanisms render them implausible. The possibility that the latter are not benign, and the potentially adverse psychological impact of certain protocols, renders these medical trials unethical. Resources should no longer be wasted on medical efforts to detect the supernatural and paranormal.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both the skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results for simulated human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
2011-01-01
Combined surface, structural, and opto-electrical investigations are reported for chemically fashioned ZnO nanotubes and their heterostructure with a p-GaN film. A strong correlation has been found between the formation of radiative surface defect states in the nanotubes and the pure cool white light emission, which has an averaged (eight-color) color rendering index value of 96 with an appropriate color temperature. A deep-red color rendering index value of >95 has been realized, which enables natural and vivid colors to be rendered and reproduced accurately. Diverse types of deep defect states and their relative contributions to the corresponding wavelengths in the broad emission band are suggested. PMID:21878100
Susceptibility of ATM-deficient pancreatic cancer cells to radiation.
Ayars, Michael; Eshleman, James; Goggins, Michael
2017-05-19
Ataxia telangiectasia mutated (ATM) is inactivated in a significant minority of pancreatic ductal adenocarcinomas and may be a predictor of treatment response. We determined if ATM deficiency renders pancreatic cancer cells more sensitive to fractionated radiation or commonly used chemotherapeutics. ATM expression was knocked down in three pancreatic cancer cell lines using ATM-targeting shRNA. Isogenic cell lines were tested for sensitivity to several chemotherapeutic agents and radiation. DNA repair kinetics were analyzed in irradiated cells using the comet assay. We find that while rendering pancreatic cancer cells ATM-deficient did not significantly change their sensitivity to several chemotherapeutics, it did render them exquisitely sensitive to radiation. Pancreatic cancer ATM status may help predict response to radiotherapy.
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-01-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the computing power of current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
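The core of DRR generation is a line integral of attenuation through the CT volume. The following CPU-only Python sketch uses NumPy, parallel projection and a simple Hounsfield-to-attenuation mapping to illustrate that idea; the paper's pipeline instead performs ray casting on the GPU with CUDA and OpenGL interoperability, so this is only an approximation of the concept.

```python
# Minimal sketch of a digitally reconstructed radiograph by parallel projection.
import numpy as np

def drr_parallel(ct_hu, axis=0, mu_water=0.02, spacing_mm=1.0):
    """Convert Hounsfield units to linear attenuation and integrate along one axis (Beer-Lambert)."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)             # simple HU -> attenuation model (placeholder)
    mu = np.clip(mu, 0.0, None)
    path_integral = mu.sum(axis=axis) * spacing_mm      # line integral through the volume
    return np.exp(-path_integral)                       # transmitted intensity image

ct = np.random.randint(-1000, 1500, size=(128, 128, 128)).astype(np.float32)  # synthetic CT stand-in
image = drr_parallel(ct, axis=0)
```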
Code of Federal Regulations, 2011 CFR
2011-07-01
... machines. (3) All occupations involved in tankage or rendering of dead animals, animal offal, animal fats..., and hashing machines; and presses (except belly-rolling machines). Except, the provisions of this.... Rendering plants means establishments engaged in the conversion of dead animals, animal offal, animal fats...
16 CFR 1610.39 - Shipments under section 11(c) of the Act.
Code of Federal Regulations, 2014 CFR
2014-01-01
... duly authorized agent so as to render them not so highly flammable under the provisions of section 4 of.... 1610.39 Section 1610.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
16 CFR 1610.39 - Shipments under section 11(c) of the Act.
Code of Federal Regulations, 2010 CFR
2010-01-01
... duly authorized agent so as to render them not so highly flammable under the provisions of section 4 of.... 1610.39 Section 1610.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
16 CFR 1610.39 - Shipments under section 11(c) of the Act.
Code of Federal Regulations, 2011 CFR
2011-01-01
... duly authorized agent so as to render them not so highly flammable under the provisions of section 4 of.... 1610.39 Section 1610.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
16 CFR 1610.39 - Shipments under section 11(c) of the Act.
Code of Federal Regulations, 2012 CFR
2012-01-01
... duly authorized agent so as to render them not so highly flammable under the provisions of section 4 of.... 1610.39 Section 1610.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
Rendering Visible: Painting and Sexuate Subjectivity
ERIC Educational Resources Information Center
Daley, Linda
2015-01-01
In this essay, I examine Luce Irigaray's aesthetic of sexual difference, which she develops by extrapolating from Paul Klee's idea that the role of painting is to render the non-visible rather than represent the visible. This idea is the premise of her analyses of phenomenology and psychoanalysis and their respective contributions to understanding…
9 CFR 351.5 - Conditions of eligibility for certification service; review of applications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR...”); and such source materials will be rendered at the plant into technical animal fat eligible for export... the rendered technical animal fat described in paragraph (a)(1) will be identified and kept separated...
9 CFR 351.5 - Conditions of eligibility for certification service; review of applications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR...”); and such source materials will be rendered at the plant into technical animal fat eligible for export... the rendered technical animal fat described in paragraph (a)(1) will be identified and kept separated...
9 CFR 351.5 - Conditions of eligibility for certification service; review of applications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR...”); and such source materials will be rendered at the plant into technical animal fat eligible for export... the rendered technical animal fat described in paragraph (a)(1) will be identified and kept separated...
9 CFR 351.5 - Conditions of eligibility for certification service; review of applications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR...”); and such source materials will be rendered at the plant into technical animal fat eligible for export... the rendered technical animal fat described in paragraph (a)(1) will be identified and kept separated...
31 CFR 545.514 - Payments for services rendered by the Taliban to aircraft.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) SANCTIONS REGULATIONS Licenses, Authorizations and Statements of Licensing Policy § 545.514 Payments for services rendered by the Taliban to aircraft. (a) Specific licenses may be issued on a case-by-case basis.... (b) Specific licenses may be issued on a case-by-case basis for the exportation, reexportation, sale...
Potential Applicability of Just-In-Time Inventory Management Within the Navy.
1995-12-01
Naval Postgraduate School, Monterey, CA, 1992. 7. Heizer & Render, PRODUCTION AND OPERATION MANAGEMENT (3rd ED), Massachusetts, Simon & Schuster, Inc...applicable end item. If the failure of an item would render the end item inoperable, the item is assigned a Military Essentiality Code (MEC) of ŕ." If the
47 CFR 0.185 - Responsibilities of the bureaus and staff offices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Responsibilities of the bureaus and staff... Responsibilities of the bureaus and staff offices. The head of each of the bureaus and staff offices, in rendering... matters which relate to the functions of their respective bureaus or staff offices. (c) To render such...
47 CFR 0.185 - Responsibilities of the bureaus and staff offices.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Responsibilities of the bureaus and staff... Responsibilities of the bureaus and staff offices. The head of each of the bureaus and staff offices, in rendering... matters which relate to the functions of their respective bureaus or staff offices. (c) To render such...
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
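The input-output mapping described above can be sketched with an off-the-shelf radial basis function interpolator. In the Python example below (NumPy and SciPy), the 6-D descriptors and force samples are random placeholders for the recorded tool-contact data, and the kernel and smoothing choices are assumptions of the sketch rather than the authors' trained network.

```python
# Minimal sketch of a data-driven force model fitted with radial basis functions.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(500, 6))   # 6-D contact position + deformation-state descriptors
F_train = rng.normal(size=(500, 3))               # measured 3-D force responses (placeholder data)

force_model = RBFInterpolator(X_train, F_train, kernel="thin_plate_spline", smoothing=1e-3)

# Inside a haptic rendering loop: query the force for the current contact state.
current_state = np.zeros((1, 6))
predicted_force = force_model(current_state)      # shape (1, 3)
```

Because the input is expressed in the tool's local coordinate system, the same fitted model can in principle be reused across devices, which is the portability argument made in the abstract.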
Interactive Molecular Graphics for Augmented Reality Using HoloLens.
Müller, Christoph; Krone, Michael; Huber, Markus; Biener, Verena; Herr, Dominik; Koch, Steffen; Reina, Guido; Weiskopf, Daniel; Ertl, Thomas
2018-06-13
Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is the rendering speed. HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling and weight. Consequently, insights from best practices for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability for large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.
Westlund, Harold B.; Meyer, Gary W.; Hunt, Fern Y.
2002-01-01
Computer rendering is used to simulate the appearance of lighted objects for applications in architectural design, for animation and simulation in the entertainment industry, and for display and design in the automobile industry. Rapid advances in computer graphics technology suggest that in the near future it will be possible to produce photorealistic images of coated surfaces from scattering data. This could enable the identification of important parameters in the coatings manufacturing process that lead to desirable appearance, and to the design of virtual surfaces by visualizing prospective coating formulations once their optical properties are known. Here we report the results of our work to produce visually and radiometrically accurate renderings of selected appearance attributes of sample coated surfaces. It required changes in the rendering programs, which in general are not designed to accept high quality optical and material measurements, and changes in the optical measurement protocols. An outcome of this research is that some current ASTM standards can be replaced or enhanced by computer based standards of appearance. PMID:27446729
Semantic layers for illustrative volume rendering.
Rautek, Peter; Bruckner, Stefan; Gröller, Eduard
2007-01-01
Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles, the specification of the multi-dimensional transfer function becomes even more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetic. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles, replacing the traditional transfer function specification.
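A minimal sketch of the rule idea is given below in Python with NumPy; the membership-function shapes, thresholds and the single rule are illustrative assumptions, not the paper's rule base.

```python
# Minimal sketch of one "semantic layer": fuzzy memberships over volumetric attributes
# combined by a rule to drive a visual-style weight.
import numpy as np

def high(x, lo, hi):
    """Piecewise-linear 'high' membership: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def semantic_layer(density, gradient_magnitude):
    # Rule: IF density is high AND gradient magnitude is high THEN apply the contour style.
    mu_density = high(density, lo=0.3, hi=0.7)
    mu_gradient = high(gradient_magnitude, lo=0.2, hi=0.6)
    activation = np.minimum(mu_density, mu_gradient)   # fuzzy AND (minimum t-norm)
    return activation                                   # per-voxel weight of the visual style

density = np.random.rand(32, 32, 32)
gradmag = np.random.rand(32, 32, 32)
style_weight = semantic_layer(density, gradmag)
```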
Matching rendered and real world images by digital image processing
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software reveals a more or less visible mismatch between their respective image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, characterizing the PSF shows the amount of degradation added to any captured picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image quality. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter derived from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
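The final step, degrading the rendered image with a filter matched to the measured PSF, can be sketched as follows in Python with NumPy and SciPy; the Gaussian width is a placeholder for the value that would be derived from the slanted-edge MTF measurement.

```python
# Minimal sketch: blur a rendered image with a Gaussian approximation of the camera system PSF.
import numpy as np
from scipy.ndimage import gaussian_filter

def match_rendered_to_camera(rendered_rgb, psf_sigma_px=1.4):
    """Blur each channel of the rendered image with the measured Gaussian PSF width (in pixels)."""
    return np.stack(
        [gaussian_filter(rendered_rgb[..., c], psf_sigma_px) for c in range(rendered_rgb.shape[-1])],
        axis=-1,
    )

rendered = np.random.rand(480, 640, 3)                 # placeholder for a CGI frame
degraded = match_rendered_to_camera(rendered, psf_sigma_px=1.4)
```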
Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1999-01-01
Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.
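One common approach to the image assembly problem in parallel polygon rendering is sort-last depth compositing. The sketch below, assuming NumPy and mpi4py, illustrates that idea in general terms and is not a description of PGL's actual algorithm; the random colour and depth buffers stand in for each rank's locally rendered polygons, and the sequential gather-and-composite is a simplification of schemes such as binary swap.

```python
# Minimal sketch of sort-last image compositing across MPI ranks.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
H, W = 256, 256

# Pretend each rank rendered its subset of the geometry into local colour/depth buffers.
rng = np.random.default_rng(comm.rank)
local_color = rng.random((H, W, 3)).astype(np.float32)
local_depth = rng.random((H, W)).astype(np.float32)

def depth_composite(a, b):
    """Keep, per pixel, the fragment with the smaller depth value."""
    color_a, depth_a = a
    color_b, depth_b = b
    closer = depth_a <= depth_b
    return np.where(closer[..., None], color_a, color_b), np.where(closer, depth_a, depth_b)

# Gather everything on rank 0 and composite sequentially.
buffers = comm.gather((local_color, local_depth), root=0)
if comm.rank == 0:
    final_color, final_depth = buffers[0]
    for buf in buffers[1:]:
        final_color, final_depth = depth_composite((final_color, final_depth), buf)
```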
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. It is difficult and time-consuming to transfer medical images with large data sizes from a picture archiving and communication system to a mobile client, since the wireless network is unstable and bandwidth-limited. In addition, limited computing capability, memory, and battery endurance make it hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time, directly interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices on different platforms to access medical image post-processing, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling and pixel value retrieval) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it lets the mobile device access medical image post-processing services on the render server via a client application or a web page.
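As an illustration only, a request message in such a protocol might look like the snippet produced below; the element and attribute names are hypothetical, since the abstract does not reproduce the actual schema, and Python's standard ElementTree is used simply to build the XML.

```python
# Hypothetical render-request message in the spirit of the XML protocol described above.
import xml.etree.ElementTree as ET

request = ET.Element("RenderRequest", user="token-1234")                 # user authentication (assumed form)
series = ET.SubElement(request, "Series", studyUID="1.2.840...", seriesUID="1.2.840...")  # query/retrieval
post = ET.SubElement(request, "PostProcessing", type="3D")               # 3D post-processing request
ET.SubElement(post, "Operation", name="MIP", slabThicknessMM="15")       # e.g. maximum intensity projection
ET.SubElement(post, "Viewport", width="512", height="512")

xml_message = ET.tostring(request, encoding="unicode")
print(xml_message)  # would be sent to the render server; the reply would carry the rendered frame
```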
Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A
1999-08-01
Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are better represented by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found by prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces by using a haptic device (Falcon) only. No visual cues were provided for the experiment. The results indicate that our system provides sufficient performance to render discernible tactile differences between skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purpose of skin diagnosis, simulations, or training. Our system can also be used for other applications, such as virtual reality and cosmetics.
29 CFR 1610.15 - Schedule of fees and method of payment for services rendered.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Schedule of fees and method of payment for services... of fees and method of payment for services rendered. (a) Fees shall be assessed in accordance with... request is made by an educational or noncommercial scientific institution, or a representative of the news...
An economic analysis of localized pollution: rendering emissions in a residential setting
J. Michael Bowker; H.F. MacDonald
1991-01-01
The contingent value method is employed to estimate economic damages to households resulting from rendering plant emissions in a small town. Household willingness to accept (WTA) and willingness to pay (WTP) are estimated individually and in aggregate. The influence of household characteristics on WTP and WTA is examined via regression models. The perception of health...
16 CFR § 1610.39 - Shipments under section 11(c) of the Act.
Code of Federal Regulations, 2013 CFR
2013-01-01
... finished by the undersigned or by a duly authorized agent so as to render them not so highly flammable...§ 1610.39 Section § 1610.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS... finishing or processing to render them not so highly flammable as to be dangerous when worn by individuals...
16 CFR 1611.39 - Shipments under section 11(c) of the act.
Code of Federal Regulations, 2011 CFR
2011-01-01
... undersigned or by a duly authorized agent so as to render them not so highly flammable under the provisions of.... 1611.39 Section 1611.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
16 CFR 1611.39 - Shipments under section 11(c) of the act.
Code of Federal Regulations, 2014 CFR
2014-01-01
... undersigned or by a duly authorized agent so as to render them not so highly flammable under the provisions of.... 1611.39 Section 1611.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
16 CFR 1611.39 - Shipments under section 11(c) of the act.
Code of Federal Regulations, 2012 CFR
2012-01-01
... undersigned or by a duly authorized agent so as to render them not so highly flammable under the provisions of.... 1611.39 Section 1611.39 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... processing to render them not so highly flammable as to be dangerous when worn by individuals, shall contain...
27 CFR 19.983 - Spirits rendered unfit for beverage use in the production process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the quantity of fuel alcohol produced and multiplying the resulting figure by the proof of each lot of... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Spirits rendered unfit for beverage use in the production process. 19.983 Section 19.983 Alcohol, Tobacco Products and Firearms...
27 CFR 19.985 - Record of spirits rendered unfit for beverage use.
Code of Federal Regulations, 2010 CFR
2010-04-01
... beverage use and the quantity of fuel alcohol manufactured (which may be given in wine gallons). (Sec. 807... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Record of spirits rendered unfit for beverage use. 19.985 Section 19.985 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS MEAT... BOD5 and TSS specified in paragraph (a) of this section were derived for a renderer which does not cure cattle hide. If a renderer does cure hide, the following formulas should be used to calculate BOD5 and...
YaQ: an architecture for real-time navigation and rendering of varied crowds.
Maïm, Jonathan; Yersin, Barbara; Thalmann, Daniel
2009-01-01
The YaQ software platform is a complete system dedicated to real-time crowd simulation and rendering. Fitting multiple application domains, such as video games and VR, YaQ aims to provide efficient algorithms to generate crowds comprising up to thousands of varied virtual humans navigating in large-scale, global environments.
2010-12-01
industry will yield a different learning curve slope. Table 4 (Heizer & Render, PowerPoint presentation, 2008, slides 7–8) shows some examples of the...September 28, 2010 from http://www.globalsecurity.org/military/systems/ground/mrap-var.htm Heizer, J., & Render, B. (2008). Operations Management
1993-09-01
goal (Heizer, Render, and Stair, 1993:94). Integer Programming. Integer programming is a general purpose approach used to optimally solve job shop...Scheduling," Operations Research Journal. 29, No 4: 646-667 (July-August 1981). Heizer, Jay, Barry Render and Ralph M. Stair, Jr. Production and Operations
USDA-ARS?s Scientific Manuscript database
Rendered animal proteins are well suited for animal nutrition applications, but the market is maturing, and there is a need to develop new uses for these products. The objective of this study is to explore the possibility of using animal proteins as a nutrient source for industrial microorganism fe...
ERIC Educational Resources Information Center
Tidwell, Owen Alan
2011-01-01
Given the nature of the valuation task environment appraisers are often made aware of previous value opinions rendered by appraisers, commonly in the form of an historic appraisal. And, because an appraisal task involves the rendering of market value, a hypothetical, unobservable construct based on probabilities, direct feedback against this…
METRO-APEX Volume 15.1: Industrialist's Manual No. 5, Caesar's Rendering Plant. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 5 (Caesar's Rendering Plant) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…
Apparatus for rendering at least a portion of a device inoperable and related methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniels, Michael A.; Steffler, Eric D.; Hartenstein, Steven D.
2016-11-08
Apparatus for rendering at least a portion of a device inoperable may include a containment structure having a first compartment that is configured to receive a device therein and a movable member configured to receive a cartridge having reactant material therein. The movable member is configured to be inserted into the first compartment of the containment structure and to ignite the reactant material within the cartridge. Methods of rendering at least a portion of a device inoperable may include disposing the device into the first compartment of the containment structure, inserting the movable member into the first compartment of the containment structure, igniting the reactant material in the cartridge, and expelling molten metal onto the device.
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual-slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3 degree increments. The transformed set of optical sections was first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living human lens, the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartz, W., E-mail: wojciech.bartz@ing.uni.wroc.pl; Filar, T.
Optical microscopic observations, scanning electron microscopy and microprobe with energy dispersive X-ray analysis, X-ray diffraction and differential thermal/thermogravimetric analysis allowed detailed characterization of rendering mortars from decorative details (figures of Saints) of a baroque building in Kozuchow (Lubuskie Voivodship, Western Poland). Two separate coats of rendering mortars have been distinguished, differing in composition of their filler. The under coat mortar has filler composed of coarse-grained siliceous sand, whereas the finishing one has much finer grained filler, dominated by a mixture of charcoal and Fe-smelting slag, with minor amounts of quartz grains. Both mortars have air-hardening binder composed of gypsum and micritic calcite, exhibiting microcrystalline structure.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, as well as other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
Sketchy Rendering for Information Visualization.
Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A
2012-12-01
We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
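To illustrate the kind of primitive redefinition described above, here is a minimal, hypothetical Python sketch (NumPy and Matplotlib) of a "sketchy" line whose jitter grows with a sketchiness parameter; the actual framework re-implements its primitives inside the Processing environment, so this is only an analogy, not its renderer.

```python
# Minimal sketch of a "sketchy" line primitive: a segment re-drawn as jittered strokes.
import numpy as np
import matplotlib.pyplot as plt

def sketchy_line(ax, p0, p1, sketchiness=2.0, strokes=3, points=20, seed=None):
    rng = np.random.default_rng(seed)
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0.0, 1.0, points)
    base = p0[None, :] * (1 - t)[:, None] + p1[None, :] * t[:, None]   # straight baseline
    for _ in range(strokes):
        jitter = rng.normal(scale=sketchiness, size=base.shape)
        jitter[0] = jitter[-1] = 0.0                                    # keep the endpoints anchored
        wobbly = base + jitter
        ax.plot(wobbly[:, 0], wobbly[:, 1], color="k", alpha=0.5, linewidth=1)

fig, ax = plt.subplots()
sketchy_line(ax, (0, 0), (100, 40), sketchiness=3.0, seed=1)
plt.show()
```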
Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles
2017-05-26
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
Using FastX on the Peregrine System | High-Performance Computing | NREL
with full 3D hardware acceleration. The traditional method of displaying graphics applications to a remote X server (indirect rendering) supports 3D hardware acceleration, but this approach causes all of the OpenGL commands and 3D data to be sent over the network to be rendered on the client machine. With
Code of Federal Regulations, 2014 CFR
2014-01-01
... (LE), Color Rendering Index (CRI), and Correlated Color Temperature (CCT) of Electric Lamps R Appendix R to Subpart B of Part 430 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS Test Procedures Pt. 430, Subpt. B, App. R Appendix R to Subpart B of Part...
Code of Federal Regulations, 2013 CFR
2013-01-01
... (LE), Color Rendering Index (CRI), and Correlated Color Temperature (CCT) of Electric Lamps R Appendix R to Subpart B of Part 430 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS Test Procedures Pt. 430, Subpt. B, App. R Appendix R to Subpart B of Part...
Code of Federal Regulations, 2011 CFR
2011-01-01
... (LE), Color Rendering Index (CRI), and Correlated Color Temperature (CCT) of Electric Lamps R Appendix R to Subpart B of Part 430 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS Test Procedures Pt. 430, Subpt. B, App. R Appendix R to Subpart B of Part...
Code of Federal Regulations, 2012 CFR
2012-01-01
... (LE), Color Rendering Index (CRI), and Correlated Color Temperature (CCT) of Electric Lamps R Appendix R to Subpart B of Part 430 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS Test Procedures Pt. 430, Subpt. B, App. R Appendix R to Subpart B of Part...
Proposal of a Framework for Internet Based Licensing of Learning Objects
ERIC Educational Resources Information Center
Santos, Osvaldo A.; Ramos, Fernando M. S.
2004-01-01
This paper presents a proposal of a framework whose main objective is to manage the delivery and rendering of learning objects in a digital rights controlled environment. The framework is based on a digital licensing scheme that requires each learning object to have the proper license in order to be rendered by a trusted player. A conceptual model…
Establishing the 3-D finite element solid model of femurs in partial by volume rendering.
Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin
2013-01-01
Although several methods of femoral 3-D finite element modeling are available, reports of three-dimensional (3-D) finite element solid models of partial femurs built by volume rendering remain rare. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of partial femurs by volume rendering. A 3-D finite element model of normal human femurs, made up of three anatomic structures (cortical bone, cancellous bone and pulp cavity), was constructed after preprocessing of the original CT images. Finite element analysis was then carried out with different material properties: three types of materials assigned to cortical bone, six to cancellous bone, and a single material to the pulp cavity. The established 3-D finite element model of the femur contains the three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was concentrated primarily on the medial surfaces of the femur, especially in the calcar femorale. Compared with modeling the whole femur by volume rendering, the 3-D finite element solid model created for the partial femur is more realistic and better suited to finite element analysis.
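The assignment of distinct material properties to the three tissue classes can be sketched as a simple intensity-based segmentation step. In the Python sketch below (NumPy), the Hounsfield thresholds and elastic moduli are illustrative placeholder values, not those used in the study, and only one material per class is assigned rather than the multiple grades described above.

```python
# Minimal sketch: segment a femoral CT volume into tissue classes and assign Young's moduli.
import numpy as np

THRESHOLDS_HU = {"cancellous": 200.0, "cortical": 700.0}          # placeholder thresholds
YOUNG_MODULUS_MPA = {"pulp": 1.0, "cancellous": 1500.0, "cortical": 17000.0}  # placeholder moduli

def assign_materials(ct_hu):
    labels = np.full(ct_hu.shape, "pulp", dtype=object)
    labels[ct_hu >= THRESHOLDS_HU["cancellous"]] = "cancellous"
    labels[ct_hu >= THRESHOLDS_HU["cortical"]] = "cortical"
    moduli = np.vectorize(YOUNG_MODULUS_MPA.get)(labels).astype(float)
    return labels, moduli

ct = np.random.randint(-200, 1800, size=(64, 64, 64)).astype(float)  # synthetic CT stand-in
labels, young_modulus = assign_materials(ct)
```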
A Single Swede Midge (Diptera: Cecidomyiidae) Larva Can Render Cauliflower Unmarketable.
Stratton, Chase A; Hodgdon, Elisabeth A; Zuckerman, Samuel G; Shelton, Anthony M; Chen, Yolanda H
2018-05-01
Swede midge, Contarinia nasturtii Kieffer (Diptera: Cecidomyiidae), is an invasive pest causing significant damage on Brassica crops in the Northeastern United States and Eastern Canada. Heading brassicas, like cauliflower, appear to be particularly susceptible. Swede midge is difficult to control because larvae feed concealed inside meristematic tissues of the plant. In order to develop damage and marketability thresholds necessary for integrated pest management, it is important to determine how many larvae render plants unmarketable and whether the timing of infestation affects the severity of damage. We manipulated larval density (0, 1, 3, 5, 10, or 20) per plant and the timing of infestation (30, 55, and 80 d after seeding) on cauliflower in the lab and field to answer the following questions: 1) What is the swede midge damage threshold? 2) How many swede midge larvae can render cauliflower crowns unmarketable? and 3) Does the age of cauliflower at infestation influence the severity of damage and marketability? We found that even a single larva can cause mild twisting and scarring in the crown rendering cauliflower unmarketable 52% of the time, with more larvae causing more severe damage and additional losses, regardless of cauliflower age at infestation.
Color analysis and image rendering of woodblock prints with oil-based ink
NASA Astrophysics Data System (ADS)
Horiuchi, Takahiko; Tanimoto, Tetsushi; Tominaga, Shoji
2012-01-01
This paper proposes a method for analyzing the color characteristics of woodblock prints having oil-based ink and rendering realistic images based on camera data. The analysis results of woodblock prints show some characteristic features in comparison with oil paintings: 1) A woodblock print can be divided into several cluster areas, each with similar surface spectral reflectance; and 2) strong specular reflection, caused by overlapping layers of paint, arises only in specific cluster areas. By considering these properties, we develop an effective rendering algorithm by modifying our previous algorithm for oil paintings. A set of surface spectral reflectances of a woodblock print is represented by using only a small number of average surface spectral reflectances and the registered scaling coefficients, whereas the previous algorithm for oil paintings required surface spectral reflectances of high dimension at all pixels. In the rendering process, in order to reproduce the strong specular reflection in specific cluster areas, we use two sets of parameters in the Torrance-Sparrow model for cluster areas with or without strong specular reflection. An experiment on a woodblock print with oil-based ink was performed to demonstrate the feasibility of the proposed method.
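For readers unfamiliar with the reflection model named above, the following is a minimal sketch of the simplified Torrance-Sparrow form commonly used in reflectance analysis (a Lambertian body term plus a Gaussian-facet specular lobe), with two illustrative parameter sets standing in for cluster areas with and without strong specular reflection. The function name, geometry, and values are hypothetical, not the authors' calibrated parameters.

```python
import numpy as np

def torrance_sparrow_radiance(n, l, v, diffuse_reflectance, k_s, sigma):
    """Simplified Torrance-Sparrow model: Lambertian body reflection plus a
    Gaussian-facet specular lobe (illustrative sketch, not the paper's code)."""
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)                   # halfway vector
    cos_i = max(np.dot(n, l), 0.0)                        # incidence angle term
    cos_r = max(np.dot(n, v), 1e-6)                       # viewing angle term
    alpha = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))   # facet angle
    body = diffuse_reflectance * cos_i                    # per-band diffuse term
    spec = k_s * np.exp(-alpha**2 / (2.0 * sigma**2)) / cos_r
    return body + spec

# Two hypothetical parameter sets: cluster areas with / without strong specular reflection
n, l, v = np.array([0, 0, 1.0]), np.array([0.3, 0, 1.0]), np.array([-0.3, 0, 1.0])
refl = np.array([0.4, 0.3, 0.2])   # mean spectral reflectance of a cluster (3 bands here)
print(torrance_sparrow_radiance(n, l, v, refl, k_s=0.8, sigma=0.10))   # glossy cluster
print(torrance_sparrow_radiance(n, l, v, refl, k_s=0.05, sigma=0.35))  # matte cluster
```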
Enhanced backgrounds in scene rendering with GTSIMS
NASA Astrophysics Data System (ADS)
Prussing, Keith F.; Pierson, Oliver; Cordell, Chris; Stewart, John; Nielson, Kevin
2018-05-01
A core component to modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal to noise or clutter to noise ratios to determine if a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar
2005-05-01
3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at framerates of approximately 10 Hz when rendering volume images with a size of 30 MB.
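As a rough illustration of footprint-free splat rendering with stochastic anti-aliasing, the sketch below projects each (jittered) voxel through a point source onto a flat detector and accumulates its value. The geometry, jitter magnitude, and toy volume are assumptions chosen for the example, not the authors' implementation.

```python
import numpy as np

def splat_drr(volume, spacing, src_dist, det_dist, det_shape, pix_size, jitter_sd, rng):
    """Footprint-free splat DRR: project each (jittered) voxel through a point
    source onto a flat detector and accumulate its value (illustrative sketch)."""
    nz, ny, nx = volume.shape
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3).astype(float) * spacing
    pts -= pts.mean(axis=0)                        # center the volume on the beam axis
    pts += rng.normal(0.0, jitter_sd, pts.shape)   # stochastic anti-aliasing jitter
    vals = volume.reshape(-1)

    # Perspective projection: source at (0, 0, -src_dist), detector plane at z = det_dist
    mag = (src_dist + det_dist) / (src_dist + pts[:, 2])
    u = pts[:, 0] * mag / pix_size + det_shape[1] / 2.0
    v = pts[:, 1] * mag / pix_size + det_shape[0] / 2.0

    drr = np.zeros(det_shape)
    iu, iv = np.round(u).astype(int), np.round(v).astype(int)
    ok = (iu >= 0) & (iu < det_shape[1]) & (iv >= 0) & (iv < det_shape[0])
    np.add.at(drr, (iv[ok], iu[ok]), vals[ok])     # summed-voxel accumulation
    return drr

rng = np.random.default_rng(0)
vol = np.zeros((32, 32, 32)); vol[8:24, 8:24, 8:24] = 1.0   # toy CT cube
image = splat_drr(vol, spacing=1.0, src_dist=500.0, det_dist=500.0,
                  det_shape=(128, 128), pix_size=0.5, jitter_sd=0.5, rng=rng)
```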
NASA Astrophysics Data System (ADS)
Bi, Ke; Wang, Dan; Wang, Peng; Duan, Bin; Zhang, Tieqiang; Wang, Yinghui; Zhang, Hanzhuang; Zhang, Yu
2017-05-01
White light-emitting diodes (WLEDs) were fabricated by employing a combination of a commercial yellow emission Ce3+-doped Y3Al5O12 (YAG:Ce)-based phosphor and all-inorganic perovskite quantum dots pumped by a blue LED chip. Perovskite quantum dot solution was used as the color conversion layer in a liquid-type structure. Red-emitting materials based on cesium lead halide (CsPb(X)3) perovskite quantum dots were introduced to generate WLEDs with high efficacy and high color rendering index by compensating for the deficient red emission of the YAG:Ce phosphor-based commercialized WLEDs. The experimental results suggested that the luminous efficiency and color rendering index of the as-prepared WLED device could reach up to 84.7 lm/W and 89, respectively. The characteristics of those devices, including correlated color temperature (CCT), color rendering index (CRI), and color coordinates, were observed under different forward currents. The as-fabricated warm WLEDs showed excellent color stability against the increasing current: the color coordinates shifted only slightly, from (0.3837, 0.3635) at 20 mA to (0.3772, 0.3592) at 120 mA, and the color temperature shifted from 3803 to 3953 K.
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
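A minimal sketch of the underlying query, continuous collision detection between a point moving along a segment and a level set of a signed distance field, is shown below using conservative sampling plus bisection on a closed-form sphere SDF. The authors' octree traversal, tree-organized point clouds, and haptic force computation are not reproduced here.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    return np.linalg.norm(p - center) - radius     # signed distance to a sphere

def first_crossing(sdf, p0, p1, iso=0.0, steps=64, bisections=30):
    """Continuous collision query for a point moving from p0 to p1:
    sample phi(t) = sdf(lerp(p0, p1, t)) - iso, then refine the first sign
    change by bisection. Returns the crossing time in [0, 1], or None."""
    ts = np.linspace(0.0, 1.0, steps + 1)
    phi = np.array([sdf(p0 + t * (p1 - p0)) - iso for t in ts])
    sign_change = np.where(np.sign(phi[:-1]) != np.sign(phi[1:]))[0]
    if sign_change.size == 0:
        return None
    lo, hi = ts[sign_change[0]], ts[sign_change[0] + 1]
    for _ in range(bisections):                    # refine the earliest crossing
        mid = 0.5 * (lo + hi)
        same_side = (np.sign(sdf(p0 + mid * (p1 - p0)) - iso)
                     == np.sign(sdf(p0 + lo * (p1 - p0)) - iso))
        lo, hi = (mid, hi) if same_side else (lo, mid)
    return 0.5 * (lo + hi)

t_hit = first_crossing(sphere_sdf, np.array([-3.0, 0.1, 0.0]), np.array([3.0, 0.1, 0.0]))
print(t_hit)   # ~0.334: the point enters the unit sphere about a third of the way along
```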
Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali
2004-06-01
A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid the well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method addresses the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
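The paper's operator is not reproduced here; as a point of reference, the sketch below shows a minimal global tone-mapping operator of the Reinhard type (log-average luminance normalization followed by L/(1+L) compression with chromaticity preserved), which illustrates the class of algorithm being discussed.

```python
import numpy as np

def tone_map_global(hdr_rgb, key=0.18, eps=1e-6):
    """Minimal global tone-mapping sketch (Reinhard-style), shown only to
    illustrate the class of operator discussed; it is not the paper's model."""
    lum = (0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1]
           + 0.0722 * hdr_rgb[..., 2])                        # relative luminance
    log_avg = np.exp(np.mean(np.log(lum + eps)))              # scene "key" (log-average)
    scaled = key * lum / log_avg                              # map the key to mid-grey
    ldr_lum = scaled / (1.0 + scaled)                         # compress highlights
    ratio = ldr_lum / (lum + eps)
    return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)      # preserve chromaticity

hdr = np.random.default_rng(1).uniform(0.0, 50.0, size=(4, 4, 3))   # toy HDR image
print(tone_map_global(hdr).max())
```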
High Performance GPU-Based Fourier Volume Rendering.
Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr
2015-01-01
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms that are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar-basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly-parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
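The core of FVR is the projection-slice theorem; the numpy sketch below demonstrates it for the axis-aligned case (the 2D inverse FFT of the k_z = 0 plane of the volume's 3D FFT equals the summed-voxel projection along z). A real FVR renderer, including the GPU pipeline described above, resamples an oblique spectral slice for arbitrary view directions.

```python
import numpy as np

# Fourier projection-slice theorem, axis-aligned case: the 2D inverse FFT of the
# k_z = 0 plane of the volume's 3D FFT equals the spatial-domain projection
# (summation) of the volume along z. Practical FVR resamples an oblique slice instead.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))                     # toy volume, axes (x, y, z)

spectrum = np.fft.fftn(vol)
central_slice = spectrum[:, :, 0]                  # k_z = 0 plane of the spectrum
fvr_projection = np.fft.ifft2(central_slice).real  # X-ray-like attenuation projection

spatial_projection = vol.sum(axis=2)               # brute-force reference
print(np.allclose(fvr_projection, spatial_projection))   # True
```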
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique in the temporal domain. Using the proposed data structure, our algorithm can meet the following goals. First, both spatial and temporal coherence are identified and exploited for accelerating the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that are required to be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
Mekonnen, Tizazu; Mussone, Paolo; Bressler, David
2016-01-01
Over the past decades, strong global demand for industrial chemicals, raw materials and energy has been driven by rapid industrialization and population growth across the world. In this context, long-term environmental sustainability demands the development of sustainable strategies of resource utilization. The agricultural sector is a major source of underutilized or low-value streams that accompany the production of food and other biomass commodities. Animal agriculture in particular constitutes a substantial portion of the overall agricultural sector, with wastes being generated along the supply chain of slaughtering, handling, catering and rendering. The recent emergence of bovine spongiform encephalopathy (BSE) resulted in the elimination of most of the traditional uses of rendered animal meals such as blood meal, meat and bone meal (MBM) as animal feed with significant economic losses for the entire sector. The focus of this review is on the valorization progress achieved on converting protein feedstock into bio-based plastics, flocculants, surfactants and adhesives. The utilization of other rendering streams such as fat and ash rich biomass for the production of renewable fuels, solvents, drop-in chemicals, minerals and fertilizers is also critically reviewed.
NASA Astrophysics Data System (ADS)
Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin
2000-05-01
The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data amount of the concentric mosaics, a compression scheme based on 3D wavelet transform has been proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide the random data access and to reduce the calculation for the data access requests to a minimum. A mixed cache is used in PIWS, where the entropy decoded wavelet coefficient, intermediate result of lifting and fully synthesized pixel are all stored at the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is attached with a state to indicate what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experiment results.
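The in-place property referred to above is easiest to see in one dimension; the sketch below shows in-place Haar lifting analysis and synthesis, where a single buffer holds the coefficients, the intermediate lifting results, and the synthesized samples. This is only the generic lifting idea, not the PIWS cache or the 3D wavelet used for concentric mosaics.

```python
import numpy as np

def haar_lift_forward(x):
    """In-place 1D Haar lifting: even slots end up holding the low-pass (s)
    coefficients, odd slots the high-pass (d) coefficients."""
    x[1::2] -= x[0::2]            # predict: d = odd - even
    x[0::2] += x[1::2] / 2.0      # update:  s = even + d/2
    return x

def haar_lift_inverse(x):
    """Inverse lifting, run in place in the reverse order: the same buffer that
    held coefficients ends up holding the synthesized samples (the property a
    PIWS-style renderer exploits to mix decoded and synthesized data in one cache)."""
    x[0::2] -= x[1::2] / 2.0      # undo update
    x[1::2] += x[0::2]            # undo predict
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
coeffs = haar_lift_forward(sig.copy())
print(haar_lift_inverse(coeffs))   # recovers the original signal
```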
Design considerations for parallel graphics libraries
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1994-01-01
Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real time, PC based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been carried out with special reference to user friendly features for foreseeable applications and reconstruction speed.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
Generalized pipeline for preview and rendering of synthetic holograms
NASA Astrophysics Data System (ADS)
Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.
1997-04-01
We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.
Rendering LGBTQ+ Visible in Nursing: Embodying the Philosophy of Caring Science.
Goldberg, Lisa; Rosenburg, Neal; Watson, Jean
2017-06-01
Although health care institutions continue to address the importance of diversity initiatives, the standard(s) for treatment remain historically and institutionally grounded in a sociocultural privileging of heterosexuality. As a result, lesbian, gay, bisexual, transgender, and queer (LGBTQ+) communities in health care remain largely invisible. This marked invisibility serves as a call to action, a renaissance of thinking within redefined boundaries and limitations. We must therefore refocus our habits of attention on the wholeness of persons and the diversity of their storied experiences as embodied through contemporary society. By rethinking current understandings of LGBTQ+ identities through innovative representation(s) of the media, music industry, and pop culture within a caring science philosophy, nurses have a transformative opportunity to render LGBTQ+ visible and in turn render a transformative opportunity for themselves.
Modelling Extortion Racket Systems: Preliminary Results
NASA Astrophysics Data System (ADS)
Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria
Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which renders understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.
Automatic Perceptual Color Map Generation for Realistic Volume Visualization
Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor
2008-01-01
Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown, which have been created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.
ERIC Educational Resources Information Center
Sauer, Jeff; Murphy, Sam
1997-01-01
NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes, instead of sharing space and workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…
ERIC Educational Resources Information Center
Cook-Sather, Alison; Abbot, Sophia
2016-01-01
Linguistic, literary, and feminist studies define translation as a process of rendering a new version of an original with attention to context, power, and purpose. Processes of translation in the context of student-faculty co-inquiry in the Scholarship of Teaching and Learning offer examples of how this re-rendering can play out in the realm of…
ERIC Educational Resources Information Center
Meihua, Song
2014-01-01
How to render culture-bound elements into a foreign language remains one of the most challenging tasks for all translators, especially, when the source text is a literary one. To retain the aesthetic effects and other stylistic features of importance, some argue that choice can be made from either domestication or foreignization with…
Signature modelling and radiometric rendering equations in infrared scene simulation systems
NASA Astrophysics Data System (ADS)
Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian
2011-11-01
The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
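As an illustration of the kind of per-pixel radiometric rendering equation such simulators evaluate, the sketch below combines an emitted Planck term and a reflected environment term, attenuated by path transmittance, with additive path radiance. Band integration, sun glint, BRDFs, and the specific formulations used by OSMOSIS, DIRSIG, and OSSIM are omitted, and the numeric inputs are hypothetical.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance of a blackbody [W / (m^2 sr m)] from Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
    return a / b

def apparent_radiance(wl, t_obj, emissivity, l_env, tau_path, l_path):
    """Illustrative per-pixel rendering equation for an opaque surface:
    L = tau * (eps * L_bb(T) + (1 - eps) * L_env) + L_path.
    Real simulators add sun glint, sky/ground reflection geometry and band integration."""
    return tau_path * (emissivity * planck_radiance(wl, t_obj)
                       + (1.0 - emissivity) * l_env) + l_path

# Example: a 4 um mid-wave sample, 320 K target, hypothetical atmosphere values
print(apparent_radiance(4e-6, 320.0, emissivity=0.85,
                        l_env=planck_radiance(4e-6, 280.0),
                        tau_path=0.7, l_path=2e5))   # l_path is a made-up path radiance
```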
Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study.
Dunn, B E; Almagro, U A; Choi, H; Sheth, N K; Arnold, J S; Recla, D L; Krupinski, E A; Graham, A R; Weinstein, R S
1997-01-01
In this retrospective study, we assess the accuracy, confidence levels, and viewing times of two generalist pathologists using both dynamic-robotic telepathology and conventional light microscopy (LM) to render diagnoses on a test set of 100 consecutive routine surgical pathology cases. The objective is to determine whether telepathology will allow a pathology group practice at a diagnostic hub to provide routine diagnostic services to a remote hospital without an on-site pathologist. For TP, glass slides were placed on the motorized stage of the robotic microscope of a telepathology system by a senior laboratory technologist in Iron Mountain, MI. Real-time control of the motorized microscope was then transferred to a pathologist in Milwaukee, WI, who viewed images of the glass slides on a video monitor. The telepathologists deferred rendering a diagnosis in 1.5% of cases. Clinically important concordance between the individual diagnoses rendered by telepathology and the "truth" diagnoses established by rereview of glass slides was 98.5%. In the telepathology mode, there were five incorrect diagnoses out of a total of 197 diagnoses. In four cases in which the telepathology diagnosis was incorrect, the pathologist's diagnosis by LM was identical to that rendered by telepathology. These represent errors of interpretation and cannot be ascribed to telepathology. The certainty of the pathologists with respect to their diagnoses was evaluated over time. Results for the first 50 cases served as baseline data. For the second 50 cases, confidence in rendering a diagnosis in the telepathology mode was essentially identical to that of making a diagnosis in the LM viewing mode. Viewing times in the telepathology mode also improved with more experience using the telepathology system. These results support the concept that an off-site pathologist using dynamic-robotic telepathology can substitute for an on-site pathologist as a service provider.
Gong, Chao; Jiang, Xiuping
2015-08-01
Hydrogen sulfide producing bacteria (SPB) in raw animal by-products are likely to grow and form biofilms in the rendering processing environments, resulting in the release of harmful hydrogen sulfide (H₂S) gas. The objective of this study was to reduce SPB biofilms formed on different surfaces typically found in rendering plants by applying a bacteriophage cocktail. Using a 96-well microplate method, we determined that 3 SPB strains of Citrobacter freundii and Hafnia alvei are strong biofilm formers. Application of 9 bacteriophages (10⁷ PFU/mL) from families of Siphoviridae and Myoviridae resulted in a 33%-70% reduction of biofilm formation by each SPB strain. On stainless steel and plastic templates, phage treatment (10⁸ PFU/mL) reduced the attached cells of a mixed SPB culture (no biofilm) by 2.3 and 2.7 log CFU/cm² within 6 h at 30 °C, respectively, as compared with 2 and 1.5 log CFU/cm² reductions of SPB biofilms within 6 h at 30 °C. Phage treatment was also applied to indigenous SPB biofilms formed on the environmental surface, stainless steel, high-density polyethylene plastic, and rubber templates in a rendering plant. With phage treatment (10⁹ PFU/mL), SPB biofilms were reduced by 0.7-1.4, 0.3-0.6, and 0.2-0.6 log CFU/cm² in spring, summer, and fall trials, respectively. Our study demonstrated that bacteriophages could effectively reduce the selected SPB strains either attached to or in formed biofilms on various surfaces and could to some extent reduce the indigenous SPB biofilms on the surfaces in the rendering environment.
Ozone inactivation of infectious prions in rendering plant and municipal wastewaters.
Ding, Ning; Neumann, Norman F; Price, Luke M; Braithwaite, Shannon L; Balachandran, Aru; Belosevic, Miodrag; Gamal El-Din, Mohamed
2014-02-01
Disposal of tissues and organs associated with prion accumulation and infectivity in infected animals (designated as Specified Risk Materials [SRM]) is strictly regulated by the Canadian Food Inspection Agency (CFIA); however, the contamination of wastewater from slaughterhouses that handle SRM still poses public concern. In this study, we examined for the first time the partitioning of infectious prions in rendering plant wastewater and found that a large proportion of infectious prions were partitioned into the scum layer formed at the top after gravity separation, while quite a few infectious prions still remained in the wastewater. Subsequently, we assessed the ozone inactivation of infectious prions in the raw, natural gravity-separated and dissolved air flotation (DAF)-treated (i.e., primary-treated) rendering plant wastewater, and in a municipal final effluent (i.e., secondary-treated municipal wastewater). At applied ozone doses of 43.4-44.6 mg/L, ozone was instantaneously depleted in the raw rendering plant wastewater, while a greater than 4-log10 inactivation was achieved at a 5 min exposure in the DAF-treated rendering plant wastewater. Prion inactivation in the municipal final effluent was conducted with two levels of applied ozone doses of 13.4 and 22.5mg/L, and a greater than 4-log10 inactivation was achieved at a 5 min exposure with the higher ozone dose. Efficiency factor Hom (EFH) models were used to model (i.e., fit) the experimental data. The CT (disinfectant concentration multiplied by contact time) values were determined for 2- and 3-log10 inactivation in the municipal final effluent treated with an ozone dose of 13.4 mg/L. Our results indicate that ozone could serve as a final barrier for prion inactivation in primary- and/or secondary-treated wastewaters. © 2013.
Thermographic inspection of external thermal insulation systems with mechanical fixing
NASA Astrophysics Data System (ADS)
Simões, Nuno; Simões, Inês; Serra, Catarina; Tadeu, António
2015-05-01
An External Thermal Insulation Composite System (ETICS) kit may include anchors to mechanically fix the insulation product onto the wall. Using this option increases safety when compared to a simple bonded solution; however, it is more expensive and requires more labor. The insulation product is then coated with rendering, which is applied to the insulation material without any air gap. The rendering comprises one or more layers of coats with an embedded reinforcement. The most common multi-coat rendering system consists of a base coat applied directly to the insulation product with a glass fiber mesh as reinforcement, followed by a second base coat, before a very thin coat (key coat) that prepares the surface to receive the finishing and decorative coat. The thickness of the rendering system may vary from around 5 to 10 mm. The higher thicknesses may be associated with a reinforcement composed of two layers of glass fiber mesh. The main purpose of this work is to apply infrared thermography (IRT) techniques to two ETICS solutions (single or double layer of glass fiber mesh) and evaluate its capability in the detection of anchors. The reliability of IRT was tested using an ETICS configuration of expanded cork boards and a rendering system with one or two layers of glass fiber mesh. An active thermography approach was performed in laboratory conditions, in transmission and reflection mode. In the reflection mode, halogen lamps and an air heater were employed as the thermal stimulus. The air heater was also the source used in the transmission mode tests. The resulting data were processed in both the time and frequency domains. In the latter approach, phase contrast images were generated and studied.
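The frequency-domain processing mentioned at the end is typically a pulse-phase style analysis; a minimal sketch is shown below, computing a phase-contrast image by taking the FFT of each pixel's temperature history and keeping the phase of one harmonic. The synthetic sequence and the phase-lagging patch standing in for an anchor are assumptions made for illustration, not the authors' protocol.

```python
import numpy as np

def phase_image(thermogram_stack, harmonic=1):
    """Phase image from an active-thermography sequence: FFT each pixel's
    temperature history along time and keep the phase of one harmonic
    (a standard pulse-phase / lock-in processing step; illustrative only)."""
    spectrum = np.fft.fft(thermogram_stack, axis=0)   # stack shape: (frames, rows, cols)
    return np.angle(spectrum[harmonic])

# Toy sequence: a modulated surface with a phase-lagging patch standing in for an anchor
t = np.arange(128).reshape(-1, 1, 1)
seq = np.cos(2 * np.pi * t / 128.0) * np.ones((1, 64, 64))
seq[:, 28:36, 28:36] = np.cos(2 * np.pi * t / 128.0 - 0.5)   # lagging region
phi = phase_image(seq, harmonic=1)
print(phi[32, 32] - phi[5, 5])   # phase contrast of the patch vs. background (about -0.5 rad)
```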
NASA Astrophysics Data System (ADS)
Wan, Weibing; Shi, Pengfei; Li, Shuguang
2009-10-01
Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We see if our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.
DspaceOgreTerrain 3D Terrain Visualization Tool
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.
2012-01-01
DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) parameters of the camera are known … can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the …
Thin boron phosphide coating as a corrosion-resistant layer
Not Available
1982-08-25
A surface prone to corrosion in corrosive environments is rendered anticorrosive by CVD growing a thin continuous film, e.g., having no detectable pinholes, thereon, of boron phosphide. In one embodiment, the film is semiconductive. In another aspect, the invention is an improved photoanode, and/or photoelectrochemical cell with a photoanode having a thin film of boron phosphide thereon rendering it anticorrosive, and providing it with unexpectedly improved photoresponsive properties.
Popova, I I; Orlov, O I; Matsnev, E I; Revyakin, Yu G
2016-01-01
The paper reports the results of testing several diagnostic video systems enabling digital rendering of the ENT organs, teeth, and jaws. The authors substantiate the criteria for choosing and integrating imaging systems into the future LOR kit on the Russian segment of the International Space Station, developed for examination and download of high-quality images of cosmonauts' ENT organs, periodontium, and teeth.
Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering
NASA Astrophysics Data System (ADS)
Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi
2015-04-01
This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
Lin, Yanping; Chen, Huajiang; Yu, Dedong; Zhang, Ying; Yuan, Wen
2017-01-01
Bone drilling simulators with virtual and haptic feedback provide a safe, cost-effective and repeatable alternative to traditional surgical training methods. To develop such a simulator, accurate haptic rendering based on a force model is required to feed back bone drilling forces based on user input. Current predictive bone drilling force models based on bovine bones with various drilling conditions and parameters are not representative of the bone drilling process in bone surgery. The objective of this study was to provide a bone drilling force model for haptic rendering based on calibration and validation experiments in fresh cadaveric bones with different bone densities. Using a commonly used drill bit geometry (2 mm diameter), feed rates (20-60 mm/min) and spindle speeds (4000-6000 rpm) in orthognathic surgeries, the bone drilling forces of specimens from two groups were measured and the calibration coefficients of the specific normal and frictional pressures were determined. The comparison of the predicted forces and the measured forces from validation experiments with a large range of feed rates and spindle speeds demonstrates that the proposed bone drilling force model can predict the trends and average forces well. The presented bone drilling force model can be used for haptic rendering in surgical simulators.
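The calibrated model itself is not reproduced here; the sketch below only shows the general shape of a specific-pressure drilling force model (thrust proportional to specific normal pressure times uncut chip area per flute, plus a frictional fraction), with placeholder coefficients, to illustrate the calibration-then-prediction workflow implied above.

```python
def drilling_thrust_force(feed_mm_min, speed_rpm, diameter_mm,
                          k_normal, k_friction, n_flutes=2):
    """Illustrative specific-pressure drilling force model (not the calibrated
    model from the study): thrust = specific normal pressure * uncut chip area
    per flute * number of flutes, plus a friction-proportional term."""
    feed_per_rev = feed_mm_min / speed_rpm            # mm advanced per revolution
    chip_thickness = feed_per_rev / n_flutes          # uncut chip thickness per flute [mm]
    chip_area = chip_thickness * (diameter_mm / 2.0)  # chip cross-section per flute [mm^2]
    cutting = k_normal * chip_area * n_flutes         # [N], with k_normal in N/mm^2
    friction = k_friction * cutting                   # hypothetical frictional fraction
    return cutting + friction

# Hypothetical coefficients, chosen only to show the prediction call
print(drilling_thrust_force(feed_mm_min=40.0, speed_rpm=5000.0,
                            diameter_mm=2.0, k_normal=600.0, k_friction=0.3))
```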
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insight into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
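A much-simplified sketch of the two encoding ideas, fusing homogeneous blocks into macro voxels with an octree and sharing subtrees that are identical at consecutive time steps, is given below; the threshold, tree layout, and toy data are assumptions, not the paper's encoder.

```python
import numpy as np

def build_octree(block, tol):
    """Fuse a cubic block into one 'macro voxel' if its quantized values are
    within tol of each other; otherwise recurse into 8 octants."""
    lo, hi = int(block.min()), int(block.max())
    if hi - lo <= tol or block.shape[0] == 1:
        return ("leaf", (lo + hi) // 2)
    h = block.shape[0] // 2
    kids = tuple(build_octree(block[x:x+h, y:y+h, z:z+h], tol)
                 for x in (0, h) for y in (0, h) for z in (0, h))
    return ("node", kids)

def diff_encode(tree, prev_tree):
    """Temporal difference encoding: share the previous time step's subtree when
    the new one is identical, so only changed regions need new storage."""
    if prev_tree is not None and tree == prev_tree:
        return prev_tree                      # reuse the existing subtree object
    if tree[0] == "node" and prev_tree is not None and prev_tree[0] == "node":
        return ("node", tuple(diff_encode(c, p) for c, p in zip(tree[1], prev_tree[1])))
    return tree

rng = np.random.default_rng(0)
vol_t0 = (rng.random((16, 16, 16)) * 255).astype(np.uint8)     # quantized to 8 bits
vol_t1 = vol_t0.copy(); vol_t1[:4, :4, :4] = vol_t1[:4, :4, :4] // 2   # small temporal change
t0 = build_octree(vol_t0, tol=8)
t1 = diff_encode(build_octree(vol_t1, tol=8), t0)
print(t1[1][7] is t0[1][7])   # True: the untouched octant is shared with the previous step
```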
A transparently scalable visualization architecture for exploring the universe.
Fu, Chi-Wing; Hanson, Andrew J
2007-01-01
Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
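A rough sketch of the power scaled coordinates idea is a point stored as (x, y, z, s), denoting (x, y, z) scaled by k to the power s and kept normalized so the mantissa stays well conditioned across astronomical scale ranges. The base, normalization rule, and operations below are illustrative assumptions; the paper's exact conventions may differ.

```python
import numpy as np

K = 10.0   # scaling base; a PSC point (x, y, z, s) denotes (x, y, z) * K**s

def psc(x, y, z, s=0.0):
    return np.array([x, y, z, float(s)])

def psc_normalize(p):
    """Rescale so the largest spatial component has magnitude in [1, K):
    keeps mantissas well-conditioned across huge scale ranges."""
    m = np.max(np.abs(p[:3]))
    if m == 0.0:
        return p
    shift = np.floor(np.log(m) / np.log(K))
    return np.array([*(p[:3] / K**shift), p[3] + shift])

def psc_add(a, b):
    """Add two PSC points by expressing both at the larger exponent first."""
    s = max(a[3], b[3])
    xyz = a[:3] * K**(a[3] - s) + b[:3] * K**(b[3] - s)
    return psc_normalize(np.array([*xyz, s]))

earth_sun = psc(1.496, 0, 0, 11)        # ~1.496e11 m
to_proxima = psc(4.0, 0, 0, 16)         # ~4.0e16 m, a hypothetical offset
print(psc_add(earth_sun, to_proxima))   # sum stays well scaled: ~[4.0000..., 0, 0, 16]
```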
Realtime Compositing of Procedural Facade Textures on the Gpu
NASA Astrophysics Data System (ADS)
Krecklau, L.; Kobbelt, L.
2011-09-01
The real time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set can fully reside in the memory of the graphics card. This typically results in a low quality of the virtual city model. Alternatively, a streaming algorithm can load the high quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization scheme that produces many different facades while the grammar description itself remains small. By using a ray casting approach, we are able to trace through transparent windows, revealing procedurally generated rooms, which further contributes to the realism of the rendering. The presented method enables real time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximal intensity projection (MIP) algorithms are used for data visualization. To extract the liver in the series of images accurately and efficiently, we have developed a user-friendly interactive program with a deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures: adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, they are surface-rendered, their MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
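A minimal sketch of the MIP-within-segmentation step is shown below: the maximum intensity projection is computed only inside the extracted liver mask so that extrahepatic structures do not contribute, and the result can then be composited with a surface rendering through a shared color table (only hinted at here). The toy volume, mask, and color handling are illustrative assumptions.

```python
import numpy as np

def masked_mip(volume, mask, axis=2, background=0):
    """Maximum intensity projection computed only inside a segmentation mask
    (e.g. the extracted liver), so vessels outside the organ do not contribute."""
    masked = np.where(mask, volume, background)
    return masked.max(axis=axis)

# Toy CT-like volume: a bright 'vessel' inside a spherical 'liver' mask
rng = np.random.default_rng(0)
vol = rng.normal(60, 10, size=(64, 64, 64))
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
liver = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2
vol[30:34, 10:54, 30:34] = 250.0                    # a contrast-enhanced vessel

mip = masked_mip(vol, liver, axis=2)                # 2D projection of intra-hepatic structures
# A simple composite for display: emphasize the MIP in the red channel (illustrative only)
rgb = np.stack([np.clip(mip / 255.0, 0, 1)] * 3, axis=-1)
rgb[..., 0] = np.clip(mip / 255.0 * 1.2, 0, 1)
```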
High-quality and interactive animations of 3D time-varying vector fields.
Helgeland, Anders; Elboth, Thomas
2006-01-01
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
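The particle tracking step can be sketched as classical RK4 integration of path lines through a time-varying velocity field, as below; the analytic field, seed count, and step size are assumptions for the example, and the even-spacing control and texture-based rendering described in the paper are omitted.

```python
import numpy as np

def velocity(p, t):
    """Analytic stand-in for a sampled unsteady velocity field (rotating flow
    whose rate varies in time); a real renderer would interpolate grid data."""
    omega = 1.0 + 0.5 * np.sin(t)
    return np.stack([-omega * p[:, 1], omega * p[:, 0], 0.2 * np.ones(len(p))], axis=1)

def advect_pathlines(seeds, t0, t1, steps):
    """Integrate path lines x'(t) = v(x, t) with classical RK4; returns particle
    positions per time step for later field-line seeding and rendering."""
    dt = (t1 - t0) / steps
    p, t = seeds.astype(float).copy(), t0
    history = [p.copy()]
    for _ in range(steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        history.append(p.copy())
    return np.array(history)            # shape: (steps + 1, n_particles, 3)

seeds = np.random.default_rng(0).uniform(-1, 1, size=(100, 3))   # evenly scattered seeds
paths = advect_pathlines(seeds, 0.0, 5.0, steps=200)
print(paths.shape)
```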
Virtual Acoustics: Evaluation of Psychoacoustic Parameters
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context that is encountered in day-to-day hearing. Several research efforts have produced results that suggest that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real-time. However, the computational resources for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data that was obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head movement cues; threshold data for early reflections and late reverberation; and consideration of the necessary accuracy for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, a consideration of the analysis-synthesis of the reverberation within "everyday spaces" (offices, conference rooms) will be contrasted to the commonly used paradigm of concert hall spaces.
Whole high-quality light environment for humans and plants
NASA Astrophysics Data System (ADS)
Sharakshane, Anton
2017-11-01
Plants sharing a single light environment on a spaceship with a human being and bearing a decorative function should look as natural and attractive as possible. Consequently, they can be illuminated only with white light with a high color rendering index. Can lighting optimized for the human eye be effective and appropriate for plants? Spectrum-based effects have been compared under artificial lighting of plants by high-pressure sodium lamps and general-purpose white LEDs. It has been shown that, for the surveyed sample, the phytochrome photo-equilibrium does not depend significantly on the parameters of white LED light, while the share of phytoactive blue light grows significantly as the color temperature increases. It has been revealed that yield photon flux is proportional to luminous efficacy and increases as the color temperature decreases and as the general color rendering index Ra and the special color rendering index R14 (green leaf) increase. General-purpose white LED lamps with a color temperature of 2700 K, Ra > 90 and luminous efficacy of 100 lm/W are as efficient as the best high-pressure sodium lamps, and at higher luminous efficacy their yield photon flux per joule is proportionally greater. Here we show that the demand for high color rendering white LED light does not conflict with agro-technical objectives.
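As a numeric illustration of the photon-flux side of this comparison, the sketch below converts a hypothetical warm-white-LED spectral power distribution into photon flux per joule of emitted light using photons(lambda) = P(lambda) * lambda / (h c). The SPD is made up for the example, and the plant yield weighting is only stubbed, so with the default flat weights this computes PPF rather than true YPF.

```python
import numpy as np

H, C, N_A = 6.62607015e-34, 2.99792458e8, 6.02214076e23   # J s, m/s, 1/mol

def photon_flux_umol_per_joule(wavelengths_nm, spectral_power, weights=None):
    """Photon flux [umol/J] carried by a spectral power distribution over 400-700 nm:
    photons(lambda) = P(lambda) * lambda / (h c). `weights` can hold a relative
    plant yield response to approximate YPF; it defaults to flat (i.e. PPF)."""
    wl_m = wavelengths_nm * 1e-9
    w = np.ones_like(wl_m) if weights is None else weights
    photons_per_s = spectral_power * wl_m / (H * C) * w       # photons / (s nm)
    total = np.trapz(photons_per_s, wavelengths_nm)           # photons / s for 1 W of light
    return total / N_A * 1e6                                  # umol / J

# Hypothetical 2700 K warm-white-LED-like SPD: narrow blue peak plus broad phosphor emission
wl = np.arange(400.0, 701.0, 1.0)
spd = 0.15 * np.exp(-((wl - 450) / 12.0) ** 2) + 1.0 * np.exp(-((wl - 600) / 55.0) ** 2)
spd /= np.trapz(spd, wl)                                      # normalize to 1 W radiant power
print(photon_flux_umol_per_joule(wl, spd))                    # roughly 5 umol per joule of light
```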
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
First responder and physician liability during an emergency.
Eddy, Amanda
2013-01-01
First responders, especially emergency medical technicians and paramedics, along with physicians, will be expected to render care during a mass casualty event. It is highly likely that these medical first responders and physicians will be rendering care in suboptimal conditions due to the mass casualty event. Furthermore, these individuals are expected to shift their focus from individually based care to community- or population-based care when assisting disaster response. As a result, patients may feel they have not received adequate care and may seek to hold the medical first responder or physician liable, even if they did everything they could given the emergency circumstances. Therefore, it is important to protect medical first responders and physicians rendering care during a mass casualty event so that their efforts are not unnecessarily impeded by concerns about civil liability. In this article, the author looks at the standard of care for medical first responders and physicians and describes the current framework of laws limiting liability for these persons during an emergency. The author concludes that the standard of care and current laws fail to offer adequate liability protection for medical first responders and physicians, especially those in the private sector, and recommends that states adopt clear laws offering liability protection for all medical first responders and physicians who render assistance during a mass casualty event.
2013-12-01
Implementation of the current NPS TPL design procedure, which uses commercial-off-the-shelf (COTS) software (MATLAB, SolidWorks, and ANSYS CFX) for the geometric rendering and analysis, was modified and … (Glossary: CFX: the CFD simulation program in ANSYS Workbench; CFX-Pre: CFX boundary conditions and solver settings module; CFX-Solver: CFX solver program.)
Artist Rendering of NASA Dawn Spacecraft Approaching Mars
2009-05-23
Artist rendering of NASA's Dawn spacecraft approaching Mars. Dawn, part of NASA's Discovery Program of competitively selected missions, was launched in 2007 to orbit the large asteroid Vesta and the dwarf planet Ceres. The two bodies have very different properties from each other. By observing them both with the same set of instruments, Dawn will probe the early solar system and specify the properties of each body. http://photojournal.jpl.nasa.gov/catalog/PIA18152
ERIC Educational Resources Information Center
Ekinci, Hatice
2014-01-01
This study was conducted in order to develop a valid and reliable scale that can be used in measuring self-efficacy of candidate music teachers in rendering piano education to children of 6-12 years. To this end, a pool of 51 items was created by using the literature, and taking the opinions of piano professors and piano instructors working with…
Computer Graphics Research Laboratory Quarterly Progress Report Number 49, July-September 1993
1993-11-22
Contents include: Texture Sampling and Strength Guided Motion (Jeffry S. Nimeroff); Radiosity (Min-Zhi Shao); Blended Shape Primitives (Douglas DeCarlo)...placement; extensions of radiosity rendering; a discussion of blended shape primitives and their applications in computer vision and computer...user. Radiosity: an improved version of the radiosity renderer is included. This version uses a fast over-relaxation progressive refinement algorithm
ERIC Educational Resources Information Center
Commission on Civil Rights, Washington, DC.
Focusing on the extent and quality of services rendered to Negro rural families by the agencies of the Department of Agriculture, this study was conducted in counties where Negroes formed a significant portion of the varying potential clientele of the agencies. Research techniques used in the study included conferences and interviews with program…
Metropolitan Spokane Region Water Resources Study. Appendix C. Water Use
1976-01-01
rates of water use for the most recent period and on the sources of water, with lesser emphasis on the existing physical plant for water distribution...sanitary sewage flow to the City of Spokane sewage treatment plant. The indicated domestic component of use from the Vera I.D. and Washington Water...Chips; Seven-Up Bottling, Soft Drinks; Spokane Industrial Park, General Industry; Spokane Rendering, Rendering Plant; Spokesman-Review, Newspaper Printing
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or
Portable haptic interface with omni-directional movement and force capability.
Avizzano, Carlo Alberto; Satler, Massimo; Ruffaldi, Emanuele
2014-01-01
We describe the design of a new mobile haptic interface that employs wheels for force rendering. The interface, consisting of an omni-directional Killough type platform, provides 2DOF force feedback with different control modalities. The system autonomously performs sensor fusion for localization and force rendering. This paper explains the relevant choices concerning the functional aspects, the control design, the mechanical and electronic solution. Experimental results for force feedback characterization are reported.
Nonlinear Detection, Estimation, and Control for Free-Space Optical Communication
2008-08-17
original message. The promising features of this communication scheme, such as high bandwidth, power efficiency, and security, render it a viable means for high data rate point-to-point communication. In this dissertation, we adopt a...Department of Electrical and Computer Engineering. In free-space optical communication, the intensity of a laser beam is modulated by a message, the beam
Palmdale International Airport, Palmdale, California. Airport Development Program
1982-01-01
and ONT have rendered this system concept academic. • Concept B, described starting on page 209, is basically a reflection of the current situation...very different, and impacts the PIA in a very different manner. For example, the almost continuous use of the Complex 1 and Complex 4 MOAs will render...been described in considerable detail by Underhill (n.d.), Strong (1929), and others (Heizer 1978). Groups were subdivided into small bands
2004-03-01
predicting future events (Heizer and Render, 1999). Forecasting techniques fall into two major categories, qualitative and quantitative methods...Globemaster III.” Excerpt from website. www.globalsecurity.org/military/systems/aircraft/c-17-history.htm. 2003. Heizer, Jay, and Barry Render...of the past data used to make the forecast (Heizer, et al., 1999). Explanatory forecasting models assume that the variable being forecasted
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.
2016-08-29
The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.
Methane potential of sterilized solid slaughterhouse wastes.
Pitk, Peep; Kaparaju, Prasad; Vilu, Raivo
2012-07-01
The aim of the current study was to determine chemical composition and methane potential of Category 2 and 3 solid slaughterhouse wastes rendering products (SSHWRP) viz. melt, decanter sludge, meat and bone meal (MBM), technical fat and flotation sludge from wastewater treatment. Chemical analyses showed that SSHWRP were high in protein and lipids with total solids (TS) content of 96-99%. Methane yields of the SSHWRP were between 390 and 978 m(3) CH(4)/t volatile solids (VS)(added). Based on batch experiments, anaerobic digestion of SSHWRP from the dry rendering process could recover 4.6 times more primary energy than the energy required for the rendering process. Estonia has technological capacity to sterilize all the produced Category 2 and 3 solid slaughterhouse wastes (SSHW) and if separated from Category 1 animal by-products (ABP), it could be further utilized as energy rich input material for anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual Reality Systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, moving left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in its support for walking forward/backward.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
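The core idea of applying a transfer function to a per-voxel intensity pdf, rather than to a single filtered value, can be sketched in a few lines. The Gaussian-mixture parameters and the transfer function below are placeholders for illustration; they are not the paper's data structures or its out-of-core pipeline.

```python
import numpy as np

def expected_transfer(tf, intensity_axis, means, sigmas, weights):
    """Evaluate E[tf(v)] for a voxel whose intensity pdf is a 1D Gaussian
    mixture (means, sigmas, weights) -- the principle behind rendering from
    pdf volumes instead of from a single down-sampled intensity value."""
    dv = intensity_axis[1] - intensity_axis[0]
    v = intensity_axis[:, None]                          # (samples, 1)
    pdf = np.sum(weights * np.exp(-0.5 * ((v - means) / sigmas) ** 2)
                 / (sigmas * np.sqrt(2.0 * np.pi)), axis=1)
    pdf /= pdf.sum() * dv                                # numerical normalization
    return np.sum(tf(intensity_axis) * pdf) * dv         # quadrature of tf * pdf

# Placeholder transfer function: opacity peak highlighting intensities near 0.7.
tf = lambda v: np.exp(-((v - 0.7) / 0.05) ** 2)
axis = np.linspace(0.0, 1.0, 512)

# One coarse voxel whose neighborhood held two intensity populations.
value = expected_transfer(tf, axis,
                          means=np.array([0.20, 0.72]),
                          sigmas=np.array([0.03, 0.02]),
                          weights=np.array([0.6, 0.4]))
print(value)   # non-zero: the thin bright feature still contributes at coarse resolution
```

A plain down-sampled voxel would average the two populations to roughly 0.4 and the transfer function would miss the feature entirely, which is the inconsistency the pdf representation avoids.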
A review of haptic simulator for oral and maxillofacial surgery based on virtual reality.
Chen, Xiaojun; Hu, Junlei
2018-06-01
Traditional medical training in oral and maxillofacial surgery (OMFS) may be limited by its low efficiency and high cost due to the shortage of cadaver resources. With the combination of visual rendering and force feedback, surgery simulators are becoming increasingly popular in hospitals and medical schools as an alternative to traditional training. Areas covered: The major goal of this review is to provide a comprehensive reference source on current and future developments of haptic OMFS simulators based on virtual reality (VR) for relevant researchers. Expert commentary: Visual rendering, haptic rendering, tissue deformation, and evaluation are key components of a VR-based haptic surgery simulator. Compared with traditional medical training, the fusion of visual and tactile cues in the simulator's virtual environment enables a considerably more vivid sensation, and operators have more opportunities to practice surgical skills and receive objective evaluation as a reference.
Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.
Rajendiran, Nivedita; Durrant, Jacob D
2018-05-05
Molecular dynamics (MD) simulations provide critical insights into many biological mechanisms. Programs such as VMD, Chimera, and PyMOL can produce impressive simulation visualizations, but they lack many advanced rendering algorithms common in the film and video-game industries. In contrast, the modeling program Blender includes such algorithms but cannot import MD-simulation data. MD trajectories often require many gigabytes of memory/disk space, complicating Blender import. We present Pyrite, a Blender plugin that overcomes these limitations. Pyrite allows researchers to visualize MD simulations within Blender, with full access to Blender's cutting-edge rendering techniques. We expect Pyrite-generated images to appeal to students and non-specialists alike. A copy of the plugin is available at http://durrantlab.com/pyrite/, released under the terms of the GNU General Public License Version 3. © 2017 Wiley Periodicals, Inc.
WebGL-enabled 3D visualization of a Solar Flare Simulation
NASA Astrophysics Data System (ADS)
Chen, A.; Cheung, C. M. M.; Chintzoglou, G.
2016-12-01
The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. Paraview and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the javascript-based WebGL application programming interface (API) and the three.js javascript package, we create an online in-browser experience for rendering solar flare simulations that will be interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
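A rough numpy illustration of the resampling step, splatting particles into a regular grid with a smoothing kernel, is given below. The paper's perspective grid, level-of-detail particle sets, and GPU voxelization are not reproduced; the Gaussian kernel and the grid resolution are assumptions for the sketch.

```python
import numpy as np

def splat_particles(positions, masses, smoothing, grid_res=64, extent=1.0):
    """Resample SPH particles onto a uniform grid by accumulating a Gaussian
    kernel per particle (a CPU stand-in for GPU particle voxelization into a
    view-aligned grid)."""
    axis = np.linspace(0.0, extent, grid_res)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.zeros((grid_res,) * 3)
    for p, m, h in zip(positions, masses, smoothing):
        r2 = (gx - p[0])**2 + (gy - p[1])**2 + (gz - p[2])**2
        grid += m * np.exp(-r2 / (2.0 * h * h)) / ((2.0 * np.pi * h * h) ** 1.5)
    return grid

rng = np.random.default_rng(1)
pos = rng.random((200, 3))           # 200 particles in the unit cube
mass = np.full(200, 1.0 / 200)
h = np.full(200, 0.05)               # per-particle smoothing length
density = splat_particles(pos, mass, h)
print(density.sum() * (1.0 / 63) ** 3)   # roughly integrates to the total mass
```

In the paper this resampling happens per view into a perspective grid whose cells grow with distance, so the same particle budget yields a continuous level of detail along each ray.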
Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen
1997-01-01
In the second year, we continued to build upon and improve our scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data changes but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique, at present, is limited to use with regular grids, we are pursuing possible algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation. We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences of the images using the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the image and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
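The image-comparison idea can be sketched as follows. The specific metrics below (mean absolute RGB difference and the power spectrum of the luminance difference) are generic stand-ins; the actual idtools metrics are not specified here beyond the color-model and Fourier analyses named in the report.

```python
import numpy as np

def compare_renderings(img_a, img_b):
    """Compare two renderings of the same scene (H x W x 3 float arrays in [0, 1]).
    Returns the per-pixel RGB difference image, its mean absolute error, and the
    magnitude spectrum of the luminance difference (crude stand-ins for the
    RGB/HSV/HSL, autocorrelation and Fourier metrics described above)."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    mae = np.mean(np.abs(diff))
    luminance_diff = diff @ np.array([0.299, 0.587, 0.114])   # weighted to gray
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(luminance_diff)))
    return diff, mae, spectrum

# Toy example: a rendering and a slightly perturbed (e.g. decimated) version.
rng = np.random.default_rng(0)
reference = rng.random((128, 128, 3))
approximate = np.clip(reference + 0.02 * rng.standard_normal(reference.shape), 0, 1)
_, mae, spectrum = compare_renderings(reference, approximate)
print(f"mean absolute RGB difference: {mae:.4f}")
```

Such per-image scores, averaged over the 32 sphere viewpoints, would give one candidate quantitative measure of the effect of an approximation level.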
NASA Astrophysics Data System (ADS)
Yoon, Jayoung; Kim, Gerard J.
2003-04-01
Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image space criteria are used; however, the switch between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
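A schematic version of that selection rule is sketched below: the image-based representation gives way to the 3D model during interaction or once the object's internal depth would subtend a perceptible visual angle. The thresholds used here are arbitrary placeholders, not the perceptual values derived in the paper.

```python
import math

def choose_representation(distance, object_depth, interacting,
                          has_3d_model=True, depth_angle_threshold_deg=0.5):
    """Pick a representation for one scene-graph node. The 3D model is used
    during interaction or when the object's internal depth subtends more than
    a (placeholder) visual-angle threshold; otherwise cheaper image-based
    representations are chosen, roughly mirroring the LOD idea above."""
    if interacting and has_3d_model:
        return "3d_model"
    # Visual angle subtended by the object's internal depth at this distance.
    depth_angle = math.degrees(2.0 * math.atan2(object_depth / 2.0, distance))
    if has_3d_model and depth_angle > depth_angle_threshold_deg:
        return "3d_model"
    if depth_angle > 0.1:            # mid range: a flat billboard still looks right
        return "billboard"
    return "environment_map"         # far range: fold the object into the environment map

print(choose_representation(distance=2.0, object_depth=0.5, interacting=False))    # 3d_model
print(choose_representation(distance=200.0, object_depth=0.5, interacting=False))  # billboard
```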
Chrono: A Parallel Physics Library for Rigid-Body, Flexible-Body, and Fluid Dynamics
2013-08-01
big data. Chrono::Render is capable of using 320 cores and is built around Pixar's RenderMan. All these components combine to produce Chrono, a multi...rather small collection of rigid and/or deformable bodies of complex geometry (hourglass wall, wheel, track shoe, excavator blade, dipper), and a...motivated by the scope of arbitrary data sets and the potentially immense scene complexity that results from big data; REYES, the underlying architecture
12. Photograph of a photograph in possession of Rock Island ...
12. Photograph of a photograph in possession of Rock Island Arsenal Historical Office. BIRD'S-EYE RENDERING; LOOKING SW. TNT BUILDING (SEE HAER NO. IL-20V) IS SHOWN AT THE UPPER LEFT, ATTACHED BY OVERHEAD PASSAGEWAYS TO THE BUILDING'S SOUTH ELEVATION. RENDERING PREPARED BY WESTINGHOUSE-CHURCH-KERR COMPANY OF NEW YORK. DATED APRIL 18, 1917. - Rock Island Arsenal, Building No. 250, Gillespie Avenue between Ramsey Street & South Avenue, Rock Island, Rock Island County, IL
3D surface rendered MR images of the brain and its vasculature.
Cline, H E; Lorensen, W E; Souza, S P; Jolesz, F A; Kikinis, R; Gerig, G; Kennedy, T E
1991-01-01
Both time-of-flight and phase contrast magnetic resonance angiography images are combined with stationary tissue images to provide data depicting two contrast relationships, yielding intrinsic discrimination of brain matter and flowing blood. A computer analysis is based on nearest neighbor segmentation and the connection between anatomical structures to partition the images into different tissue categories, from which high-resolution brain parenchymal and vascular surfaces are constructed and rendered in juxtaposition, aiding in surgical planning.
Baughman, Richard J.; Ginley, David S.
1984-01-01
A surface prone to corrosion in corrosive environments is rendered anticorrosive by CVD growing a thin continuous film, e.g., having no detectable pinholes, thereon, of boron phosphide. In one embodiment, the film is semiconductive. In another aspect, the invention is an improved photoanode and/or photoelectrochemical cell with a photoanode having a thin film of boron phosphide thereon, rendering it anticorrosive and providing it with unexpectedly improved photoresponsive properties.
2006-03-01
factors that “maximize the benefit of location to the firm” (Heizer & Render, 2004:302-307). In the book, Facility Location: Applications and Theory...Fylstra, D., Lasdon, L., Watson, J. and Waren, A. “Design and Use of the Microsoft Excel Solver,” Interfaces, 28(5):29-55, 1998. Heizer, Jay...and Render, Barry. Principles of Operations Management (5th ed.). New Jersey: Pearson Education Inc., 2004. Hofstra University. (n.d.). Von
Whole high-quality light environment for humans and plants.
Sharakshane, Anton
2017-11-01
Plants sharing a single light environment with a human being on a spaceship and serving a decorative function should look as natural and attractive as possible. Consequently, they can be illuminated only with white light with a high color rendering index. Can lighting optimized for the human eye be effective and appropriate for plants? Spectrum-based effects have been compared under artificial lighting of plants by high-pressure sodium lamps and general-purpose white LEDs. It has been shown that, for the surveyed sample, the phytochrome photo-equilibrium does not depend significantly on the parameters of white LED light, while the share of phytoactive blue light grows significantly as the color temperature increases. It has also been shown that yield photon flux is proportional to luminous efficacy and increases as the color temperature decreases and as the general color rendering index Ra and the special color rendering index R14 (green leaf) increase. General-purpose white LED lamps with a color temperature of 2700 K, Ra > 90 and luminous efficacy of 100 lm/W are as efficient as the best high-pressure sodium lamps, and at a higher luminous efficacy their yield photon flux per joule is proportionally even higher. Here we show that the demand for high color rendering white LED light does not contradict agro-technical objectives. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.
2017-10-01
The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
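A CPU-side analogue of the temporal-smoothing step (the paper performs the equivalent per texel on the GPU with CUDA) is a simple blend between time-adjacent satellite frames. The frame contents below are random placeholders, not CO2 flux data.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, alpha):
    """Linearly blend two time-adjacent CO2-flux fields so the particle
    animation appears smooth between satellite observations."""
    return (1.0 - alpha) * frame_a + alpha * frame_b

# Placeholder monthly flux grids (lat x lon); positive = source, negative = sink.
rng = np.random.default_rng(7)
january = rng.standard_normal((180, 360))
february = rng.standard_normal((180, 360))

# Render 10 intermediate fields between the two observations.
for step in range(10):
    field = interpolate_frames(january, february, alpha=step / 9.0)
print(field.mean())
```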
Tiled vector data model for the geographical features of symbolized maps.
Li, Lin; Hu, Wei; Zhu, Haihong; Li, You; Zhang, Hang
2017-01-01
Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map that is based on raster tiles is inferior to vector tiles in terms of interactive ability because vector maps provide a convenient and effective method to access and manipulate web map features. However, the critical issue regarding rendering tiled vector maps is that geographical features that are rendered in the form of map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and losses of data around the borders of tiles, which likely represent the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. This model presents a method to tailor geographical features in terms of map symbols and 'addition' (join) operations on the following two levels: geographical features and map features. Thus, these maps can resolve the visual discontinuity problem based on the proposed model without weakening the interactivity of vector maps. The proposed model is validated by two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.
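One way to picture the 'addition' (join) operation is to merge the fragments of a feature that were tailored to different tiles back into a single renderable geometry keyed by feature id, so no seams appear at tile borders. The data layout below is a deliberate simplification, not the paper's data model.

```python
from collections import defaultdict

def join_tiled_features(tiles):
    """Merge per-tile fragments of the same geographical feature (keyed by
    feature id) so the client can render it without graphic conflicts or
    losses at tile borders. Each tile is a list of dicts:
    {"id": ..., "symbol": ..., "parts": [list of coordinate sequences]}."""
    merged = {}
    parts = defaultdict(list)
    for tile in tiles:
        for frag in tile:
            merged.setdefault(frag["id"], {"id": frag["id"], "symbol": frag["symbol"]})
            parts[frag["id"]].extend(frag["parts"])
    for fid, feature in merged.items():
        feature["parts"] = parts[fid]
    return list(merged.values())

# A road symbolized as a wide line, split across two adjacent tiles.
tile_a = [{"id": "road-42", "symbol": "major_road",
           "parts": [[(0.0, 0.5), (1.0, 0.55)]]}]
tile_b = [{"id": "road-42", "symbol": "major_road",
           "parts": [[(1.0, 0.55), (2.0, 0.6)]]}]
print(join_tiled_features([tile_a, tile_b]))
```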
Research on Visualization of Ground Laser Radar Data Based on Osg
NASA Astrophysics Data System (ADS)
Huang, H.; Hu, C.; Zhang, F.; Xue, H.
2018-04-01
Three-dimensional (3D) laser scanning is a new advanced technology integrating light, machine, electricity, and computer technologies. It can scan the whole shape and form of spatial objects in 3D with high precision. With this technology, one can directly collect the point cloud data of a ground object and reconstruct its structure for rendering. A capable 3D rendering engine is needed to optimize and display the 3D model in order to meet the demands of real-time realistic rendering of complex scenes. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend. Therefore, OSG is widely used in the fields of virtual simulation, virtual reality, and science and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulation network data in .obj format, display of 3D laser point clouds and triangulation networks is realized. Experiments show that the platform is of strong practical value: it is easy to operate and provides good interaction.
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. The orthopaedic surgical navigation systems could be categorized according to the image modalities that are used for the visualization of surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy could be constructed from the preoperatively acquired tomographic data or through intraoperatively digitized anatomy landmarks, a photorealistic rendering of the surgical action has been identified to greatly improve usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures including total hip arthroplasty and long bone fracture reduction and osteosynthesis are shown.
NASA Astrophysics Data System (ADS)
Leuschner, F. W.; Van Der Westhuyzen, J. G. J.
2014-06-01
The technology for the measurement of colour rendering and colour quality is not new, but many parameters related to this issue are currently changing. A number of standard methods were developed and are used by different specialty areas of the lighting industry. CIE 13.3 has been the accepted standard implemented by many users and used for many years. Light-emitting Diode (LED) technology moves at a rapid pace and, as this lighting source finds wider acceptance, it appears that traditional colour-rendering measurement methods produce inconsistent results. Practical application of various types of LEDs yielded results that challenged conventional thinking regarding colour measurement of light sources. Recent studies have shown that the anatomy and physiology of the human eye is more complex than formerly accepted. Therefore, the development of updated measurement methodology also forces a fresh look at functioning and colour perception of the human eye, especially with regard to LEDs. This paper includes a short description of the history and need for the measurement of colour rendering. Some of the traditional measurement methods are presented and inadequacies are discussed. The latest discoveries regarding the functioning of the human eye and the perception of colour, especially when LEDs are used as light sources, are discussed. The unique properties of LEDs when used in practical applications such as luminaires are highlighted.
A Novel Approach to Visualizing Dark Matter Simulations.
Kaehler, R; Hahn, O; Abel, T
2012-12-01
In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
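The central quantity behind these renderings, the density carried by each deformed tessellation cell, can be computed as tracer mass over tetrahedron volume. The sketch below assumes a fixed mass per tetrahedron and omits the full phase-space bookkeeping and GPU pipeline of the paper.

```python
import numpy as np

def tetrahedron_density(vertices, mass):
    """Density contributed by one tessellation cell: the (fixed) tracer mass
    divided by the current volume of the tetrahedron spanned by the positions
    of its four dark-matter particles (vertices is a 4 x 3 array)."""
    a, b, c, d = np.asarray(vertices, dtype=float)
    volume = abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0
    return mass / volume if volume > 0 else np.inf   # overlapping streams -> caustic

# Initially regular cell...
initial = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
# ...later deformed (and compressed) by gravity.
deformed = [(0, 0, 0), (0.6, 0.1, 0.0), (0.1, 0.5, 0.05), (0.0, 0.1, 0.4)]
print(tetrahedron_density(initial, mass=1.0))    # ~6.0
print(tetrahedron_density(deformed, mass=1.0))   # much higher: density grows as the cell collapses
```

As cells warp and overlap, several tetrahedra can cover the same point in space; summing their densities there is what preserves the caustics that splatting-based estimates tend to blur away.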
Rendering plant emissions of volatile organic compounds during sterilization and cooking processes.
Bhatti, Z A; Maqbool, F; Langenhove, H V
2014-01-01
The rendering process emits odorous volatile compounds into the atmosphere; if these volatile organic compounds (VOCs) are not handled properly they can cause a serious environmental problem. Not all compounds emitted during this process are odorous and hazardous, but some of them have been associated with health problems. Samples were collected in plastic bags from the Arnout rendering plant. In this study, VOC emissions from two different processes (cooking and sterilization) were compared. A gas chromatograph and mass spectrophotometer were used for the analysis of the emitted compounds. A sterilization process was added to the rendering plant to inactivate the prion protein in the meat bone meal prepared during the rendering process. Mass spectra were identified using a mass spectral database system. The most odorous classes of compounds identified were aliphatic hydrocarbons (HCs) (29.24%), furans (28.74%), aromatic HCs (18.32%), the most important sulphur-containing compounds (12.15%), aldehydes (10.91%) and ketones (0.60%). Emissions released during cooking and sterilization were 32.73 x 10(2) and 36.85 x 10(2) mg m(-3), respectively. It was observed that VOC emissions increased after the addition of the sterilization process. A total of 87 mg m(-3) dimethyl disulphide (DMS) was detected only during the cooking process, whereas dimethyl trisulphide (DMTS) was detected in both the cooking (300 mg m(-3)) and sterilization (301 mg m(-3)) processes. About 11 mg m(-3) of DMS was detected during the cooking process, which was a small concentration compared with the 299 mg m(-3) found during the sterilization process. At high temperature and pressure, DMTS and DMS were released more than any other sulphur-containing compounds. A condenser was applied to control the combined emissions and succeeded in reducing VOCs to 22.83 x 10(2) mg m(-3) (a 67% reduction).
Fowler, Daniel W; Copier, John; Dalgleish, Angus G; Bodman-Smith, Mark D
2017-09-01
Vδ2 + T cells are a subpopulation of γδ T cells in humans that are cytotoxic towards cells which accumulate isopentenyl pyrophosphate. The nitrogen-containing bisphosphonate, zoledronic acid (ZA), can induce tumour cell lines to accumulate isopentenyl pyrophosphate, thus rendering them more susceptible to Vδ2 + T cell cytotoxicity. However, little is known about whether ZA renders other, non-malignant cell types susceptible. In this study we focussed on macrophages (Mϕs), as these cells have been shown to take up ZA. We differentiated peripheral blood monocytes from healthy donors into Mϕs and then treated them with IFN-γ or IL-4 to generate M1 and M2 Mϕs, respectively. We characterised these Mϕs based on their phenotype and cytokine production and then tested whether ZA rendered them susceptible to Vδ2 + T cell cytotoxicity. Consistent with the literature, IFN-γ-treated Mϕs expressed higher levels of the M1 markers CD64 and IL-12p70, whereas IL-4-treated Mϕs expressed higher levels of the M2 markers CD206 and chemokine (C-C motif) ligand 18. When treated with ZA, both M1 and M2 Mϕs became susceptible to Vδ2 + T cell cytotoxicity. Vδ2 + T cells expressed perforin and degranulated in response to ZA-treated Mϕs as shown by mobilisation of CD107a and CD107b to the cell surface. Furthermore, cytotoxicity towards ZA-treated Mϕs was sensitive-at least in part-to the perforin inhibitor concanamycin A. These findings suggest that ZA can render M1 and M2 Mϕs susceptible to Vδ2 + T cell cytotoxicity in a perforin-dependent manner, which has important implications regarding the use of ZA in cancer immunotherapy.
LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance
NASA Astrophysics Data System (ADS)
Ellul, C.; Altenbuchner, J.
2013-09-01
The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
A New Approach to the Visual Rendering of Mantle Tomography
NASA Astrophysics Data System (ADS)
Holtzman, B. K.; Pratt, M. J.; Turk, M.; Hannasch, D. A.
2016-12-01
Visualization of mantle tomographic models requires a range of subjective aesthetic decisions that are often made subconsciously or left unarticulated by authors. Many of these decisions affect the interpretation of the model, and therefore should be articulated and understood. In 2D, these decisions are manifest in the choice of colormap, including the data values associated with the neutral/transitional color band, as well as the correspondence between the extrema in the colormap and the extrema of the mapped parameter. For example, we generally choose warm colors to signify slow and cool colors to signify fast velocities (or perturbations), but where is the transition, and how do the colors grade from the transition to the extrema? In 3D, volumes are generally rendered by choosing an isosurface of a velocity perturbation (relative to a model at each depth) and coloring it slow to fast. The choice of isosurface is arbitrary or guided by a researcher's intuition, again strongly affecting (or driven by) the interpretation. Here, we present a different approach to 3D rendering of tomography models, using true volumetric rendering with "yt", a python package for visualization and analysis of data. In our approach, we do not use isosurfaces; instead, we render the extrema in the tomographic model as the most opaque, with an opacity function that falls to zero (totally transparent) at dynamically selected values, or at the average value at each depth. The intent is that the most robust aspects of the model are visually clear, and the visualization emphasizes the nature of the interfaces between regions as well as the form of distinct mantle regions. Much of the current scientific discussion in upper mantle tomography focuses on the nature of interfaces, so we will demonstrate how decisions about the definition of the transparent regions influence interpretation of tomographic models. Our aim is to develop a visual language for tomographic visualization that can help focus geodynamic questions.
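The opacity design described above can be sketched with plain numpy: opacity is maximal at the model extrema and falls to zero at a per-depth reference value rather than being tied to an arbitrary isosurface. The shaping exponent and the synthetic slice are assumptions, not the authors' yt transfer functions.

```python
import numpy as np

def opacity_no_isosurface(dvs, reference, sharpness=2.0):
    """Map velocity perturbations (dvs) to opacity: fully transparent at the
    reference value (e.g. the average at that depth), most opaque at the
    extrema. 'sharpness' controls how fast opacity ramps up and is an
    arbitrary choice here."""
    deviation = np.abs(dvs - reference)
    max_dev = deviation.max()
    if max_dev == 0:
        return np.zeros_like(dvs)
    return (deviation / max_dev) ** sharpness

# One depth slice of a toy tomography model (percent velocity perturbation).
rng = np.random.default_rng(3)
slice_dv = rng.normal(loc=0.2, scale=1.0, size=(90, 180))
alpha = opacity_no_isosurface(slice_dv, reference=slice_dv.mean())
print(alpha.min(), alpha.max())   # near 0 close to the depth average, 1 at the extremum
```

Because the zero-opacity value can be chosen per depth, the rendering emphasizes where a region departs from its surroundings instead of where it crosses a fixed threshold.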
Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.
Kim, K
2016-08-01
To examine psoriasis or atopic eczema, analyzing skin roughness by palpation is essential for precisely diagnosing skin diseases. However, optical sensor based skin imaging systems do not allow dermatologists to touch skin images. To solve this problem, a new haptic rendering technology that can accurately display skin roughness must be developed. In addition, the rendering algorithm must be able to filter spatial noise created during 2D to 3D image conversion without losing the original roughness of the skin image. In this study, a perceptual way to design a noise filter that removes spatial noise while recovering maximal roughness is introduced, based on an understanding of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user see and touch digital skin surface roughness has been developed, including a geometric roughness estimation method for a meshed surface. Following this, a psychophysical experiment was designed and conducted with 12 human subjects to measure human perception with the developed visual and haptic interfaces when examining surface roughness. The psychophysical experiment showed that touch is more sensitive at lower surface roughness, and vice versa. Human perception with both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When interacting through both channels, the visual and haptic interfaces, the ability to detect abnormalities in roughness is greatly improved by sensory integration with the developed visuohaptic rendering system. The result can be used as a guideline to design a noise filter that perceptually removes spatial noise while recovering maximal roughness values from a digital skin image obtained by optical sensors. In addition, the result also confirms that the developed visuohaptic rendering system can help dermatologists or skin care professionals examine skin conditions using vision and touch at the same time. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
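To make the roughness/noise trade-off concrete, here is a small sketch that estimates roughness of a skin height map as the RMS deviation from a smoothed baseline after light denoising. The two Gaussian smoothing widths are stand-ins for the perceptually tuned filter of the study, not values from it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rms_roughness(height_map, baseline_sigma=8.0, noise_sigma=0.8):
    """Estimate surface roughness from a skin height map (2D array, mm).
    A light blur suppresses pixel-level reconstruction noise, a heavy blur
    defines the large-scale baseline, and roughness is the RMS of what is
    left in between. Both sigmas are illustrative only."""
    denoised = gaussian_filter(height_map, noise_sigma)
    baseline = gaussian_filter(height_map, baseline_sigma)
    return np.sqrt(np.mean((denoised - baseline) ** 2))

# Toy skin patch: gentle curvature + fine texture + sensor noise.
rng = np.random.default_rng(5)
y, x = np.mgrid[0:256, 0:256]
patch = (0.05 * np.sin(x / 40.0)
         + 0.01 * np.sin(x / 3.0) * np.sin(y / 3.0)
         + 0.002 * rng.standard_normal((256, 256)))
print(f"estimated roughness: {rms_roughness(patch):.4f} mm")
```

The study's psychophysical result suggests how aggressively the noise blur can be set at each roughness level before users would notice the loss.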
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Momose, T.; Oku, S.
It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make only one 3D image. Therefore it has not been practical to make the brain surface images in arbitrary directions on a real-time basis using ordinary work stations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of resulting images is comparable to that of VRT and computation time to SRT. In VSRT the process of volume rendering is done only once to the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses a similar algorithm to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 sec. in VSRT, while that is more than 15 sec. in the conventional VRT. The difference of resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR image is very useful and practical in the functional and anatomical correlation study.
The use of oak chips and coconut fiber as biofilter media to remove VOCs in the rendering process.
Tymczyna, Leszek; Chmielowiec-Korzeniowska, Anna; Paluszak, Zbigniew; Dobrowolska, Magadalena; Banach, Marcin; Pulit, Jolanta
2013-01-01
The study evaluated the effectiveness of air biofiltration in rendering plants. The biofilter material comprised compost soil (40%) and peat (40%) mixed with coconut fiber (medium A) and oak bark (medium B). During biofiltration, the average VOC reduction reached 88.4% for medium A and 89.7% for medium B. A positive relationship between aldehyde reduction and material humidity (r = 0.502; α < 0.05) was also noted. Other biomaterial parameters did not affect the treatment efficiency.
The Visible Human Project: From Body to Bits.
Ackerman, Michael J
2017-01-01
Atlases of anatomy have long been a mainstay for visualizing and identifying features of the human body [1]. Many are constructed of idealized illustrations rendered so that structures are presented as three-dimensional (3-D) pictures. Others have employed photographs of actual dissections. Still others are composed of collections of artist renderings of organs or areas of interest. All rely on a basically two-dimensional (2-D) graphic display to depict and allow for a better understanding of a complicated 3-D structure.
Estimation of ovular fiber production in cotton
Van't Hof, Jack
1998-09-01
The present invention is a method for rendering post-anthesis, pre-harvest cotton fiber cells available for analysis of their physical properties. The method includes the steps of hydrolyzing cotton fiber cells and separating the cotton fiber cells from the cotton ovules, thereby rendering the cells available for analysis. The analysis of the fiber cells may be performed by any suitable means, e.g., visual inspection. Visual inspection of the cells can be accomplished by placing the cells under an instrument for detection, such as a microscope or other means.
Operating System Support for Mobile Interactive Applications
2002-08-01
Scene and polygon counts from the report's test models: Taj Mahal, 127406 polygons; Café, 138598; Notre Dame, 160206; Buckingham Palace (interior), 235572. [Figure residue: plots of rendering demand (millions of cycles) versus number of polygons rendered for the four scenes, under (a) a random camera position and (b) a fixed camera position; the horizontal axis is the number of polygons rendered, expressed relative to the original model size.]
A Genome-Wide Knockout Screen to Identify Genes Involved in Acquired Carboplatin Resistance
2016-07-01
a GeCKOv2 library screen to identify genes that, when knocked out, render human ovarian cells > 2.5-fold resistant to CBDCA; 2) validate the ability of...resistance in either cell lines or clinical samples. The CRISPR-Cas9 technology now provides us with a major new tool to introduce knockout mutations
Archeological Investigations in Cochiti Reservoir, New Mexico. Volume 3. 1976-1977 Field Seasons.
1979-01-01
or methods are in a constant state of flux, and will undoubtedly continue so. The present In 1959, Baumhoff and Heizer suggested that the sys- paper...marrow extraction and when as estimates rather than counts were insect bodies and rendering bone grease. parts (10-25%), cocoons/larvae/eggs (1-10%), and...A yielded rendering bone grease or making soup. The association of 40 burned bone fragments. A 500 ml sample from grid the unidentified fragments and
1997-03-01
these historic resources, rendering them the least preferable alternatives with respect to cultural resources. 2.3.2.4 Visual Resources 1 Construction of...communication). Other measures, however, were interrupted by the decision in 1995 to close the base, an action that rendered many mitigation measures unnecessary...of North American Indians, Vol. 8 (California), pp. 485-495. Edited by R. F. Heizer. Smithsonian Institute, Washington, DC. Lienkaemper, J.J. 1992
1998-04-01
Valley (Kroeber & Heizer 1970). In 1972, the Bureau of Indian Affairs listed only 11 individuals claiming Patwin ancestry in the entire territory...facility from the dredge disposal area to the upland open space scenic resource area would render this facility visible from viewpoints with high...take. The COE probably would not issue a permit unless the USFWS rendered a "non-jeopardy" Biological Opinion, which would incorporate mitigations for
1982-09-01
frequently awkward verbiage, thus rendering the report more readable. Richard Walling produced the figures and made many constructive comments on the...the Cobbs Swamp complex (Chase 1978), had developed into the Renderson complex (Dickens 1971). By approximately A.D. 400, check and simple j...Methods in Archaeology, edited by Robert F. Heizer and Sherburne F. Cook, pp. 60-92. Viking Fund Publications in Anthropology 28. Chicago. Stephenson
A parallel coordinates style interface for exploratory volume visualization.
Tory, Melanie; Potts, Simeon; Möller, Torsten
2005-01-01
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
Volumetric Visualization of Human Skin
NASA Astrophysics Data System (ADS)
Kawai, Toshiyuki; Kurioka, Yoshihiro
We propose a modeling and rendering technique for human skin which can provide realistic color, gloss and translucency for various applications in computer graphics. Our method is based on a volumetric representation of the structure inside the skin. Our model consists of the stratum corneum and three layers of pigments. The stratum corneum also has a layered structure in which the incident light is reflected, refracted and diffused. Each layer of pigments contains carotene, melanin or hemoglobin. The density distributions of pigments, which define the color of each layer, can be supplied as one of the voxel values. Surface normals of upper-side voxels are perturbed to produce bumps and lines on the skin. We apply a ray tracing approach to this model to obtain the rendered image. Multiple scattering in the stratum corneum and the reflective and absorptive spectra of the pigments are considered. We also consider the Fresnel term to calculate the specular component for the glossy surface of skin. Some example rendered images are shown, which successfully visualize human skin.
3D chromosome rendering from Hi-C data using virtual reality
NASA Astrophysics Data System (ADS)
Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing
2015-01-01
Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but could be spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
Toward the light field display: autostereoscopic rendering via a cluster of projectors.
Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher
2008-01-01
Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers from different viewpoints a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.
NASA Astrophysics Data System (ADS)
Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena
2014-03-01
In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language through displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
Query-Driven Visualization and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.
2012-11-01
This report focuses on an approach to high performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy---extracting smaller data subsets of interest and focusing the visualization processing on these subsets---is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV, their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high energy physics.
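In spirit, the data-reduction step amounts to evaluating a compound range query and handing only the matching records to the analysis and rendering stages. The field names, thresholds, and synthetic data below are invented for illustration; they are not from the report.

```python
import numpy as np

def query_driven_subset(fields, conditions):
    """Return indices of records satisfying every (field, low, high) range
    condition, so that downstream analysis and rendering only touch the
    'scientifically interesting' subset."""
    mask = np.ones(len(next(iter(fields.values()))), dtype=bool)
    for name, low, high in conditions:
        mask &= (fields[name] >= low) & (fields[name] <= high)
    return np.nonzero(mask)[0]

# One million synthetic particles with two attributes (names are made up).
rng = np.random.default_rng(11)
data = {"energy": rng.exponential(1.0, 1_000_000),
        "temperature": rng.normal(300.0, 50.0, 1_000_000)}

interesting = query_driven_subset(data, [("energy", 5.0, np.inf),
                                         ("temperature", 400.0, np.inf)])
print(f"{interesting.size} of {data['energy'].size} records pass the query")
```

Only the small index set (or the records it selects) then flows into the visualization pipeline, which is the complementary lever to simply parallelizing that pipeline.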
INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF.
Hershfield, Hal E; Goldstein, Daniel G; Sharpe, William F; Fox, Jesse; Yeykelis, Leo; Carstensen, Laura L; Bailenson, Jeremy N
2011-11-01
Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real-time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to the generation of the scenes, calculation of the range values, and outputting the range data for a LADAR seeker.
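As a rough illustration of converting depth-buffer samples to range values, the following sketch assumes an OpenGL-style perspective depth encoding and a hypothetical 24-bit buffer; the exact formula used on the Onyx Reality Engine is not given in the abstract.

```python
import numpy as np

def depth_to_range(zbuf, near, far, bits=24):
    """Convert integer depth-buffer samples to eye-space range (same units as
    near/far). Assumes an OpenGL-style perspective depth encoding; this is a
    hypothetical stand-in for the formula mentioned in the abstract."""
    d = zbuf.astype(np.float64) / (2 ** bits - 1)   # normalize to [0, 1]
    z_ndc = 2.0 * d - 1.0                           # to normalized device coordinates
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))
```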
Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.
von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar
2017-09-01
Stochastically solving the rendering integral (particularly visibility) is the de-facto standard for physically-based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with an unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for a joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.
Rendering of HDR content on LDR displays: an objective approach
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick
2015-09-01
Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is however non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
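A minimal sketch of objective TMO parameter selection, assuming a simple one-parameter global operator and a user-supplied quality index as a stand-in for the contrast-reversal/naturalness measure described above (none of the names below come from the paper):

```python
import numpy as np

def tone_map(hdr, a):
    """One-parameter global operator (Reinhard-style); a stand-in for
    whichever TMO is being tuned."""
    key = np.exp(np.mean(np.log(hdr + 1e-6)))   # log-average luminance
    scaled = a * hdr / key
    return scaled / (1.0 + scaled)

def pick_parameter(hdr, objective, candidates=np.linspace(0.05, 1.0, 20)):
    """Grid search: evaluate the objective quality index for each candidate
    parameter value and keep the best one."""
    scores = [objective(hdr, tone_map(hdr, a)) for a in candidates]
    return float(candidates[int(np.argmax(scores))])
```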
Complex adaptation-based LDR image rendering for 3D image reconstruction
NASA Astrophysics Data System (ADS)
Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik
2014-07-01
A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
Dean, Jessica; Mahar, Patrick; Loh, Erwin; Ludlow, Karinne
2013-10-01
Medical practitioners may have their particular skills called upon outside a direct professional context. The responsibilities of medical practitioners outside their defined scope of clinical practice may not be clear to all clinicians. To consider the possible legal consequences of a doctor refusing to assist a person in need of urgent medical attention both in terms of medical negligence and professional misconduct. Where an established clinical relationship does not exist, and a doctor does not wish to render aid, three particular scenarios may arise. A doctor may actively deny being a doctor, passively avoid identifying themselves as a doctor or acknowledge being a doctor, but refuse to render assistance. Aside from any ethical issues, how a doctor chooses to act and represent themselves may lead to different legal ramifications. There exists significant variation in state provisions relating to legal obligations to render aid, which may benefit from review and revision at a national level.
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field has experienced a surge of popularity, mainly due to the recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner and simultaneously track users' behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows an interesting correlation between subjective scores and average interaction time.
Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy
2014-04-01
This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
Quality improving techniques for free-viewpoint DIBR
NASA Astrophysics Data System (ADS)
Do, Luat; Zinger, Sveta; de With, Peter H. N.
2010-02-01
Interactive free-viewpoint selection applied to a 3D multi-view signal is a possible attractive feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free-viewpoint based on depth image warping between two reference views from existing cameras. We have developed three quality enhancing techniques that specifically aim at solving the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed while omitting warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
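To make the warping step concrete, here is a minimal forward-warping sketch for rectified cameras; the disparity relation d = f * B / Z, the z-test, and the hole mask are generic DIBR ingredients under those assumptions, not the paper's exact algorithm.

```python
import numpy as np

def forward_warp(ref_img, depth, f, baseline):
    """Warp a reference view horizontally toward a virtual viewpoint using
    per-pixel disparity d = f * baseline / Z (rectified pinhole assumption)."""
    h, w = depth.shape
    out = np.zeros_like(ref_img)
    zbuf = np.full((h, w), np.inf)
    holes = np.ones((h, w), dtype=bool)
    disp = f * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + disp[y, x]))
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:   # keep the nearest sample
                zbuf[y, xt] = depth[y, x]
                out[y, xt] = ref_img[y, x]
                holes[y, xt] = False
    return out, holes   # holes would then be filled by median filtering / inpainting
```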
State of the "art": a taxonomy of artistic stylization techniques for images and video.
Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias
2013-05-01
This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
NASA Astrophysics Data System (ADS)
Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.
2005-03-01
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
Impact of an in situ laboratory on physician expectancy.
Brulé, Romain; Sarazin, Marianne; Tayeb, Nicole; Roubille, Martine; Szymanowicz, Anton
2018-01-01
Biological examinations are essential for clinicians' medical care. The aim of this study is to assess clinicians' expectations in healthcare facilities and their perception of medical biology in different types of organization. We performed a prospective transversal study by electronic questionnaire conducted among 242 practitioners in four healthcare facilities. The aspects explored were as follows: quality, reliability, rendering time of examination results and biology platform support. Analyses were conducted after rectification of the sample by weight. Sixty-one clinicians responded (25.2% [19.7-30.7]). The rendering time of examination results is the main criterion mentioned, with a requirement of less than one hour in case of emergency (81.5% [71.8-91.2] of the answers) to less than 72 hours for specialized examinations (81.5% [71.8-91.2] of the answers). Better collaboration with biologists is expected by clinicians (54.7% [50.9-58.5]). Satisfaction with the biology platform support and the rendering time of emergency case results was significantly (p <0.005) lower in facilities without an on-site laboratory. In conclusion, although medical biology performance is generally satisfactory within medical facilities, it remains nonetheless affected when the laboratory is not on site. The rendering time of examination results, depending on the biology platform support functions and the proximity of the laboratory, remains the main criterion. Clinician-biologist collaboration, which increases the medico-economic efficiency of the patient's healthcare, appears as an essential criterion in a structural conception of medical biology.
Unconscious neural processing differs with method used to render stimuli invisible.
Fogelson, Sergey V; Kohler, Peter J; Miller, Kevin J; Granger, Richard; Tse, Peter U
2014-01-01
Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools, that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.
Flies and their bacterial loads in greyhound dog kennels in Kansas.
Urban, J E; Broce, A
1998-03-01
Breeders of greyhound dogs traditionally feed racing animals and nursing bitches raw meat, and that meat generally is obtained frozen from commercial renderers. Previous studies have shown that the rendered meat is frequently contaminated with enteric bacteria, including Salmonella spp., and that during thawing the rendered meat is exposed to filth flies common in dog kennels. Nursing greyhound pups tend to experience a high morbidity and mortality from intestinal infections, and we attempted to determine in this study whether enterics could be spread to pups through contaminated flies. At intervals during 1995 and 1996, flies were trapped or were net-collected from 10 dog breeding kennels in the region around Abilene, KS. Trapped flies were identified and counted to determine population numbers, and netted flies were cultured in tetrathionate broth and streaked to medium selecting for Salmonella sp. and other lactose-negative Gram (-) bacteria. The relative numbers of different fly species varied with the sampling method, but traps and sweep nets produced similar proportions of the different fly species. Blow flies were twice as likely to be contaminated with enteric bacteria as any other fly. The most common enteric bacteria found were Proteus spp., followed by Providencia spp., Pseudomonas spp., and Salmonella spp. The incidence of Salmonella and Proteus spp. seemed to correlate more with accessibility of flies to dog excrement than to rendered meat. The apparent high incidence of enteric contamination of filth flies clearly implicates them as vectors of enteric diseases in kennels.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Evaluation of a hyperspectral image database for demosaicking purposes
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Süsstrunk, Sabine
2011-01-01
We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluate the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluate the performance of demosaicking algorithms applied on the final sRGB color rendered image. The second scenario is the most frequently used one in the literature because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, depends significantly on whether the mosaicking/demosaicking is applied to camera raw values or to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
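A small sketch of the kind of evaluation loop described above, assuming an RGGB Bayer pattern, naive bilinear demosaicking via normalized convolution, and CPSNR as the metric; the study's actual CFA designs and algorithms are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer CFA from a full-colour image of shape (H, W, 3)."""
    mask = np.zeros(rgb.shape, dtype=bool)
    mask[0::2, 0::2, 0] = True   # R
    mask[0::2, 1::2, 1] = True   # G
    mask[1::2, 0::2, 1] = True   # G
    mask[1::2, 1::2, 2] = True   # B
    return np.where(mask, rgb, 0.0), mask

def bilinear_demosaic(cfa, mask):
    """Naive bilinear interpolation per channel (normalized convolution)."""
    k = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
    out = np.empty_like(cfa)
    for c in range(3):
        num = convolve(cfa[..., c], k, mode="mirror")
        den = convolve(mask[..., c].astype(float), k, mode="mirror")
        out[..., c] = np.where(mask[..., c], cfa[..., c], num / np.maximum(den, 1e-8))
    return out

def cpsnr(ref, test, peak=1.0):
    """Colour PSNR computed over all three channels."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```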
SEPARATION OF URANIUM HEXAFLUORIDE FROM ORGANIC FLUORO COMPOUNDS
Libby, W.F.
1958-10-01
A method is presented for removing perfluoroorganic compounds such as C7F16 from UF6. The physical and chemical properties of the perfluoro compounds are such as to render their removal from UF6 difficult by conventional techniques. The mixture containing UF6 and the perfluoro compounds is pyrolyzed in an inert container at high temperature and pressure. The properties of the products obtained by pyrolysis differ from the properties of UF6 to a sufficient degree to render their separation possible by ordinary methods.
A Study on AR 3D Objects Shading Method Using Electronic Compass Sensor
NASA Astrophysics Data System (ADS)
Jung, Sungmo; Kim, Seoksoo
More effective communications can be offered to users by applying NPR (Non-Photorealistic Rendering) methods to 3D graphics. Thus, there has been much research on how to apply NPR to mobile contents. However, previous studies only propose cartoon rendering as a pre-treatment, with no consideration of the direction of light in the surrounding environment. In this study, therefore, an ECS (Electronic Compass Sensor) is applied to AR 3D object shading in order to define the direction of light for different time slots, for assimilation with the surrounding environment.
Estimation of ovular fiber production in cotton
Van't Hof, J.
1998-09-01
The present invention is a method for rendering cotton fiber cells that are post-anthesis and pre-harvest available for analysis of their physical properties. The method includes the steps of hydrolyzing cotton fiber cells and separating cotton fiber cells from cotton ovules thereby rendering the cells available for analysis. The analysis of the fiber cells is through any suitable means, e.g., visual inspection. Visual inspection of the cells can be accomplished by placing the cells under an instrument for detection, such as microscope or other means. 4 figs.
Physics-Based Stimulation for Night Vision Goggle Simulation
2006-11-01
a CRT display system can produce a darker black level than displays based on digital light processing (DLP) or liquid crystal technologies. It should... The general form of the bucket equation for any gun (color) is as follows (Equation 3): p_n = f_n (r_n - MnR_n) / (MxR_n - MnR_n). ...simulate rendering approach, we began by testing the bucket rendering approach already utilized by SensorHost, which uses the same bucket equation (Equation 10).
Multilayer coatings on glass for painting protection and optimized color rendering
NASA Astrophysics Data System (ADS)
Piegari, Angela; Polato, Pietro
2002-06-01
Optical coatings offer a solution to the problem of damage to paintings, caused by ultraviolet and infrared radiation, by cutting radiation wavelengths outside the visible range. Simultaneously, these coatings can enhance an observer's viewing of the paintings by reducing the reflections from ordinary glass panes. All these functions should be performed by the same coating. The design of such a coating, as well as the evaluation of existing products, requires the definition of an appropriate merit function in which coating absorption, high transparency, and color rendering are combined.
Rendering Protein-Based Particles Transiently Insoluble for Therapeutic Applications
Xu, Jing; Wang, Jin; Luft, J. Christopher; Tian, Shaomin; Owens, Gary; Pandya, Ashish A.; Berglund, Peter; Pohlhaus, Patrick; Maynor, Benjamin W.; Napier, Mary E.; DeSimone, Joseph M.
2012-01-01
Herein we report the fabrication of protein (bovine serum albumin, BSA) particles which were rendered transiently insoluble using a novel, reductively labile disulfide-based cross-linker. After being cross-linked, the protein particles retain their integrity in aqueous solution and dissolve preferentially under a reducing environment. Our data demonstrates that cleavage of the cross-linker leaves no chemical residue on the reactive amino group. Delivery of a self-replicating RNA was achieved via the transiently insoluble PRINT protein particles. These protein particles can provide new opportunities for drug and gene delivery. PMID:22568387
1984-08-10
whites do much longer persist in travelling through that part of their territory (Arkansas River area), and thereby render it in a great measure almost...employees among 2,200 miners, but the latter certainly predominated. Another 1,931 laborers rendered personal services as did 357 domestic...such sources (Dobyns and Euler 1970; Heizer and Kroeber 1976; Shipek 1968, 1982). On site visitation with tribal representatives is viewed as a valid
A Three-Dimensional Virtual Simulator for Aircraft Flyover Presentation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Sullivan, Brenda M.; Sandridge, Christopher A.
2003-01-01
This paper presents a system developed at NASA Langley Research Center to render aircraft flyovers in a virtual reality environment. The present system uses monaural recordings of actual aircraft flyover noise and presents these binaurally using head tracking information. The three-dimensional audio is simultaneously rendered with a visual presentation using a head-mounted display (HMD). The final system will use flyover noise synthesized using data from various analytical and empirical modeling systems. This will permit presentation of flyover noise from candidate low-noise flight operations to subjects for psychoacoustical evaluation.
Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan
2013-09-01
Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized, accurate measurement method. To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of the free pleural effusion.
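The two regression equations reported above translate directly into a small helper; units (cm for the diameters, ml for the volume) are assumed, as the abstract does not state them explicitly.

```python
def effusion_volume_from_depth(d):
    """Estimate free pleural effusion volume from the greatest depth d,
    using V = 158.16 * d - 116.01 (r = 0.91)."""
    return 158.16 * d - 116.01

def effusion_volume_from_diameters(l, h, d):
    """Estimate volume from the product of the three effusion diameters,
    using V = 0.56 * (l * h * d) + 39.44 (r = 0.92)."""
    return 0.56 * (l * h * d) + 39.44
```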
Expressive map design: OGC SLD/SE++ extension for expressive map styles
NASA Astrophysics Data System (ADS)
Christophe, Sidonie; Duménieu, Bertrand; Masse, Antoine; Hoarau, Charlotte; Ory, Jérémie; Brédif, Mathieu; Lecordix, François; Mellado, Nicolas; Turbet, Jérémie; Loi, Hugo; Hurtut, Thomas; Vanderhaeghe, David; Vergne, Romain; Thollot, Joëlle
2018-05-01
In the context of custom map design, handling more artistic and expressive tools has been identified as a cartographic need, in order to design stylized and expressive maps. Based on previous works on style formalization, an approach for specifying the map style has been proposed and experimented with for particular use cases. A first step deals with the analysis of inspiration sources, in order to extract 'what does make the style of the source', i.e. the salient visual characteristics to be automatically reproduced (textures, spatial arrangements, linear stylization, etc.). In a second step, in order to mimic and generate those visual characteristics, existing and innovative rendering techniques have been implemented in our GIS engine, thus extending the capabilities to generate expressive renderings. Therefore, an extension of the existing cartographic pipeline has been proposed based on the following aspects: 1- extension of the symbolization specifications OGC SLD/SE in order to provide a formalism to specify and reference expressive rendering methods; 2- separation of the specification of each rendering method and its parameterization, as metadata. The main contribution has been described in (Christophe et al. 2016). In this paper, we focus firstly on the extension of the cartographic pipeline (SLD++ and metadata) and secondly on map design capabilities which have been experimented with on various topographic styles: old cartographic styles (Cassini), artistic styles (watercolor, impressionism, Japanese print), hybrid topographic styles (ortho-imagery & vector data) and finally abstract and photo-realist styles for the geovisualization of coastal areas. The genericity and interoperability of our approach are promising and have already been tested for 3D visualization.
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
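The idea of requesting only the precincts relevant to an oblique slice can be illustrated with a plane/axis-aligned-box overlap test; the coordinate conventions and the precinct_boxes structure below are hypothetical illustrations, not part of the JPIP protocol itself.

```python
import numpy as np

def slice_intersects_box(n, d, box_min, box_max):
    """True if the slice plane {x : n . x = d} intersects the axis-aligned box."""
    n = np.asarray(n, dtype=float)
    c = 0.5 * (np.asarray(box_min, float) + np.asarray(box_max, float))  # box center
    e = 0.5 * (np.asarray(box_max, float) - np.asarray(box_min, float))  # half-extents
    r = np.abs(n) @ e                    # radius of the box projected onto the normal
    return abs(n @ c - d) <= r

def precincts_for_slice(n, d, precinct_boxes):
    """Indices of precincts whose compressed data the client would request."""
    return [i for i, (bmin, bmax) in enumerate(precinct_boxes)
            if slice_intersects_box(n, d, bmin, bmax)]
```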
Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device
NASA Astrophysics Data System (ADS)
Färber, Matthias; Heller, Julika; Handels, Heinz
2007-03-01
The lumbar puncture is performed by inserting a needle into the spinal cord of the patient to inject medicaments or to extract liquor. The training of this procedure is usually done on the patient guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat and original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data consisting of CT and label data and surface models of relevant structures is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the visible human male data has been used to generate a virtual training body. Several users with different medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. Especially, the restriction of transversal needle movement together with rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.
NASA Astrophysics Data System (ADS)
Ryckaert, Jana; Correia, António; Smet, Kevin; Tessier, Mickael D.; Dupont, Dorian; Hens, Zeger; Hanselaer, Peter; Meuret, Youri
2017-09-01
Combining traditional phosphors with a broad emission spectrum and non-scattering quantum dots with a narrow emission spectrum can have multiple advantages for white LEDs. It makes it possible to reduce the amount of scattering in the wavelength conversion element, increasing the efficiency of the complete system. Furthermore, the unique possibility to tune the emission spectrum of quantum dots allows the resulting LED spectrum to be optimized in order to achieve optimal color rendering properties for the light source. However, finding the optimal quantum dot properties to achieve optimal efficacy and color rendering is a non-trivial task. Instead of simply summing up the emission spectra of the blue LED, phosphor and quantum dots, we propose a complete simulation tool that allows an accurate analysis of the final performance for a range of different quantum dot synthesis parameters. The recycling of the reflected light from the wavelength conversion element by the LED package is taken into account, as well as the re-absorption and the associated red-shift. This simulation tool is used to vary two synthesis parameters (core size and cadmium fraction) of InP/CdxZn1-xSe quantum dots. We find general trends for the ideal quantum dot that should be combined with a specific YAG:Ce broad band phosphor to obtain optimal efficiency and color rendering for a white LED with a specific pumping LED and recycling cavity, with a desired CCT of 3500K.
The Louisiana State University waste-to-energy incinerator
NASA Astrophysics Data System (ADS)
1994-10-01
This proposed action is for cost-shared construction of an incinerator/steam-generation facility at Louisiana State University under the State Energy Conservation Program (SECP). The SECP, created by the Energy Policy and Conservation Act, calls upon DOE to encourage energy conservation, renewable energy, and energy efficiency by providing Federal technical and financial assistance in developing and implementing comprehensive state energy conservation plans and projects. Currently, LSU runs a campus-wide recycling program in order to reduce the quantity of solid waste requiring disposal. This program has removed recyclable paper from the waste stream; however, a considerable quantity of other non-recyclable combustible wastes are produced on campus. Until recently, these wastes were disposed of in the Devil's Swamp landfill (also known as the East Baton Rouge Parish landfill). When this facility reached its capacity, a new landfill was opened a short distance away, and this new site is now used for disposal of the University's non-recyclable wastes. While this new landfill has enough capacity to last for at least 20 years (from 1994), the University has identified the need for a more efficient and effective manner of waste disposal than landfilling. The University also has non-renderable biological and potentially infectious waste materials from the School of Veterinary Medicine and the Student Health Center, primarily the former, whose wastes include animal carcasses and bedding materials. Renderable animal wastes from the School of Veterinary Medicine are sent to a rendering plant. Non-renderable, non-infectious animal wastes currently are disposed of in an existing on-campus incinerator near the School of Veterinary Medicine building.
Interactive Web-based Visualization of Atomic Position-time Series Data
NASA Astrophysics Data System (ADS)
Thapa, S.; Karki, B. B.
2017-12-01
Extracting and interpreting the information contained in large sets of time-varying three-dimensional positional data for the constituent atoms of simulated material is a challenging task. We have recently implemented a web-based visualization system to analyze the position-time series data extracted from local or remote hosts. It involves a pre-processing step for data reduction, which involves skipping uninteresting parts of the data uniformly (at the full atomic configuration level) or non-uniformly (at the atomic species level or individual atom level). An atomic configuration snapshot is rendered using the ball-stick representation and can be animated by rendering successive configurations. The entire atomic dynamics can be captured as trajectories by rendering the atomic positions at all time steps together as points. The trajectories can be manipulated at both the species and atomic levels so that we can focus on one or more trajectories of interest, and they can also be superimposed with the instantaneous atomic structure. The implementation was done using WebGL and Three.js for graphical rendering, HTML5 and Javascript for the GUI, and Elasticsearch and JSON for data storage and retrieval within the Grails Framework. We have applied our visualization system to simulation datasets for proton-bearing forsterite (Mg2SiO4) - an abundant mineral of Earth's upper mantle. Visualization reveals that protons (hydrogen ions) incorporated as interstitials are much more mobile than protons substituting at the host Mg and Si cation sites. Proton diffusion appears to be anisotropic, with high mobility along the x-direction and only limited, discrete jumps in the other two directions.
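The pre-processing/data-reduction step described above amounts to simple slicing of the position-time array; a minimal sketch, assuming an array layout of (time steps, atoms, 3):

```python
import numpy as np

def reduce_timesteps(positions, stride):
    """Uniform reduction: keep every `stride`-th configuration snapshot."""
    return positions[::stride]

def reduce_atoms(positions, species, keep):
    """Non-uniform reduction: keep only atoms of selected species
    (e.g. only the interstitial protons)."""
    idx = np.flatnonzero(np.isin(species, list(keep)))
    return positions[:, idx, :], idx
```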
NASA Astrophysics Data System (ADS)
Sukop, Michael C.; Cunningham, Kevin J.
2014-11-01
Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s-1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
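Converting an intrinsic permeability obtained from a Lattice Boltzmann simulation to hydraulic conductivity uses the standard relation K = k * rho * g / mu; the fluid properties below are generic values for water near 20 degrees C, not parameters from the study.

```python
def hydraulic_conductivity(k_m2, rho=998.2, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity K (m/s) from intrinsic permeability k (m^2):
    K = k * rho * g / mu, with rho in kg/m^3 and mu in Pa*s."""
    return k_m2 * rho * g / mu
```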
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
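The two proximity renderings compared above (intensity vs. tempo) can be sketched as simple distance mappings; the reference distances and roll-off values below are illustrative assumptions, not the study's calibrated settings.

```python
import numpy as np

def intensity_gain_db(distance, d_ref=0.05, rolloff_db=-6.0):
    """Closer = louder: gain (dB) falls off per doubling of distance beyond d_ref."""
    return rolloff_db * np.log2(np.maximum(distance, d_ref) / d_ref)

def click_interval_s(distance, d_max=0.5, t_min=0.08, t_max=0.6):
    """Closer = faster tempo: interval between clicks shrinks with proximity."""
    a = np.clip(distance / d_max, 0.0, 1.0)
    return t_min + a * (t_max - t_min)
```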
Mesophilic and thermophilic anaerobic co-digestion of rendering plant and slaughterhouse wastes.
Bayr, Suvi; Rantanen, Marianne; Kaparaju, Prasad; Rintala, Jukka
2012-01-01
Co-digestion of rendering and slaughterhouse wastes was studied in laboratory-scale, semi-continuously fed, continuously stirred tank reactors (CSTRs) at 35 and 55 °C. All in all, 10 different rendering plant and slaughterhouse waste fractions were characterised, showing high contents of lipids and proteins and methane potentials of 262-572 dm(3) CH(4)/kg volatile solids (VS)(added). In the mesophilic CSTR, methane yields of ca. 720 dm(3) CH(4)/kg VS(fed) were obtained with organic loading rates (OLR) of 1.0 and 1.5 kg VS/m(3) d, and a hydraulic retention time (HRT) of 50 d. For the thermophilic process, the lowest studied OLR of 1.5 kg VS/m(3) d turned out to be unstable after operation of 1.5 HRT, due to accumulating ammonia, volatile fatty acids (VFAs) and probably also long chain fatty acids (LCFAs). In conclusion, the mesophilic process was found to be more feasible for co-digestion than the thermophilic process, methane yields being higher and the process more stable in mesophilic conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.
3D image display of fetal ultrasonic images by thin shell
NASA Astrophysics Data System (ADS)
Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen
1999-05-01
Due to its convenience and non-invasive nature, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes the rendering of the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real-time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. Besides, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures depending on those detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
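As a stand-in for the mixed integer programming model, the optimization phase can be sketched as a search over discrete rendering and simulation settings subject to a frame-time budget measured in the identification phase; cost_model and quality are hypothetical callables, not functions from the paper.

```python
from itertools import product

def choose_settings(texture_sizes, canvas_sizes, sim_sizes,
                    cost_model, quality, budget_ms):
    """Pick the (texture, canvas, simulation-domain) combination with the best
    quality score whose predicted per-frame cost fits the budget."""
    best, best_q = None, float("-inf")
    for tex, canvas, sim in product(texture_sizes, canvas_sizes, sim_sizes):
        if cost_model(tex, canvas, sim) <= budget_ms:
            q = quality(tex, canvas, sim)
            if q > best_q:
                best, best_q = (tex, canvas, sim), q
    return best   # None if no combination fits the budget
```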
A spatially augmented reality sketching interface for architectural daylighting design.
Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara
2011-01-01
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE Published by the IEEE Computer Society
Efficient visibility encoding for dynamic illumination in direct volume rendering.
Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas
2012-03-01
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
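The core of the approach above is that once visibility and lighting are both projected onto a spherical harmonic (SH) basis, the shading integral reduces to a dot product of coefficient vectors. The sketch below shows that machinery for the first two SH bands using Monte Carlo projection; the visibility and light functions are made-up examples, not data from the paper.

```python
# Sketch of low-order spherical harmonic (SH) encoding of visibility, in the
# spirit of the paper: project a per-voxel visibility function and the light
# environment onto SH, then shade by the dot product of coefficient vectors.
# The visibility function used here is a made-up example.
import numpy as np

def real_sh_basis(d):
    """First 4 real SH basis functions (bands 0-1) for unit directions d (N,3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def project_to_sh(f_values, dirs):
    """Monte Carlo projection of samples f(dirs) onto SH (uniform sphere sampling)."""
    Y = real_sh_basis(dirs)
    return (4.0 * np.pi / len(dirs)) * Y.T @ f_values

rng = np.random.default_rng(0)
dirs = rng.normal(size=(20000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

visibility = (dirs[:, 2] > 0.0).astype(float)             # e.g., upper hemisphere open
light = np.clip(dirs @ np.array([0.0, 0.0, 1.0]), 0, 1)   # light from above

v_coeffs = project_to_sh(visibility, dirs)
l_coeffs = project_to_sh(light, dirs)

# Integral of visibility * light over the sphere ~ dot product of SH coefficients.
print("shading estimate:", float(v_coeffs @ l_coeffs))
```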
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model, and to change surface attributes. Our display system provides a simple-to-use user interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and the viewing direction marked on it, is displayed at all times. The scene illumination can be manipulated by interactively controlling intensity values for five light sources.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposing the outlines of letters into horizontal strips covering equal areas. The resulting subregions are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subregions so that each may be colored independently of the others. The algorithm uses special estimates of the correct partition so that the corresponding outlines are divided into horizontal strips, and a method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
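The load-balancing idea above, cutting a character into horizontal strips of roughly equal area and filling each strip independently, can be sketched as follows. The sketch works on a rasterized glyph mask rather than on outline subpaths, so it only illustrates the partitioning and parallel filling, not the interpreter itself; the glyph, worker function, and strip count are placeholders.

```python
# Sketch of the strip decomposition idea: split a rasterized glyph mask into
# horizontal strips covering (approximately) equal filled area, then let each
# worker "render" its strip independently. The real method operates on outline
# subregions rather than a bitmap; this only illustrates the load-balancing idea.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def equal_area_strips(mask, n_strips):
    """Return row ranges [(r0, r1), ...] whose filled areas are roughly equal."""
    row_area = mask.sum(axis=1).astype(float)
    cum = np.cumsum(row_area)
    targets = np.linspace(0, cum[-1], n_strips + 1)
    cuts = np.searchsorted(cum, targets[1:-1])
    bounds = [0, *cuts.tolist(), mask.shape[0]]
    return list(zip(bounds[:-1], bounds[1:]))

def fill_strip(args):
    mask, r0, r1 = args
    return (r0, r1), int(mask[r0:r1].sum())   # stand-in for scanline filling

glyph = np.zeros((64, 64), dtype=np.uint8)
glyph[8:56, 20:44] = 1                          # a crude rectangular "glyph"
strips = equal_area_strips(glyph, n_strips=4)

with ThreadPoolExecutor() as pool:
    for (r0, r1), area in pool.map(fill_strip, [(glyph, a, b) for a, b in strips]):
        print(f"rows {r0}-{r1}: filled {area} pixels")
```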
High color rendering index WLED based on YAG:Ce phosphor and CdS/ZnS core/shell quantum dots
NASA Astrophysics Data System (ADS)
Shen, Changyu; Li, Ke
2009-08-01
White LEDs that combine a blue chip with YAG:Ce phosphor suffer from a red spectral deficiency, resulting in a relatively low color rendering index (CRI). In our study, in an effort to improve the color rendering properties of YAG:Ce phosphor-based white LEDs, highly luminescent red-orange-emitting CdS/ZnS QDs were blended with YAG:Ce phosphors. Core/shell CdS/ZnS quantum dots with an emission wavelength of 618 nm were synthesized by thermal deposition using cadmium oxide and selenium as precursors in a hot lauric acid and hexadecylamine trioctylphosphine oxide hybrid. The YAG:Ce phosphor was synthesized by high-temperature solid-state reaction at 900-1200°C in a slightly reducing atmosphere for 4 hours. Blends of phosphors and QDs exhibited a prominent spectral evolution with increasing QD content. A hybrid white LED, which combines a blue LED with a blend of YAG phosphor and QDs at a weight ratio of 1.5:1, was demonstrated with an improved CRI value of 86.
Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2011-05-01
Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.
Tangible display systems: direct interfaces for computer-based studies of surface appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.; Ferwerda, James A.
2010-02-01
When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
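One building block of the tracking described above is estimating display tilt from the accelerometer's gravity reading, which then drives the rendering viewpoint. The sketch below shows a common pitch/roll computation with simple smoothing; the axis convention, sample values, and smoothing constant are assumptions for illustration, not details from the paper.

```python
# Sketch of one piece of the tracking described above: estimating display tilt
# (pitch and roll) from a 3-axis accelerometer reading of gravity. The axis
# convention and the smoothing constant are assumptions for illustration.
import math

def tilt_from_accel(ax, ay, az):
    """Pitch/roll in degrees from a gravity-dominated accelerometer sample."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def smooth(prev, new, alpha=0.2):
    """Simple exponential smoothing to reduce sensor jitter before rendering."""
    return tuple(p + alpha * (n - p) for p, n in zip(prev, new))

sample = (0.1, 0.4, 9.7)                 # m/s^2, mostly gravity along +z
estimate = tilt_from_accel(*sample)
print(smooth((0.0, 0.0), estimate))      # feed the result to the 3D rendering module
```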
Volumetric visualization algorithm development for an FPGA-based custom computing machine
NASA Astrophysics Data System (ADS)
Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim
1998-05-01
Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
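To make the rendering pipeline above concrete, the sketch below implements a bare perspective ray caster that produces a maximum intensity projection (MIP) of a synthetic volume. It deliberately omits the paper's speedup techniques (gradient precalculation, RADC, speculative ray selection) and the FPGA mapping; the camera placement and volume are placeholders.

```python
# Minimal perspective ray-casting sketch producing a maximum intensity
# projection (MIP) of a synthetic volume, with fixed-step trilinear sampling.
import numpy as np
from scipy.ndimage import map_coordinates

def perspective_mip(volume, img_size=64, n_steps=128, fov=0.8):
    d, h, w = volume.shape
    eye = np.array([w / 2, h / 2, -1.5 * d])            # camera in front of volume
    ys, xs = np.mgrid[0:img_size, 0:img_size]
    u = (xs / (img_size - 1) - 0.5) * fov
    v = (ys / (img_size - 1) - 0.5) * fov
    dirs = np.stack([u, v, np.ones_like(u)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    ts = np.linspace(0.5 * d, 3.0 * d, n_steps)          # sample depths along rays
    pts = eye + dirs[..., None, :] * ts[None, None, :, None]    # (H, W, S, 3)
    coords = np.stack([pts[..., 2], pts[..., 1], pts[..., 0]])  # to (z, y, x) order
    samples = map_coordinates(volume, coords.reshape(3, -1), order=1, cval=0.0)
    return samples.reshape(img_size, img_size, n_steps).max(axis=-1)

vol = np.zeros((32, 32, 32), dtype=float)
vol[10:22, 10:22, 10:22] = 1.0                           # a bright cube
print(perspective_mip(vol).max())
```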
Evaluating Approaches to Rendering Braille Text on a High-Density Pin Display.
Morash, Valerie S; Russomanno, Alexander; Gillespie, R Brent; OModhrain, Sile
2017-10-13
Refreshable displays for tactile graphics are typically composed of pins that have smaller diameters and spacing than standard braille dots. We investigated configurations of high-density pins to form braille text on such displays using non-refreshable stimuli produced with a 3D printer. Normal dot braille (diameter 1.5 mm) was compared to high-density dot braille (diameter 0.75 mm), wherein each normal dot was rendered by high-density simulated pins alone or in a cluster of pins configured in a diamond, X, or square; and to "blobs" that could result from covering normal braille and high-density multi-pin configurations with a thin membrane. Twelve blind participants read MNREAD sentences displayed in these conditions. For high-density simulated pins, single pins were read as quickly and easily as normal braille, but diamond, X, and square multi-pin configurations were slower and/or harder to read than normal braille. We therefore conclude that as long as center-to-center dot spacing and dot placement are maintained, the dot diameter may be open to variability for rendering braille on a high-density tactile display.
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment in which subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the data using multidimensional scaling of the pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be strongly correlated with it. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
Optimization Model for Web Based Multimodal Interactive Simulations
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-01-01
This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimizing simulation performance on each individual hardware platform is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is used in conjunction with user-specified design requirements in the optimization phase to ensure the best possible allocation of computational resources. The optimal solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
MovieMaker: a web server for rapid rendering of protein motions and interactions
Maiti, Rajarshi; Van Domselaar, Gary H.; Wishart, David S.
2005-01-01
MovieMaker is a web server that allows short (∼10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at . PMID:15980488
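The morphing animation described above relies on superposing two end-state conformers and then interpolating Cartesian coordinates. The sketch below shows that step with the standard Kabsch algorithm and linear interpolation; the random coordinates stand in for real PDB atom positions, and the server's actual superpositioning algorithm may differ.

```python
# Sketch of the interpolation step described above: superpose two conformers
# with the Kabsch algorithm, then generate intermediate structures by linear
# interpolation of Cartesian coordinates. Coordinates here are random stand-ins
# for real PDB atom positions.
import numpy as np

def kabsch_superpose(P, Q):
    """Rotate/translate P (N,3) onto Q (N,3); returns the transformed P."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T + Q.mean(axis=0)

def morph_frames(P, Q, n_frames=10):
    """Intermediate coordinate sets between superposed end-state conformers."""
    P_aligned = kabsch_superpose(P, Q)
    return [(1 - t) * P_aligned + t * Q for t in np.linspace(0.0, 1.0, n_frames)]

rng = np.random.default_rng(1)
conf_a = rng.normal(size=(100, 3))
conf_b = conf_a + 0.3 * rng.normal(size=(100, 3))    # a slightly deformed copy
frames = morph_frames(conf_a, conf_b, n_frames=5)
print(len(frames), frames[0].shape)
```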
New automatic mode of visualizing the colon via Cine CT
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.
2001-05-01
Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems that need satisfactory solutions. Among these, we address three problems in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time, and subsequently en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.
Rowe, Steven P; Zinreich, S James; Fishman, Elliot K
2018-06-01
Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhu, Qing
2017-07-01
To achieve the simulation of elaborate stroke trajectories in Chinese calligraphy, this paper presents, for the first time, research on writing momentum in the field of non-photorealistic rendering. Based on an analysis of brush use in Chinese calligraphy, writing momentum is divided into three types, corresponding to the center, the side, and the back of the writing brush, determined from the angle of the brush holder. We design an algorithm for dynamically rendering strokes based on a brush model. From monitored parameters such as the direction, position, and normalized pressure of the pen, we calculate the footprint direction, shape, and size, as well as the bending of the nib during writing. The algorithm can also judge the dynamic trend of stroke trajectories and even generate forecast strokes automatically. We achieve a more delicate rendering of Chinese calligraphy that enhances the user's results and reproduces the distinctive writing effects that separate calligraphy from ordinary handwriting, greatly improving the calligraphy simulation, so that people who lack writing skills can easily produce attractive characters.
Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.
Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong
2006-04-01
This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring that involves both real and rendered bronchoscope images is conducted.
Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd
2018-04-01
Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.
Standardized volume-rendering of contrast-enhanced renal magnetic resonance angiography.
Smedby, O; Oberg, R; Asberg, B; Stenström, H; Eriksson, P
2005-08-01
To propose a technique for standardizing volume-rendering technique (VRT) protocols and to compare this with maximum intensity projection (MIP) in regard to image quality and diagnostic confidence in stenosis diagnosis with magnetic resonance angiography (MRA). Twenty patients were examined with MRA under suspicion of renal artery stenosis. Using the histogram function in the volume-rendering software, the 95th and 99th percentiles of the 3D data set were identified and used to define the VRT transfer function. Two radiologists assessed the stenosis pathology and image quality from rotational sequences of MIP and VRT images. Good overall agreement (mean kappa=0.72) was found between MIP and VRT diagnoses. The agreement between MIP and VRT was considerably better than that between observers (mean kappa=0.43). One of the observers judged VRT images as having higher image quality than MIP images. Presenting renal MRA images with VRT gave results in good agreement with MIP. With VRT protocols defined from the histogram of the image, the lack of an absolute gray scale in MRI need not be a major problem.
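The standardization above anchors the transfer function to the 95th and 99th percentiles of the MRA histogram. The sketch below derives those anchors with NumPy and builds a linear opacity ramp between them; the ramp shape and the synthetic data are assumptions, since the abstract only specifies the two percentiles.

```python
# Sketch of the standardization described above: derive the volume-rendering
# transfer function from the 95th and 99th percentiles of the MRA histogram,
# so that opacity ramps between those two data-driven gray levels.
import numpy as np

def percentile_transfer_function(volume, p_low=95.0, p_high=99.0):
    lo, hi = np.percentile(volume, [p_low, p_high])
    def opacity(values):
        return np.clip((values - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return lo, hi, opacity

rng = np.random.default_rng(2)
mra = rng.gamma(shape=2.0, scale=100.0, size=(64, 64, 64))   # stand-in MRA data
lo, hi, opacity = percentile_transfer_function(mra)
print(f"ramp from {lo:.0f} to {hi:.0f}; sample opacities:", opacity(np.array([lo, hi])))
```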
[Rendering surgical care to wounded with neck wounds in an armed conflict].
Samokhvalov, I M; Zavrazhnov, A A; Fakhrutdinov, A M; Sychev, M I
2001-10-01
The results of rendering medical care (first aid, qualified care, and specialized care) to 172 servicemen with neck injuries in the Republic of Chechnya during the period from 09.08.1999 to 28.07.2000 were analyzed. Based on the results of this analysis and on experience treating the casualties, the authors discuss the problems of the sequence and volume of surgical care in this group of casualties with reference to the available medical evacuation system, as well as surgical tactics at the stage of specialized care. They also consider the peculiarities of operative treatment of casualties with neck injuries.
NASA Astrophysics Data System (ADS)
Liu, Zhi; Zhou, Baotong; Zhang, Changnian
2017-03-01
Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems render only a fixed top-down perspective view with a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm can render a good 3D panorama and allows the viewpoint to be changed freely.
O'Modhrain, Sile; Giudice, Nicholas A; Gardner, John A; Legge, Gordon E
2015-01-01
This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a virtual reality 360-degree video player using standard OpenGL ES rendering methods. It provides network methods for connecting to a web server acting as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process for rendering additional content based on the video timestamp and the virtual reality head point of view.
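The resource-loading step described above, an HTTP request that returns a JSON description of available content, can be sketched with the Python standard library as follows. The endpoint URL and the response fields are hypothetical; only the mechanism follows the abstract.

```python
# Sketch of fetching a JSON resource list over HTTP and parsing it. The URL and
# the response fields are hypothetical placeholders.
import json
import urllib.request

def fetch_resource_list(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

# Example usage (hypothetical endpoint and fields):
# resources = fetch_resource_list("https://example.org/vr/resources.json")
# for item in resources["videos"]:
#     print(item["title"], item["url_360"])
```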
Astronomy Data Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-08-01
We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples are exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.
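Because Blender's API is Python based, a catalog visualization of the kind mentioned above can be scripted directly. The sketch below places a small sphere per catalog entry and renders a still; it assumes it is run inside Blender (the bpy module exists only there) and a Blender 2.8+ API, and the toy positions and output path are placeholders, not material from the paper.

```python
# Sketch of scripting Blender through its Python API for a simple catalog
# visualization: place a sphere for each (x, y, z) entry and render a still.
# Must be run inside Blender; operator names assume the 2.8+ API.
import bpy

catalog = [(0.0, 0.0, 0.0), (1.5, 0.3, -0.4), (-0.8, 2.0, 1.1)]  # toy positions

for x, y, z in catalog:
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, location=(x, y, z))

bpy.context.scene.render.filepath = "/tmp/catalog.png"
bpy.ops.render.render(write_still=True)
```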
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Method for making glass nonfogging
Lord, David E.; Carter, Gary W.; Petrini, Richard R.
1979-01-01
A method for rendering glass nonfogging (resistant to condensation fog): the glass is sandwiched between two electrodes such that it functions as the dielectric of a capacitor, a large alternating current (AC) voltage is applied across the electrodes for a selected time period, causing the glass to absorb a charge, and the electrodes are then removed. The glass absorbs a charge from the electrodes, rendering it nonfogging. The glass surface is undamaged by application of the AC voltage, and normal optical properties are unaffected. This method can be applied to optical surfaces such as lenses, auto windshields, and mirrors, wherever condensation fog on glass is a problem.
Achieving high CRI from warm to super white
NASA Astrophysics Data System (ADS)
Bailey, Edward; Tormey, Ellen S.
2007-09-01
Light sources that produce a high color rendering index (CRI) have many applications in the lighting industry today. High color rendering accents the rich color that abounds in nature, interior design, theatrical costumes and props, clothing and fabric, jewelry, and machine vision applications. Multi-wavelength LED sources can pump phosphors at multiple Stokes-shift emission regimes, and when combined with selected direct-emission sources they allow greater flexibility in the production of warm-white and cool-white light of specialty interest. Unique solutions achieving R8 and R14 CRI >95 at 2850 K, 4750 K, 5250 K, and 6750 K are presented.
Tile-based Level of Detail for the Parallel Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niski, K; Cohen, J D
Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.
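The screen-space tiling idea above can be sketched by splitting the framebuffer into tiles and handing each tile to a worker, with the image assembled afterwards. The per-tile "render" below is a trivial gradient, and the adaptive tile sizing, level-of-detail management, and CPU/GPU/PC hierarchy of the paper are omitted.

```python
# Sketch of screen-space tile parallelism: partition the framebuffer into tiles,
# render tiles in parallel workers, then assemble the final image.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 256, 256, 64

def render_tile(tile_rect):
    x0, y0, x1, y1 = tile_rect
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return tile_rect, (xs + ys) % 256          # stand-in shading work

tiles = [(x, y, min(x + TILE, WIDTH), min(y + TILE, HEIGHT))
         for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]

frame = np.zeros((HEIGHT, WIDTH), dtype=np.int64)
with ThreadPoolExecutor() as pool:
    for (x0, y0, x1, y1), pixels in pool.map(render_tile, tiles):
        frame[y0:y1, x0:x1] = pixels            # image assembly step

print(frame.shape, frame.max())
```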
Warm white LEDs lighting over Ra=95 and its applications
NASA Astrophysics Data System (ADS)
Kobashi, Katsuya; Taguchi, Tsunemasa
2007-02-01
We have for the first time developed warm white LED lighting using a combination of a near-ultraviolet LED and three-band (red, green, and blue) white phosphors. This LED has an average color-rendering index of Ra = 96. Moreover, the special color-rendering indices R9 (red) and R15 (Japanese skin tone) are estimated to be 95 and 97, respectively. We describe the results of evaluations for medical lighting applications such as surgery, treatment, and endoscopy experiments, as well as applications to LED fashion and to the lighting of Japanese antique art (ink painting).
Light transport on path-space manifolds
NASA Astrophysics Data System (ADS)
Jakob, Wenzel Alban
The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.
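To give a flavor of the "walk on the specular manifold via an equation-solving iteration" idea, the sketch below solves a 1-D toy analogue: a specular reflection point on a curved mirror satisfies a stationarity constraint (Fermat's principle), which is located with a Newton-style iteration. This is only an illustrative analogue under assumed geometry, not the dissertation's path-space algorithm or MCMC method.

```python
# 1-D toy analogue of locating a specular point by an equation-solving iteration:
# the reflection point makes the source-mirror-receiver path length stationary.
import numpy as np

A = np.array([-1.0, 1.0])        # light source
B = np.array([1.2, 0.8])         # receiver
mirror = lambda x: 0.1 * x * x   # mirror height field y = f(x)

def path_length(x):
    P = np.array([x, mirror(x)])
    return np.linalg.norm(P - A) + np.linalg.norm(B - P)

def find_specular_point(x0=0.0, h=1e-5, iters=20):
    x = x0
    for _ in range(iters):
        d1 = (path_length(x + h) - path_length(x - h)) / (2 * h)              # dL/dx
        d2 = (path_length(x + h) - 2 * path_length(x) + path_length(x - h)) / h**2
        x -= d1 / d2               # Newton step on the stationarity condition
    return x

x_star = find_specular_point()
print("specular point:", x_star, mirror(x_star))
```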
Distributed volume rendering and stereoscopic display for radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Hancock, David J.
The thesis describes attempts to use direct volume rendering techniques to produce visualisations useful in the preparation of radiotherapy treatment plans. The selected algorithms allow the generation of data-rich images which can be used to assist the radiologist in comprehending complicated three-dimensional phenomena. The treatment plans are formulated using a three dimensional model which combines patient data acquired from CT scanning and the results of a simulation of the radiation delivery. Multiple intersecting beams with shaped profiles are used and the region of intersection is designed to closely match the position and shape of the targeted tumour region. The proposed treatment must be evaluated as to how well the target region is enveloped by the high dose occurring where the beams intersect, and also as to whether the treatment is likely to expose non-tumour regions to unacceptably high levels of radiation. Conventionally the plans are reviewed by examining CT images overlaid with contours indicating dose levels. Volume visualisation offers a possible saving in time by presenting the data in three dimensional form thereby removing the need to examine a set of slices. The most difficult aspect is to depict unambiguously the relationships between the different data. For example, if a particular beam configuration results in unintended irradiation of a sensitive organ, then it is essential to ensure that this is clearly displayed, and that the 3D relationships between the beams and other data can be readily perceived in order to decide how to correct the problem. The user interface has been designed to present a unified view of the different techniques available for identifying features of interest within the data. The system differs from those previously reported in that complex visualisations can be constructed incrementally, and several different combinations of features can be viewed simultaneously. To maximise the quantity of relevant data presented in a single view, large regions of the data are rendered very transparently. This is done to ensure that interesting features buried deep within the data are visible from any viewpoint. Rendering images with high degrees of transparency raises a number of problems, primarily the drop in quality of depth cues in the image, but also the increase in computational requirements over surface-based visualisations. One solution to the increase in image generation times is the use of parallel architectures, which are an attractive platform for large visualisation tasks such as this. A parallel implementation of the direct volume rendering algorithm is described and its performance is evaluated. Several issues must be addressed in implementing an interactive rendering system in a distributed computing environment: principally overcoming the latency and limited bandwidth of the typical network connection. This thesis reports a pipelining strategy developed to improve the level of interactivity in such situations. Stereoscopic image presentation offers a method to offset the reduction in clarity of the depth information in the transparent images. The results of an investigation into the effectiveness of stereoscopic display as an aid to perception in highly transparent images are presented. Subjects were shown scenes of a synthetic test data set in which conventional depth cues were very limited. 
The experiments were designed to discover what effect stereoscopic viewing of the transparent, volume rendered images had on user's depth perception.
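The pipelining strategy mentioned in the thesis abstract above can be illustrated by overlapping data loading with rendering, so neither stage waits idle for the other. The sketch below uses a producer thread and a bounded queue; the sleeps stand in for I/O and rendering work, and none of the names come from the thesis itself.

```python
# Sketch of a load/render pipeline: a producer thread "loads" volumes while the
# consumer renders the previous one, so I/O and rendering overlap.
import queue
import threading
import time

def load_volumes(n_volumes, out_q):
    for i in range(n_volumes):
        time.sleep(0.05)                 # stands in for disk / network I/O
        out_q.put(f"volume-{i}")
    out_q.put(None)                      # sentinel: no more data

def render(volume_name):
    time.sleep(0.05)                     # stands in for the rendering work
    print("rendered", volume_name)

q = queue.Queue(maxsize=2)               # small buffer keeps both stages busy
producer = threading.Thread(target=load_volumes, args=(5, q))
producer.start()

while (item := q.get()) is not None:
    render(item)
producer.join()
```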
Autostereoscopic image creation by hyperview matrix controlled single pixel rendering
NASA Astrophysics Data System (ADS)
Grasnick, Armin
2017-06-01
Just as awareness of stereoscopic cinema has increased, so has the perception of its limitations when watching movies with 3D glasses. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments for avoiding 3D glasses. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with the naked eye, artificial 3D viewing with glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have attributed to insufficient image quality. Obviously, quality problems with 3D glasses can be mitigated by technical improvement, but this simple answer can mislead (and already has misled) some decision makers into relying on existing 3D glasses solutions. It needs to be underlined that there are inherent difficulties with the glasses that can never be solved by modest advancement, because the 3D glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed that create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate if it were possible to create an affordable device and to generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the ideas of light fields, integral photography, or super-multiview can be unified within the concept of hyperview. It is essential for their functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a very high number of views requires much more computing time than forming a simple stereoscopic image pair. The hyperview concept allows the screen image of any 3D technology to be described with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not convey any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived even if all source images are different. It will be shown that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display, which is designed to represent only a few perspectives, can be used to show a hyperview image by using a suitable hyperview matrix. It will be shown that a hyperview image composed of millions of views can be presented on a conventional autostereoscopic display. For such a hyperview image, all pixels of the display must be allocated from different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
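The central data structure above, a matrix that tells each display pixel which source view it should come from, can be sketched as follows. The slanted assignment pattern is illustrative only, not a calibrated matrix for any particular display, and the toy source views are constant gray levels.

```python
# Sketch of a view-allocation ("hyperview") matrix: each display pixel is
# assigned an index into a set of source views, and the screen image is
# assembled by picking every pixel from its assigned view.
import numpy as np

H, W, N_VIEWS = 120, 160, 8

# Render N_VIEWS toy source views (each a different constant gray level).
views = np.stack([np.full((H, W), v * (255 // (N_VIEWS - 1)), dtype=np.uint8)
                  for v in range(N_VIEWS)])

# View-allocation matrix: which source view feeds each display pixel.
cols, rows = np.meshgrid(np.arange(W), np.arange(H))
hyperview_matrix = (3 * cols + rows) % N_VIEWS          # slanted interleaving

# Assemble the screen image pixel by pixel from the allocated views.
screen = views[hyperview_matrix, rows, cols]
print(screen.shape, np.unique(screen))
```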
Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe
2016-01-15
Hemp-based composites are eco-friendly building materials, as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to investigate the response at macro- and micro-scale of hemp-lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical, and Semi-arid climates, also in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume, and colour changes in the hemp-lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of the samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of bacterial activity on the hardening of the samples is also discussed here and related to the formation and stabilisation of vaterite in hemp-lime mixes. This study has demonstrated that hemp-lime renders show good durability under a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring longer durability on site. Copyright © 2015 Elsevier B.V. All rights reserved.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine sensing, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing, and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods that can deal with large-scale, multi-dimensional marine data under different environmental circumstances is proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping, and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized using 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm, and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established using the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progression of oil spills in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.
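The FFT-based seawater synthesis mentioned above can be illustrated in simplified form: filter white noise in the Fourier domain with a falling power spectrum and transform back to get a plausible heightfield. This is only a stand-in under assumed parameters (no Phillips spectrum, wind direction, or time animation), not the production algorithm.

```python
# Simplified spectral (FFT-based) ocean heightfield: shape white noise with a
# power-law spectrum in the Fourier domain and invert the transform.
import numpy as np

def ocean_heightfield(n=256, spectrum_falloff=2.5, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    k[0, 0] = 1.0                                   # avoid division by zero at DC
    amplitude = 1.0 / k ** spectrum_falloff          # energy concentrated at low k
    amplitude[0, 0] = 0.0                            # remove the mean component
    height = np.real(np.fft.ifft2(noise * amplitude))
    return height / np.abs(height).max()             # normalize for display

h = ocean_heightfield()
print(h.shape, h.min(), h.max())
```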
Fast DRR generation for 2D to 3D registration on GPUs.
Tornai, Gábor János; Cserey, György; Pappas, Ion
2012-08-01
The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or depending on the application and hardware in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
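For readers unfamiliar with DRRs, the sketch below shows the principle for one pose on the CPU: resample the CT volume at the desired rotation and integrate attenuation along parallel rays (a sum along one axis). The paper's GPU renderer uses perspective ray casting and sampling masks, so this is only an illustration of the underlying line-integral idea with a synthetic volume.

```python
# Sketch of DRR generation for one pose: rotate the CT volume, then integrate
# along parallel rays to obtain a simulated radiograph.
import numpy as np
from scipy.ndimage import rotate

def simple_drr(ct_volume, angle_deg):
    # Resample at the desired rotation, then integrate along the ray axis.
    rotated = rotate(ct_volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    drr = rotated.sum(axis=1)                       # line integrals -> 2D image
    return drr / drr.max()

ct = np.random.default_rng(3).random((72, 128, 128)).astype(np.float32)
image = simple_drr(ct, angle_deg=30.0)
print(image.shape)
```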
Rendering potential wearable robot designs with the LOPES gait trainer.
Koopman, B; van Asseldonk, E H F; van der Kooij, H; van Dijk, W; Ronsse, R
2011-01-01
In recent years, wearable robots (WRs) for rehabilitation, personal assistance, or human augmentation are gaining increasing interest. To make these devices more energy-efficient, radical changes to the mechanical structure of the device are being considered. However, it remains very difficult to predict how people will respond to, and interact with, WRs that differ in terms of mechanical design. Users may adjust their gait pattern in response to the mechanical restrictions or properties of the device. The goal of this pilot study is to show the feasibility of rendering the mechanical properties of different potential WR designs using the robotic gait training device LOPES. This paper describes a new method that selectively cancels the dynamics of LOPES itself and adds the dynamics of the rendered WR using two parallel inverse models. Adaptive frequency oscillators were used to obtain estimates of the joint position, velocity, and acceleration. Using the inverse models, different WR designs can be evaluated, eliminating the need to build several prototypes. As a proof of principle, we simulated the effect of a very simple WR that consisted of a mass attached to the ankles. Preliminary results show that we are partially able to cancel the dynamics of LOPES. Additionally, the simulation of the mass showed an increase in muscle activity, but not at the same level as during the control condition, in which subjects actually carried the mass. In conclusion, the results in this paper suggest that LOPES can be used to render different WRs. In addition, it is very likely that the results can be further optimized when more effort is put into retrieving proper estimates of the velocity and acceleration, which are required for the inverse models. © 2011 IEEE
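The adaptive frequency oscillators mentioned above lock onto the frequency and phase of a periodic input such as a joint angle, providing the estimates that feed the inverse models. The sketch below uses one common adaptive-frequency (Hopf-type) formulation; it is not necessarily the exact estimator used with LOPES, and the gains and input signal are illustrative.

```python
# Sketch of an adaptive frequency (Hopf-type) oscillator: its frequency state
# adapts to a periodic input, yielding phase/frequency estimates.
import numpy as np

def adaptive_frequency_oscillator(signal, dt, coupling=2.0, gamma=8.0, mu=1.0):
    x, y, omega = 1.0, 0.0, 2.0 * np.pi * 0.5       # initial guess: 0.5 Hz
    omegas = []
    for f in signal:
        r2 = x * x + y * y
        r = np.sqrt(r2) + 1e-9
        dx = gamma * (mu - r2) * x - omega * y + coupling * f
        dy = gamma * (mu - r2) * y + omega * x
        domega = -coupling * f * y / r
        x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega
        omegas.append(omega)
    return np.array(omegas)

dt = 0.001
t = np.arange(0.0, 30.0, dt)
gait = np.sin(2.0 * np.pi * 1.1 * t)                # 1.1 Hz "joint angle" input
omega_est = adaptive_frequency_oscillator(gait, dt)
print("estimated frequency [Hz]:", omega_est[-1] / (2.0 * np.pi))
```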
Anisotropic scene geometry resampling with occlusion filling for 3DTV applications
NASA Astrophysics Data System (ADS)
Kim, Jangheon; Sikora, Thomas
2006-02-01
Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring, which stem from their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be obtained with a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method, since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry presents two difficulties: limited geometric density and uncovered areas containing hidden information. These are serious drawbacks when reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing of the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion in the pre-filtered space, and nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts caused by low density are efficiently removed by isotropic filtering, and edge blurring is addressed by the anisotropic method in the same process. Because sampling gaps differ in size, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) in Gaussian scale space, we achieve coarse-to-fine resampling iteratively. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
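The basic PDE building block behind the gradient-weighted smoothing described above is edge-preserving diffusion, where conduction is suppressed across strong gradients. The sketch below is a compact Perona-Malik-type iteration on a noisy test image; the paper's tensor-based, scale-space resampling of scene geometry is more elaborate, so this only illustrates the underlying mechanism.

```python
# Compact edge-preserving (Perona-Malik type) diffusion: smoothing is suppressed
# where local gradients are strong, preserving edges while removing noise.
import numpy as np

def edge_preserving_diffusion(img, n_iter=50, kappa=0.1, step=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients towards the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conduction coefficients: small where the local gradient is large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.zeros((64, 64))
noisy[:, 32:] = 1.0
noisy += 0.2 * np.random.default_rng(4).normal(size=noisy.shape)
print(edge_preserving_diffusion(noisy).std())
```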
GenExp: an interactive web-based genomic DAS client with client-side data rendering.
Gel Moreno, Bernat; Messeguer Peypoch, Xavier
2011-01-01
The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available, and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a useful representation for the user. While web-based DAS clients exist, most of them do not have direct interaction capabilities such as dragging and zooming with the mouse. Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data, zooming out from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not need a network request to the server, and so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just as in geographical map applications. Additionally, GenExp allows more than one data viewer at the same time and can save the current state of the application to revisit it later on. GenExp is a new interactive web-based client for DAS and addresses some of the shortcomings of the existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and is freely available at http://gralggen.lsi.upc.edu/recerca/genexp.
Raphael, David T; McIntee, Diane; Tsuruda, Jay S; Colletti, Patrick; Tatevossian, Ray
2005-12-01
Magnetic resonance neurography (MRN) is an imaging method by which nerves can be selectively highlighted. Using commercial software, the authors explored a variety of approaches to develop a three-dimensional volume-rendered MRN image of the entire brachial plexus and used it to evaluate the accuracy of infraclavicular block approaches. With institutional review board approval, MRN of the brachial plexus was performed in 10 volunteer subjects. MRN imaging was performed on a GE 1.5-tesla magnetic resonance scanner (General Electric Healthcare Technologies, Waukesha, WI) using a phased array torso coil. Coronal STIR and T1 oblique sagittal sequences of the brachial plexus were obtained. Multiple software programs were explored for enhanced display and manipulation of the composite magnetic resonance images. The authors developed a frontal slab composite approach that allows single-frame reconstruction of a three-dimensional volume-rendered image of the entire brachial plexus. Automatic segmentation was supplemented by manual segmentation in nearly all cases. For each of three infraclavicular approaches (posteriorly directed needle below the midclavicle, infracoracoid, or caudomedial to the coracoid), the targeting error was measured as the distance from the MRN plexus midpoint to the approach-targeted site. Composite frontal slabs (coronal views), which are single-frame three-dimensional volume renderings from image-enhanced two-dimensional frontal view projections of the underlying coronal slices, were created. The targeting errors (mean +/- SD) for the three approaches (midclavicle, infracoracoid, and caudomedial to coracoid) were 0.43 +/- 0.67, 0.99 +/- 1.22, and 0.65 +/- 1.14 cm, respectively. Image-processed three-dimensional volume-rendered MRN scans, which allow visualization of the entire brachial plexus within a single composite image, have educational value in illustrating the complexity and individual variation of the plexus. Suggestions for improved guidance during infraclavicular block procedures are presented.
Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju
2017-01-01
Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, the volume rendering technique is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique based on three-dimensional gray-level mapping (GM) to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal cases from 30 patients were evaluated in this clinical study. These cases were segmented using a gray-level threshold. The marching cube algorithm was used to detect isosurfaces in volumetric data sets. GM is applied using the surface gray level of CTC. Radiologists conducted the clinical evaluation of the SR and GM images. The Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in the diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM increases identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful in identification and differentiation of endoluminal lesions according to their shape and density. PMID:27925483
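The gray-level mapping idea above, extracting an isosurface and then coloring its vertices by the underlying CT values rather than by shape alone, can be sketched as follows. The synthetic volume and iso-level are placeholders for CTC data, and the surface extraction uses the marching cubes implementation from scikit-image rather than the authors' own code.

```python
# Sketch of gray-level mapping (GM): extract an isosurface with marching cubes,
# then sample the underlying gray level at every surface vertex so the mesh can
# be colored by density rather than shape alone.
import numpy as np
from scipy.ndimage import map_coordinates
from skimage import measure

rng = np.random.default_rng(5)
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2) * 4.0) + 0.02 * rng.normal(size=x.shape)

# Surface extraction (as in SR)...
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)

# ...plus gray-level mapping: sample the volume at the vertex positions.
gray_at_vertices = map_coordinates(volume, verts.T, order=1)
print(verts.shape, faces.shape, gray_at_vertices.min(), gray_at_vertices.max())
```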
Rifaximin diminishes neutropenia following potentially lethal whole-body radiation.
Jahraus, Christopher D; Schemera, Bettina; Rynders, Patricia; Ramos, Melissa; Powell, Charles; Faircloth, John; Brawner, William R
2010-07-01
Terrorist attacks involving radiological or nuclear weapons are a substantial geopolitical concern, given that large populations could be exposed to potentially lethal doses of radiation. Because of this, evaluating potential countermeasures against radiation-induced mortality is critical. Gut microflora are the most common source of systemic infection following exposure to lethal doses of whole-body radiation, suggesting that prophylactic antibiotic therapy may reduce mortality after radiation exposure. The chemical stability, easy administration and favorable tolerability profile of the non-systemic antibiotic, rifaximin, make it an ideal potential candidate for use as a countermeasure. This study evaluated the use of rifaximin as a countermeasure against low-to-intermediate-dose whole-body radiation in rodents. Female Wistar rats (8 weeks old) were irradiated with 550 cGy to the whole body and were evaluated for 30 d. Animals received methylcellulose, neomycin (179 mg/kg/d) or variably dosed rifaximin (150-2000 mg/kg/d) one hour after irradiation and daily throughout the study period. Clinical assessments (e.g. body weight) were made daily. On postirradiation day 30, blood samples were collected and a complete blood cell count was performed. Animals receiving high doses of rifaximin (i.e. 1000 or 2000 mg/kg/d) had a greater increase in weight from the day of irradiation to postirradiation day 30 compared with animals that received placebo or neomycin. For animals with an increase in average body weight from irradiation day within 80-110% of the group average, methylcellulose rendered an absolute neutrophil count (ANC) of 211, neomycin rendered an ANC of 334, rifaximin 300 mg/kg/d rendered an ANC of 582 and rifaximin 1000 mg/kg/d rendered an ANC of 854 (P = 0.05 for group comparison). Exposure to rifaximin after near-lethal whole-body radiation resulted in diminished levels of neutropenia.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images with various tools and techniques to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows these parameters to be stored instead of the parameter-applied images, which would unnecessarily duplicate the image data. Since there is currently no corresponding extension for 3D data, this study proposes a DICOM-compliant object called 3D Presentation States (3DPR) for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases that required multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
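The core idea, storing the parameters of each pipeline stage (pre-processing, segmentation, post-processing, rendering) rather than the rendered images themselves, can be illustrated with a plain serializable record. The sketch below is only a conceptual analogue in Python, not the DICOM-compliant 3DPR object defined in the study; all field names and values are illustrative assumptions.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PresentationState3D:
    """Illustrative stand-in for a 3D presentation state: parameters only, no pixel data."""
    preprocessing: dict = field(default_factory=dict)   # e.g. resampling, denoising settings
    segmentation: dict = field(default_factory=dict)    # e.g. thresholds, reference to a compressed label map
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing, cropping
    rendering: dict = field(default_factory=dict)       # e.g. transfer function, camera pose

state = PresentationState3D(
    preprocessing={"resample_mm": [1.0, 1.0, 1.0]},
    segmentation={"threshold_hu": [200, 1200]},
    postprocessing={"smooth_iterations": 10},
    rendering={"camera": {"azimuth": 30, "elevation": 15},
               "opacity_curve": [[0, 0.0], [300, 0.8]]},
)

# Archiving only this small record lets the 3D view be regenerated on demand,
# instead of storing snapshots or videos of the rendered volume.
print(json.dumps(asdict(state), indent=2))
```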
The Louisiana State University waste-to-energy incinerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-10-26
This proposed action is for cost-shared construction of an incinerator/steam-generation facility at Louisiana State University under the State Energy Conservation Program (SECP). The SECP, created by the Energy Policy and Conservation Act, calls upon DOE to encourage energy conservation, renewable energy, and energy efficiency by providing Federal technical and financial assistance in developing and implementing comprehensive state energy conservation plans and projects. Currently, LSU runs a campus-wide recycling program in order to reduce the quantity of solid waste requiring disposal. This program has removed recyclable paper from the waste stream; however, a considerable quantity of other non-recyclable combustible wastes are produced on campus. Until recently, these wastes were disposed of in the Devil's Swamp landfill (also known as the East Baton Rouge Parish landfill). When this facility reached its capacity, a new landfill was opened a short distance away, and this new site is now used for disposal of the University's non-recyclable wastes. While this new landfill has enough capacity to last for at least 20 years (from 1994), the University has identified the need for a more efficient and effective manner of waste disposal than landfilling. The University also has non-renderable biological and potentially infectious waste materials from the School of Veterinary Medicine and the Student Health Center, primarily the former, whose wastes include animal carcasses and bedding materials. Renderable animal wastes from the School of Veterinary Medicine are sent to a rendering plant. Non-renderable, non-infectious animal wastes currently are disposed of in an existing on-campus incinerator near the School of Veterinary Medicine building.
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) data-derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
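The technical evaluation described above amounts to comparing measured edge lengths of the AR-rendered cube against the physical cube. A minimal sketch of that comparison follows; the measurements are invented for illustration and do not come from the study.

```python
# Hypothetical measurements (mm) of the three edges of the physical cube and
# of its AR-rendered counterpart (e.g. calipers vs. on-screen measurement).
physical = [50.0, 50.0, 50.0]
virtual = [49.6, 50.3, 50.1]

errors_pct = [abs(v - p) / p * 100.0 for v, p in zip(virtual, physical)]
print("per-edge error (%):", [round(e, 2) for e in errors_pct])
print("mean error (%):", round(sum(errors_pct) / len(errors_pct), 2))
```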
GenExp: An Interactive Web-Based Genomic DAS Client with Client-Side Data Rendering
Gel Moreno, Bernat; Messeguer Peypoch, Xavier
2011-01-01
Background: The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a useful representation for the user. While web-based DAS clients exist, most of them do not have direct interaction capabilities such as dragging and zooming with the mouse. Results: Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data at zoom levels from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not require a network request to the server, so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just as in geographical map applications. Additionally, GenExp makes it possible to have more than one data viewer open at the same time and to save the current state of the application to revisit it later on. Conclusions: GenExp is a new interactive web-based client for DAS that addresses some of the shortcomings of the existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and it is freely available at http://gralggen.lsi.upc.edu/recerca/genexp. PMID:21750706
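GenExp does its rendering in the browser with HTML5; the essential mechanism that makes panning and zooming immediate is that only a coordinate transform changes, not the fetched data. The sketch below models that transform in Python purely for illustration; the class and method names are made up and are not GenExp's API.

```python
class GenomeViewport:
    """Minimal model of client-side pan/zoom: only the base-pair-to-pixel
    transform changes, so small moves need no new server request."""

    def __init__(self, start_bp: int, end_bp: int, width_px: int):
        self.start_bp, self.end_bp, self.width_px = start_bp, end_bp, width_px

    def bp_to_px(self, position_bp: int) -> float:
        span = self.end_bp - self.start_bp
        return (position_bp - self.start_bp) / span * self.width_px

    def pan(self, delta_px: float) -> None:
        span = self.end_bp - self.start_bp
        shift = int(delta_px / self.width_px * span)
        self.start_bp += shift
        self.end_bp += shift

    def zoom(self, factor: float, anchor_px: float) -> None:
        anchor_bp = self.start_bp + (self.end_bp - self.start_bp) * anchor_px / self.width_px
        half = (self.end_bp - self.start_bp) / (2 * factor)
        self.start_bp, self.end_bp = int(anchor_bp - half), int(anchor_bp + half)

view = GenomeViewport(start_bp=1_000_000, end_bp=1_100_000, width_px=800)
view.zoom(2.0, anchor_px=400)  # zoom in around the centre of the canvas
print(view.start_bp, view.end_bp, view.bp_to_px(1_050_000))
```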
Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.
Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia
2017-02-01
The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). Most participants perceived the CRT findings to be more understandable than the VRT findings, but that difference was not statistically significant (p > 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.
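A paired-ratings comparison of the kind reported above (the same cases rated once on VRT and once on CRT reconstructions) is commonly analyzed with a Wilcoxon signed-rank test. The sketch below shows such an analysis with scipy; the ratings are invented placeholders, not the study's questionnaire data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired realism ratings (1-5) for the same 10 cases,
# rated on the VRT reconstruction and on the CRT reconstruction.
vrt_ratings = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 4])
crt_ratings = np.array([4, 4, 5, 3, 4, 4, 4, 3, 4, 5])

stat, p = wilcoxon(crt_ratings, vrt_ratings)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")
```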
Scalable algorithms for 3D extended MHD.
NASA Astrophysics Data System (ADS)
Chacon, Luis
2007-11-01
In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU time ~ N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU time ~ N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm. 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys. 188 (2), 573-592 (2003)].
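The statement that the parabolization step can be read as a Schur factorization of the Jacobian can be made concrete for a generic 2x2 block splitting of the linearized system; the identity below is the standard one such preconditioners exploit, with generic block labels rather than the specific XMHD variables.

```latex
% Block LU (Schur) factorization of a 2x2 block Jacobian
J =
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & B \\ 0 & S \end{pmatrix},
\qquad
S = D - C A^{-1} B .
```

Solving with J then reduces to (approximate) solves with A and with the Schur complement S, and it is the parabolized formulation that makes S amenable to optimal multilevel methods, yielding the overall CPU ~ N scaling claimed above.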
3D Volume Rendering and 3D Printing (Additive Manufacturing).
Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T
2018-07-01
Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.
Europa Lander Mission Concept (Artist Rendering)
2017-02-08
This artist's rendering illustrates a conceptual design for a potential future mission to land a robotic probe on the surface of Jupiter's moon Europa. The lander is shown with a sampling arm extended, having previously excavated a small area on the surface. The circular dish on top is a dual-purpose high-gain antenna and camera mast, with stereo imaging cameras mounted on the back of the antenna. Three vertical shapes located around the top center of the lander are attachment points for cables that would lower the lander from a sky crane, which is envisioned as the landing system for this mission concept. http://photojournal.jpl.nasa.gov/catalog/PIA21048
Data Cube Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Gárate, Matías
2017-06-01
With the increasing data acquisition rates from observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able to use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used for both scientific presentations as well as public outreach.
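As an illustration of the import path mentioned above, the sketch below converts a FITS data cube into the raw voxel file that older Blender releases accept as a Voxel Data texture. The .bvox layout assumed here (a four-int32 header of nx, ny, nz, frame count followed by float32 voxel values) and the file names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from astropy.io import fits

# Load a data cube (FITS axes are typically ordered z, y, x) and normalize to [0, 1].
cube = fits.getdata("cube.fits").astype(np.float32)  # hypothetical input file
cube = np.nan_to_num(cube)
cube = (cube - cube.min()) / (cube.max() - cube.min() + 1e-12)

nz, ny, nx = cube.shape
with open("cube.bvox", "wb") as f:
    # Assumed legacy Blender voxel-data header: nx, ny, nz, number of frames.
    np.array([nx, ny, nz, 1], dtype=np.int32).tofile(f)
    # Voxel values, written with x varying fastest (C order of the z, y, x array).
    cube.astype(np.float32).tofile(f)

# The resulting cube.bvox can then be attached to a volume material as a
# Voxel Data texture and rendered or animated from the Blender interface.
```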
High temperature solid electrolyte fuel cell with ceramic electrodes
Marchant, David D.; Bates, J. Lambert
1984-01-01
A solid oxide electrolyte fuel cell is described having a central electrolyte comprised of a HfO₂ or ZrO₂ ceramic stabilized and rendered ionically conductive by the addition of Ca, Mg, Y, La, Nd, Sm, Gd, Dy, Er, or Yb. The electrolyte is sandwiched between porous electrodes of a HfO₂ or ZrO₂ ceramic stabilized by the addition of a rare earth and rendered electronically conductive by the addition of In₂O₃. Alternatively, the anode electrode may be made of a metal such as Co, Ni, Ir, Pt, or Pd.
High temperature solid electrolyte fuel cell with ceramic electrodes
Bates, J.L.; Marchant, D.D.
A solid oxide electrolyte fuel cell is described having a central electrolyte comprised of a HfO₂ or ZrO₂ ceramic stabilized and rendered ionically conductive by the addition of Ca, Mg, Y, La, Nd, Sm, Gd, Dy, Er, or Yb. The electrolyte is sandwiched between porous electrodes of a HfO₂ or ZrO₂ ceramic stabilized by the addition of a rare earth and rendered electronically conductive by the addition of In₂O₃. Alternatively, the anode electrode may be made of a metal such as Co, Ni, Ir, Pt, or Pd.
Visualizing Astronomical Data with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2014-01-01
We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.
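Camera movement of the kind reviewed above is typically scripted through Blender's Python API by keyframing the camera object. The following minimal sketch, to be run inside Blender (bpy is only available there), orbits a camera around the origin; the object name "Camera" is the default scene camera and is an assumption.

```python
# Run inside Blender's Python console or as a script.
import math
import bpy

camera = bpy.data.objects["Camera"]  # assumes the default camera exists
frames, radius, height = 120, 10.0, 3.0

for frame in range(1, frames + 1):
    angle = 2.0 * math.pi * (frame - 1) / frames
    camera.location = (radius * math.cos(angle), radius * math.sin(angle), height)
    camera.keyframe_insert(data_path="location", frame=frame)

bpy.context.scene.frame_end = frames
```

To keep the camera aimed at the data cube while it orbits, one would additionally keyframe its rotation or add a Track To constraint targeting an empty at the origin.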
Graphics performance in rich Internet applications.
Hoetzlein, Rama C
2012-01-01
Discussion of rendering performance for rich Internet applications (RIAs) has recently focused on the debate between Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance in RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated.
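The scalability question posed above (how frame rate degrades as the number of animated sprites grows) can be probed with a very small benchmark harness. The sketch below measures raw bitmap compositing throughput in Python with numpy as a framework-neutral stand-in for the Flash/HTML5/WebGL tests, so its absolute numbers are not comparable to the browser results; it only illustrates the shape of the methodology.

```python
import time
import numpy as np

def composite_frame(canvas: np.ndarray, sprite: np.ndarray, positions: np.ndarray) -> None:
    """Blit one sprite at each (x, y) position onto the canvas (no alpha, no clipping)."""
    h, w = sprite.shape[:2]
    for x, y in positions:
        canvas[y:y + h, x:x + w] = sprite

canvas = np.zeros((720, 1280, 3), dtype=np.uint8)
sprite = np.full((32, 32, 3), 255, dtype=np.uint8)
rng = np.random.default_rng(0)

for n_sprites in (100, 1_000, 10_000):
    positions = rng.integers(0, [1280 - 32, 720 - 32], size=(n_sprites, 2))
    start = time.perf_counter()
    frames = 0
    while time.perf_counter() - start < 1.0:  # run each configuration for about one second
        composite_frame(canvas, sprite, positions)
        frames += 1
    print(f"{n_sprites:>6} sprites: ~{frames} frames/s")
```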
Compositions containing amino acids, phosphate and manganese and their uses
Daly, Michael J.; Gaidamakova, Elena K.
2016-01-12
The invention provides methods of producing vaccines directed against microorganisms, with the methods comprising culturing, harvesting and/or suspending the microorganism in the presence of a radiation-protective composition and irradiating the bacteria or viruses with a dose of radiation sufficient to render the microorganism replication-deficient and/or non-infective. The radiation-protective compositions used in the methods of the present invention comprise at least one nucleoside, at least one antioxidant and at least one small peptide. The invention also provides methods of rendering bacteria in culture resistant to ionizing radiation (IR), with these methods comprising culturing the bacteria in the presence of a radiation-protective composition.
Compositions containing amino acids, phosphate and manganese and their uses
Daly, Michael J.; Gaidamakova, Elena K.
2017-09-12
The invention provides methods of producing vaccines directed against microorganisms, with the methods comprising culturing, harvesting and/or suspending the microorganism in the presence of a radiation-protective composition and irradiating the bacteria or viruses with a dose of radiation sufficient to render the microorganism replication-deficient and/or non-infective. The radiation-protective compositions used in the methods of the present invention comprise at least one nucleoside, at least one antioxidant and at least one small peptide. The invention also provides methods of rendering bacteria in culture resistant to ionizing radiation (IR), with these methods comprising culturing the bacteria in the presence of a radiation-protective composition.
Kerogen extraction from subterranean oil shale resources
Looney, Mark Dean; Lestz, Robert Steven; Hollis, Kirk; Taylor, Craig; Kinkead, Scott; Wigand, Marcus
2010-09-07
The present invention is directed to methods for extracting a kerogen-based product from subsurface (oil) shale formations, wherein such methods rely on fracturing and/or rubblizing portions of said formations so as to enhance their fluid permeability, and wherein such methods further rely on chemically modifying the shale-bound kerogen so as to render it mobile. The present invention is also directed at systems for implementing at least some of the foregoing methods. Additionally, the present invention is also directed to methods of fracturing and/or rubblizing subsurface shale formations and to methods of chemically modifying kerogen in situ so as to render it mobile.
Kerogen extraction from subterranean oil shale resources
Looney, Mark Dean [Houston, TX; Lestz, Robert Steven [Missouri City, TX; Hollis, Kirk [Los Alamos, NM; Taylor, Craig [Los Alamos, NM; Kinkead, Scott [Los Alamos, NM; Wigand, Marcus [Los Alamos, NM
2009-03-10
The present invention is directed to methods for extracting a kerogen-based product from subsurface (oil) shale formations, wherein such methods rely on fracturing and/or rubblizing portions of said formations so as to enhance their fluid permeability, and wherein such methods further rely on chemically modifying the shale-bound kerogen so as to render it mobile. The present invention is also directed at systems for implementing at least some of the foregoing methods. Additionally, the present invention is also directed to methods of fracturing and/or rubblizing subsurface shale formations and to methods of chemically modifying kerogen in situ so as to render it mobile.