Science.gov

Sample records for 3d stereo visualization

  1. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we perceive objects three-dimensionally because our two eyes view them from slightly different positions. As a consequence, the eyes see slightly different images, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, in cinemas, etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO, we test the methods with data from SOHO, which provides different viewpoints through solar rotation. This restricts the analysis to structures that remain stationary for several days. Real STEREO data, however, will not be affected by these limitations.
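    The two-colour anaglyph technique mentioned above is simple to sketch in code (a minimal illustration, not from the paper; it assumes 8-bit grayscale left/right views stored as lists of rows): the red channel is taken from the left image and the green and blue channels from the right, so red-cyan glasses route each view to the correct eye.

```python
def make_red_cyan_anaglyph(left, right):
    """Combine two grayscale images (lists of rows of 0-255 ints)
    into a red-cyan anaglyph: red from the left view, green and
    blue from the right view. Each output pixel is (R, G, B)."""
    anaglyph = []
    for left_row, right_row in zip(left, right):
        anaglyph.append([(l, r, r) for l, r in zip(left_row, right_row)])
    return anaglyph

# Tiny 2x2 example.
left = [[10, 20], [30, 40]]
right = [[50, 60], [70, 80]]
print(make_red_cyan_anaglyph(left, right)[0][0])  # (10, 50, 50)
```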

  2. 3D panorama stereo visual perception centering on the observers

    NASA Astrophysics Data System (ADS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-09-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality.

  3. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser over the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a means to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of 3D geoscience data models on the Internet is a challenging task. In this paper, we show that geoscience data can be viewed as 3D anaglyphs in any web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved with suitable 3D red-cyan glasses, although red-blue or red-green spectacles can alternatively be used. The middle mouse wheel can be used to zoom the anaglyph image in and out in a Web browser. Anaglyph 3D stereo imagery is an important and easy way to understand underground geologic systems and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and anomalous active tectonic characteristics. To conclude, anaglyph 3D stereo imagery provides a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and active tectonics.

  4. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system, and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision Team in this context are: instrument design support and calibration processing; development of 3D vision processing functionality; visualization, i.e. development of a 3D visualization tool for scientific data analysis; 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework PRoViP establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate, and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features can be taken.

  5. Intraoperative 3D stereo visualization for image-guided cardiac ablation

    NASA Astrophysics Data System (ADS)

    Azizian, Mahdi; Patel, Rajni

    2011-03-01

    There are commercial products which provide 3D rendered volumes, reconstructed from electro-anatomical mapping and/or pre-operative CT/MR images of a patient's heart, with tools for highlighting target locations for cardiac ablation. However, it is not possible to update the three-dimensional (3D) volume intraoperatively to provide the interventional cardiologist with up-to-date feedback at each instant of time. In this paper, we describe a system we have developed for real-time three-dimensional stereo visualization for cardiac ablation. A 4D ultrasound probe is used to acquire and update a 3D image volume. A magnetic tracking device tracks the distal part of the ablation catheter in real time, and a master-slave robot-assisted system is developed for actuation of a steerable catheter. The three-dimensional ultrasound image volumes undergo processing to make the heart tissue and the catheter more visible. The rendered volume is shown in a virtual environment. The catheter can also be added as a virtual tool to this environment to achieve a higher update rate on the catheter's position. The ultrasound probe is also equipped with an EM tracker, which is used for online registration of the ultrasound images and the catheter tracking data. The whole augmented reality scene can be shown stereoscopically to enhance depth perception for the user. We have used transthoracic echocardiography (TTE) instead of the conventional transoesophageal (TEE) or intracardiac (ICE) echocardiogram. A beating-heart model has been used to perform the experiments. This method can be used for both diagnostic and therapeutic applications, as well as for training interventional cardiologists.

  6. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique to obtain a smoother representation. This methodology is being refined especially for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for reproducibility and accuracy.
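    The triangulation step underlying any such depth-from-stereo scheme can be sketched in one line of arithmetic (a schematic example, not the authors' code; the focal length, baseline, and disparity values are invented): for a rectified stereo pair, depth Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the measured disparity in pixels.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth of a matched point in a rectified stereo pair:
    Z = f * B / d, in the same units as the baseline."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Example: f = 1200 px, B = 60 mm, d = 15 px  ->  Z = 4800 mm.
print(depth_from_disparity(15, 1200, 60))  # 4800.0
```

    Note the inverse relation: halving the measured disparity doubles the estimated depth, which is why small disparity errors matter most for distant structures.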

  7. Visual Discomfort with Stereo 3D Displays when the Head is Not Upright

    PubMed Central

    Kane, David; Held, Robert T.; Banks, Martin S.

    2012-01-01

    Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer’s inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer’s head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer’s head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays. PMID:24058723
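    The geometry behind this result can be made concrete with a short numerical sketch (illustrative only, not the authors' stimulus code): rolling the head (or display) by an angle θ decomposes a purely horizontal on-screen disparity d into a horizontal component d·cosθ and a vertical component d·sinθ at the eyes, and it is the latter that drives the unusual vertical vergence demand.

```python
import math

def disparity_at_eyes(horizontal_disparity, roll_deg):
    """Split a purely horizontal on-screen disparity into the
    horizontal and vertical components experienced by eyes rolled
    by roll_deg relative to the display."""
    theta = math.radians(roll_deg)
    return (horizontal_disparity * math.cos(theta),
            horizontal_disparity * math.sin(theta))

# A 10-arcmin horizontal disparity viewed with 30 deg of roll
# demands a 5-arcmin vertical vergence movement.
h, v = disparity_at_eyes(10.0, 30.0)
print(round(h, 2), round(v, 2))  # 8.66 5.0
```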
  9. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  11. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  12. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    NASA Astrophysics Data System (ADS)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability of advanced scanning and 3-D imaging technologies in ophthalmology practice in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can yield acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques, and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
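    Once cup and disc boundaries are segmented, the two screening parameters named above are straightforward to compute (a schematic sketch assuming circular boundary fits; the radii are invented): CDR is the ratio of diameters and CAR the ratio of areas, so for circles CAR = CDR².

```python
import math

def cdr_car_from_radii(cup_radius, disc_radius):
    """Cup-to-disc diameter ratio (CDR) and area ratio (CAR)
    for circular fits to the segmented cup and disc boundaries."""
    cdr = (2 * cup_radius) / (2 * disc_radius)
    car = (math.pi * cup_radius ** 2) / (math.pi * disc_radius ** 2)
    return cdr, car

cdr, car = cdr_car_from_radii(0.9, 1.5)  # radii in mm (hypothetical)
print(round(cdr, 2), round(car, 2))  # 0.6 0.36
```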

  13. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth percepts that might otherwise be lacking. In addition, the third dimension can serve as an additional axis along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste; in the last example, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays in aerospace crew stations to meet anticipated needs for the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  14. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool for visualizing the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need for such "gamma cameras" to visualize the distribution of radioisotopes. In response, we propose a portable Compton camera comprising 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that the detection efficiency reaches 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, enabling 3-D positional measurement of radioactive isotopes for the first time. From simulated data for a single point source, we verified that the source position and distance can typically be determined to within 2 meters' accuracy, and from simulated data for two point sources we confirmed that multiple sources are clearly separated by event selection.
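    The triangular-surveying idea, with each Compton camera yielding a direction toward the source, reduces to intersecting two back-projected lines in 3-D. A minimal sketch (hypothetical geometry, not the authors' reconstruction code) estimates the source as the midpoint of the common perpendicular of the two lines:

```python
def triangulate(p1, d1, p2, d2):
    """Closest point to two 3-D lines p1 + s*d1 and p2 + t*d2:
    the midpoint of their common perpendicular segment."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only for parallel lines
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = tuple(p + s * x for p, x in zip(p1, d1))  # closest point on line 1
    q2 = tuple(p + t * x for p, x in zip(p2, d2))  # closest point on line 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two cameras 4 m apart, each back-projecting a line toward a
# source placed at (0, 0, 10): the lines meet exactly there.
print(triangulate((-2, 0, 0), (2, 0, 10), (2, 0, 0), (-2, 0, 10)))  # (0.0, 0.0, 10.0)
```

    With real, noisy direction estimates the two lines are skew rather than intersecting, which is why the midpoint formulation is used instead of an exact intersection.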

  15. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image-plus-depth format suitable for rendering on the multiview auto-stereoscopic displays of Philips. The movie industry's recent interest in 3D has significantly increased the availability of stereo material, making conversion from stereo to the input formats of 3D displays an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood, while ensuring correct depth discontinuities through the inclusion of image constraints. The resulting high-quality, image-aligned depth maps proved an excellent match with our 3D displays.
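    The multiple-footprint idea can be sketched on a single scanline (a toy illustration with invented data, not the Philips algorithm): each window size ("footprint") yields its own best-SSD disparity candidate per pixel, and the candidates are then fused, here simply by taking the median.

```python
def ssd(left, right, center, disp, half):
    """Sum of squared differences between a window around `center`
    in the left row and the same window shifted by `disp` in the right."""
    return sum((left[i] - right[i - disp]) ** 2
               for i in range(center - half, center + half + 1))

def disparity_candidates(left, right, center, max_disp, footprints=(1, 2, 3)):
    """One best-SSD disparity candidate per window half-size (footprint)."""
    cands = []
    for half in footprints:
        best = min(range(max_disp + 1),
                   key=lambda disp: ssd(left, right, center, disp, half))
        cands.append(best)
    return cands

# Synthetic scanline: the right row is the left row shifted 2 px.
left = [0, 0, 10, 80, 90, 20, 5, 0, 0, 0, 0, 0]
right = left[2:] + [0, 0]
cands = disparity_candidates(left, right, center=6, max_disp=3)
print(cands, sorted(cands)[len(cands) // 2])  # [2, 2, 2] 2
```

    In the real algorithm the fusion step is the surface filtering described above, not a plain median, but the candidate-generation structure is the same.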

  16. A search for Ganymede stereo images and 3D mapping opportunities

    NASA Astrophysics Data System (ADS)

    Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.

    2017-10-01

    We used 126 Voyager-1 and -2 images as well as 87 Galileo images of Ganymede and searched for pairs suitable for digital 3D stereo analysis. Specifically, we considered image resolutions, stereo angles, and matching illumination conditions of the respective stereo pairs. Lists of regions and local areas with stereo coverage were compiled. We present anaglyphs, and for selected areas not previously discussed we constructed digital elevation models and associated visualizations. The terrain characteristics in the models agree with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.

  17. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  18. The 3D Heliosphere: What Can We Learn from STEREO?

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Six, N. Frank (Technical Monitor)

    2002-01-01

    Many techniques have been used to study the 3D heliosphere, the earliest probably being the analysis of comet tails. I will list most of these and discuss a few, focusing on existing multi-point studies. The result, from more than 50 years of study, is that a lot is known. This has led to a good picture of the quasi-steady heliosphere and its relation to the 3D corona. But there are also some large gaps, and STEREO is designed to address one of these: the timing, size, geometry, mass, speed, direction, and 3D propagation of coronal mass ejections (CMEs). In spite of the statistical analysis of a large data archive, imaginative use of in situ and remote measurements, and extensive modeling, these properties of CMEs are poorly known. I will outline an example of how STEREO instruments might work together to develop a far better 3D description of CMEs in the 3D heliosphere, and note that other examples are described in the Science Definition Team report and in the science objectives given by the four instrument teams. Since the two STEREO spacecraft are not intended to work in isolation, I will also outline how they might be used in combination with ground-based and other spacecraft observations.

  19. Open-GL-based stereo system for 3D measurements

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Gehrhoff, Anja; Neifer, Markus

    2000-05-01

    A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs is presented. First, the motivation for the development is given: the ability to evaluate stereo images. As the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either highly specialized, built on high-end graphics workstations with pricing to match, or simple ones with restricted measuring functionality. A new conception is shown that avoids special high-end graphics hardware while providing the required measuring functionality. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system are described, along with the corresponding requirements the system has to meet. The key role of OpenGL is explained: it supplies elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects such as modularity and hardware independence, and their value for the solution, are discussed. Finally, some sample functions concerned with image display and handling are presented in more detail.

  20. Impact of Building Heights on 3D Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated heights can contain errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land use blocks with low FAR values often have small errors, owing to small height errors for the low buildings in those blocks, while blocks with high FAR values often have large errors, owing to large height errors for the high buildings. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, to improve precision.
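    The FAR relationship described above can be made concrete with a toy calculation (invented numbers, not the paper's data): FAR is total floor area divided by block land area, with the floor count of each building estimated from its height, so an underestimated stereo height directly underestimates FAR.

```python
def far(buildings, block_area, floor_height=3.0):
    """Floor Area Ratio: total floor area / block land area.
    Each building is (footprint_area_m2, height_m); the floor
    count is estimated as height / floor_height, whole floors."""
    total_floor_area = sum(fp * max(1, round(h / floor_height))
                           for fp, h in buildings)
    return total_floor_area / block_area

true_heights = [(500, 30.0), (400, 15.0)]  # 10 and 5 floors
underest = [(500, 24.0), (400, 12.0)]      # stereo heights 20% low
print(far(true_heights, 10000), far(underest, 10000))  # 0.7 0.56
```

    Here a 20% height underestimate propagates directly into a 20% FAR underestimate, illustrating the correlation the paper reports.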

  1. 3D visualization for research and teaching in geosciences

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad

    2010-05-01

    Today, we are provided with an abundance of visual images from a variety of sources. In research, data visualization plays an important part, and sophisticated models require special tools to enhance the comprehension of modeling results. Helping students gain visualization skills is likewise an important way to foster greater comprehension when studying the geosciences. For these reasons we built a 3D stereo-visualization system, or GeoWall, that permits in-depth exploration of 3D modeling results and provides students with an attractive way to visualize data. In this study, we present the architecture of such a low-cost system and how it is used. The system consists of three main parts: a DLP 3D-capable display, a high-performance workstation, and several pairs of wireless liquid-crystal shutter eyewear. The system is capable of 3D stereo visualization of Google Earth and/or 3D numerical modeling results; any 2D image or movie can also be instantly viewed in 3D stereo. Such a flexible, easy-to-use visualization system has proved to be an essential research and teaching tool.

  2. Depth-viewing-volume increase by collimation of stereo 3-D displays

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Parrish, Russell V.; Williams, Steven P.

    1990-01-01

    Typical stereo 3-D displays are produced using a single, time-multiplexed image source to present disparate, directly viewed views (stereo pairs) of the visual scene to each eye. However, current stereoscopic viewing techniques impose severe restrictions on the effective viewing volume of the stereo 3-D display. Recent experiments at Langley Research Center determined that the effective region of stereopsis cuing, the depth-viewing volume, increased with increasing viewer-to-screen distance. This increase was accompanied by a decrease in the field of view of the system. It was postulated that collimation of the display source would dramatically increase the depth-viewing volume, since the effective accommodation distance would be near infinity, while maintaining the field of view at required levels. The goal of this proof-of-concept effort was to investigate whether the application of collimating optics to the stereo display source would indeed provide a dramatic increase in the depth-viewing volume of stereo 3-D displays.

  3. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy yields more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
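    The added matching constraint can be sketched as follows (a simplified illustration, not the authors' implementation; the event tuples and thresholds are invented): two events from the left and right sensors are accepted as a match only if they are nearly simultaneous, lie on the same epipolar row, and their locally estimated edge orientations agree.

```python
def is_match(ev_left, ev_right, dt_us=50, d_orient_deg=15):
    """Events are (timestamp_us, x, y, orientation_deg).
    Accept a stereo match only if the events are nearly simultaneous,
    lie on the same row (rectified epipolar constraint), and the
    filter-estimated edge orientations agree. The x difference of an
    accepted pair then gives the disparity."""
    t1, x1, y1, o1 = ev_left
    t2, x2, y2, o2 = ev_right
    d_orient = abs(o1 - o2) % 180
    d_orient = min(d_orient, 180 - d_orient)  # orientations wrap at 180 deg
    return abs(t1 - t2) <= dt_us and y1 == y2 and d_orient <= d_orient_deg

print(is_match((1000, 40, 12, 45), (1020, 35, 12, 50)))   # True
print(is_match((1000, 40, 12, 45), (1020, 35, 12, 120)))  # False: different edge
```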

  5. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface-vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities than ever before, significantly increasing the amount of science activity that can be executed. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis, instead of after weeks or months as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and to achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest, and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  6. A 3D view of eruptive filaments by STEREO

    NASA Astrophysics Data System (ADS)

    Gosain, Sanjay; Schmieder, Brigitte; Venkatakrishnan, P.; Chandra, Ramesh; Artzner, Guy

    STEREO/SECCHI/EUVI A and B observe different views of the eruption of a quiescent filament. We concentrate on two events: (i) the May 20 to 22, 2008 event (A and B separated by 52.4 degrees), and (ii) the September 25 to 26, 2009 event (A and B separated by more than 100 degrees). Using different reconstruction techniques, we obtained a three-dimensional view of untwisted flux ropes in He II 304 Angstrom, with fine structures. The entire disappearance phase lasted more than ten hours. The filament evolved very slowly (about 5 km/s) from a dense structure with a thick spine into fine threads. Individual threads are seen to be oscillating and rising to an altitude of about 150 Mm with velocities of about 100 km/s. The plasma disappears by diffusion in the corona. Weak CME events were recorded by LASCO at the beginning of the disappearance. In this paper we present the dynamics of the filament eruptions as viewed in 3D by STEREO using different methods, and explore the causes and consequences of the filament disappearance.

  7. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

    The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involved viewing of complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and the need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  8. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that distributed terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  9. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  11. Benefits, limitations, and guidelines for application of stereo 3-D display technology to the cockpit environment

    NASA Technical Reports Server (NTRS)

    Williams, Steven P.; Parrish, Russell V.; Busquets, Anthony M.

    1992-01-01

    A survey of research results from a program initiated by NASA Langley Research Center is presented. The program addresses stereo 3-D pictorial displays from a comprehensive standpoint. Human factors issues, display technology aspects, and flight display applications are also considered. Emphasis is placed on the benefits, limitations, and guidelines for application of stereo 3-D display technology to the cockpit environment.

  12. 3D Stereo Data Visualization and Representation

    DTIC Science & Technology

    1994-09-01

    addition, the state of our minds, our psychological make-up, and human factors play a very important role in this process. 2.2.1.1 Ambient Mode and...

  13. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce matching errors, dual constraints combining region matching and pixel matching are established for matching optimization. From the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
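
    The final back-projection step follows the standard rectified binocular model, where depth is Z = f·B/d for focal length f, baseline B, and disparity d. A minimal sketch, with focal length, baseline, and principal point chosen purely for illustration:

```python
def triangulate(u, v, d, f, B, cx, cy):
    """Back-project a rectified pixel (u, v) with disparity d (pixels)
    into camera coordinates, given focal length f (pixels), baseline B
    (meters), and principal point (cx, cy)."""
    Z = f * B / d          # depth from disparity
    X = (u - cx) * Z / f   # lateral offset scales with depth
    Y = (v - cy) * Z / f
    return X, Y, Z

# Example: f = 700 px, baseline 0.12 m, principal point (320, 240)
X, Y, Z = triangulate(u=460, v=240, d=7.0, f=700.0, B=0.12, cx=320.0, cy=240.0)
```

    Note the inverse relation between disparity and depth: halving d doubles Z, which is why small matching errors matter most for distant terrain.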

  14. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
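
    The anaglyphic principle described above can be sketched in a few lines: the left view drives the red channel and the right view the green and blue (cyan) channels. This is a generic illustration of the technique, not AViz code.

```python
import numpy as np

def anaglyph(left_gray, right_gray):
    """Combine two grayscale views into a red-cyan anaglyph image:
    left view in the red channel, right view in green and blue."""
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray    # red   <- left eye
    rgb[..., 1] = right_gray   # green <- right eye
    rgb[..., 2] = right_gray   # blue  <- right eye
    return rgb

left = np.full((4, 4), 200, dtype=np.uint8)
right = np.full((4, 4), 50, dtype=np.uint8)
img = anaglyph(left, right)
```

    The two input views would normally be rendered from camera positions displaced by a small horizontal offset; red/cyan glasses then route each channel to the correct eye.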

  15. 3-D Flyover Visualization of Veil Nebula

    NASA Image and Video Library

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  16. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be for control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient to program a large and complicated system, that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution for this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in virtual 3D world. One of the major features of the environment is the 3D representation of concurrent process. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful to check relationship among large number of processes or processors) and the time chart (which is useful to check precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationship among many concurrent processes. To realize the 3D representation, a technology to enable easy handling of virtual 3D object is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  17. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper presents the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content, and discusses several architectural and engineering design visualizations we have produced.

  18. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming for the implementation of a 3D exaggeration algorithm for the ROI (region of interest), which adjusts and synthesizes the disparity values of the ROI in real time. We comment on the aperture pattern for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasis effect.
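
    Stripped to its core, the ROI-exaggeration idea amounts to scaling the disparity values inside a region-of-interest mask, which deepens the perceived depth of that region relative to its surroundings. The sketch below is a simplified CPU stand-in for the paper's GPU/CUDA implementation; the function name and gain value are illustrative.

```python
import numpy as np

def exaggerate_roi(disparity, roi_mask, gain=1.5):
    """Scale disparity inside the region of interest to exaggerate its
    perceived depth; pixels outside the ROI are left untouched."""
    out = disparity.astype(float).copy()
    out[roi_mask] *= gain
    return out

disp = np.full((6, 6), 10.0)          # uniform disparity map
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True                 # central region of interest
boosted = exaggerate_roi(disp, mask, gain=2.0)
```

    A full implementation would also re-synthesize the right view from the modified disparity map before display.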

  19. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) resolution, or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) at 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
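
    Disparity-by-cross-correlation can be illustrated with a scanline sum-of-absolute-differences (SAD) search, the simplest block-matching cost. This toy version ignores the fixed-point DSP optimizations the system above relies on; names and window sizes are illustrative.

```python
import numpy as np

def sad_disparity(left, right, y, x, block=3, max_d=8):
    """Disparity at left-image pixel (y, x) found by a sum-of-absolute-
    differences search along the same scanline of the rectified right image."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(int)
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(int)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(20, 40), dtype=np.uint8)
right = np.zeros_like(left)
right[:, :-4] = left[:, 4:]     # right view shifted by a true disparity of 4
d = sad_disparity(left, right, y=10, x=20)
```

    A production system evaluates this cost for every pixel and disparity candidate, which is where the "millions of disparities per second" figure comes from.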

  20. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  2. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  3. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed, and analyzed. Like other models, maps are simplified representations of the real world; hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One objective of this research was to examine ArcGIS and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications, and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  4. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  5. Recent research results in stereo 3-D pictorial displays at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.

    1990-01-01

    Recent results from a NASA-Langley program which addressed stereo 3D pictorial displays from a comprehensive standpoint are reviewed. The program dealt with human factors issues and display technology aspects, as well as flight display applications. The human factors findings include addressing a fundamental issue challenging the application of stereoscopic displays in head-down flight applications, with the determination that stereoacuity is unaffected by the short-term use of stereo 3D displays. While stereoacuity has been a traditional measurement of depth perception abilities, it is a measure of relative depth, rather than actual depth (absolute depth). Therefore, depth perception effects based on size and distance judgments and long-term stereo exposure remain issues to be investigated. The applications of stereo 3D to pictorial flight displays within the program have repeatedly demonstrated increases in pilot situational awareness and task performance improvements. Moreover, these improvements have been obtained within the constraints of the limited viewing volume available with conventional stereo displays. A number of stereo 3D pictorial display applications are described, including recovery from flight-path offset, helicopter hover, and emulated helmet-mounted display.

  6. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360-degree spherical panoramas is reviewed. The 3D software package Blender, coupled with Python and the Google Spatial Media module, is used to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can apply to different kinds of astronomical data, including 3D stellar and galaxy catalogs, images, and planetary maps.
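
    The core of a 360-degree spherical (equirectangular) panorama is the mapping from a viewing direction to pixel coordinates. A minimal sketch under an assumed axis convention (+z forward, +y up, longitude growing to the right); this is not necessarily the convention used by Blender or the Spatial Media module.

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a viewing direction to pixel coordinates in an equirectangular
    (360 x 180 degree) panorama, under the stated axis convention."""
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    lon = math.atan2(dx, dz)            # longitude in [-pi, pi]
    lat = math.asin(dy / norm)          # latitude in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v

# Looking straight ahead lands in the center of the panorama
u, v = direction_to_equirect(0.0, 0.0, 1.0, width=4096, height=2048)
```

    Rendering a panorama is the inverse: each output pixel is converted back to a direction, and the scene is sampled along that ray.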

  7. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    PubMed

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.
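
    The sensitivity index d' used above is, in standard signal detection theory, the difference between the inverse-normal-transformed hit and false-alarm rates. A minimal computation using only the Python standard library (the study's exact analysis may differ):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Symmetric example: 84% hits, 16% false alarms gives d' of about 2
sensitivity = d_prime(0.84, 0.16)
```

    Rates of exactly 0 or 1 must be corrected (e.g., by a 1/(2N) adjustment) before applying the inverse CDF, since z diverges at the extremes.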

  8. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  9. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely Open-Source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and disparity map recovery. Lastly, 2D and 3D filtering techniques on both the disparity map and the produced point cloud are implemented to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.).
Developed to be as fast as possible, WASS
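
    One of the steps such a pipeline automates, mean sea-plane estimation, reduces in its simplest form to a least-squares plane fit over the reconstructed point cloud. An illustrative sketch, not WASS's actual estimator:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an (N, 3) point cloud,
    a simple stand-in for a mean sea-plane estimation step."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic cloud lying exactly on the plane z = 0.1x - 0.2y + 3
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3.0
cloud = np.column_stack([xy, z])
a, b, c = fit_plane(cloud)
```

    Once the plane is known, wave elevations can be expressed as signed distances from it, which puts all frames in a common sea-level reference.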

  10. Automatic 3D reconstruction of quasi-planar stereo Scanning Electron Microscopy (SEM) images.

    PubMed

    Roy, S; Meunier, J; Marian, A M; Vidal, F; Brunette, I; Costantino, S

    2012-01-01

    Scanning Electron Microscopy (SEM) is widely used in science to characterize the surface roughness of materials. Three-dimensional information can be obtained with SEM based on stereovision techniques. A stereo pair is typically obtained by tilting the sample by a few degrees. In this paper we present a fully automated method for 3D reconstruction from a SEM stereo pair without any particular constraint. Results are presented for corneal stromal surfaces.
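
    For a eucentric tilt pair, surface height is commonly related to the measured parallax p by z = p / (2 sin(α/2)), where α is the tilt between the two views; a small round-trip sketch of that photogrammetric relation (the paper's exact reconstruction method may differ):

```python
import math

def height_from_parallax(parallax, tilt_deg):
    """Height difference from the parallax p measured between a eucentric
    SEM tilt pair separated by tilt_deg: z = p / (2*sin(tilt/2))."""
    return parallax / (2.0 * math.sin(math.radians(tilt_deg) / 2.0))

# Round trip: a 5 um step imaged with a 6 degree tilt between views
z_true = 5.0
p = 2.0 * z_true * math.sin(math.radians(6.0) / 2.0)  # forward model
z_est = height_from_parallax(p, 6.0)
```

    The small divisor at typical tilts of a few degrees is why parallax must be measured very precisely: a small matching error is amplified into a large height error.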

  11. Stereo 3D vision adapter using commercial DIY goods

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Ohara, Takashi

    2009-10-01

    A conventional display can show only one screen, and its area cannot simply be enlarged, for example to twice the size. A mirror, meanwhile, supplies the same image, but the mirror image is usually reversed. Assume that the images on the original screen and on the virtual screen in the mirror are completely different and that both can be displayed independently; it would then be possible to double the screen area. This extension method enables observers to see the virtual image plane and thus enlarges the screen area twofold. Although the display region is doubled, this virtual display alone cannot produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonal polarized image projection.

  12. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an Open-Source stereo processing pipeline for sea waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field; it implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library; and, lastly, it includes a set of filtering techniques on both the disparity map and the produced point cloud to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
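
    The point-cloud filtering stage can be illustrated with a robust median/MAD elevation filter that rejects isolated spikes such as sun-glare points. This is a generic stand-in, not WASS's actual filter set; the threshold constant 1.4826 scales the MAD to a normal-distribution standard deviation.

```python
import numpy as np

def filter_outliers(points, k=3.0):
    """Drop points whose height deviates more than k scaled-MADs from the
    median elevation of an (N, 3) cloud -- a crude spike rejector."""
    z = points[:, 2]
    med = np.median(z)
    mad = np.median(np.abs(z - med))
    keep = np.abs(z - med) <= k * 1.4826 * mad
    return points[keep]

cloud = np.array([[0.0, 0.0, 0.1],
                  [1.0, 0.0, -0.1],
                  [0.0, 1.0, 0.2],
                  [1.0, 1.0, 0.0],
                  [0.5, 0.5, 50.0]])   # spurious sun-glare point
clean = filter_outliers(cloud)
```

    Median-based statistics are used here because a mean/standard-deviation test is itself corrupted by the very outliers it is supposed to remove.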

  13. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of various species of fish is an important step toward uncovering how they propel themselves through the water. Previous methods have focused on profile capture or sparse 3D manual feature-point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on fish as they swim, in 3D, using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature-point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown, and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  14. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-grade students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  16. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two views, overlapped on the CCD, are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.

  17. Techniques for interactive 3-D scientific visualization

    SciTech Connect

    Glinert, E. P.; Blattner, M. M.; Becker, B. G.

    1990-09-24

    Interest in interactive 3-D graphics has exploded of late, fueled by (a) the allure of using scientific visualization to "go where no one has gone before" and (b) the development of new input devices which overcome some of the limitations imposed in the past by technology, yet which may be ill-suited to the kinds of interaction required by researchers active in scientific visualization. To resolve this tension, we propose a "flat 5-D" environment in which 2-D graphics are augmented by exploiting multiple human sensory modalities using cheap, conventional hardware readily available with personal computers and workstations. We discuss how interactions basic to 3-D scientific visualization, like searching a solution space and comparing two such spaces, are effectively carried out in our environment. Finally, we describe 3DMOVE, an experimental microworld we have implemented to test some of our ideas. 40 refs., 4 figs.

  18. Visualizing realistic 3D urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Aaron; Chen, Tuolin; Brunig, Michael; Schmidt, Hauke

    2003-05-01

    Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been around for a long time, but they provide only a schematic, line-drawing bird's-eye view and are sometimes confusing due to the lack of depth information. Early 3D systems were developed for very expensive graphics workstations, which seriously limited their availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.

  19. 3D planar representation of stereo depth images for 3DTV applications.

    PubMed

    Özkalaycı, Burak O; Alatan, A Aydın

    2014-12-01

    The depth modality of the multiview video plus depth (MVD) format is an active research area, whose main objective is to develop efficient compression methods friendly to depth-image-based rendering. As a part of this research, a novel 3D planar-based depth representation is proposed. The planar approximation of multiple depth images is formulated as an energy-based co-segmentation problem using a Markov random field model. The energy terms of this problem are designed to mimic the rate-distortion tradeoff of a depth compression application. A novel algorithm is developed for practical utilization of the proposed planar approximations in stereo depth compression. The co-segmented regions are also represented as layered planar structures, forming a novel single-reference MVD format. The ability of the proposed layered planar MVD representation to decouple texture and geometric distortions makes it a promising approach. The proposed 3D planar depth compression approaches are compared against state-of-the-art image/video coding standards by objective and visual evaluation, and yield competitive performance.

  20. Photorealistic 3D omni-directional stereo simulator

    NASA Astrophysics Data System (ADS)

    Reiners, Dirk; Cruz-Neira, Carolina; Neumann, Carsten

    2015-03-01

    While a lot of areas in VR have made significant advances, visual rendering in VR is often not quite keeping up with the state of the art. There are many reasons for this, but one way to alleviate some of the issues is by using ray tracing instead of rasterization for image generation. Contrary to popular belief, ray tracing is a realistic, competitive technology nowadays. This paper looks at the pros and cons of using ray tracing and demonstrates the feasibility of employing it using the example of a helicopter flight simulator image generator.

  1. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A scheme for simultaneously measuring the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge to the objects' contour edges simultaneously in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation. The true length of every spatial contour is computed as the boundary perimeter of the corresponding 3D object. An experiment measuring the bent-surface perimeters of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.
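
    The final step of the scheme, computing the true length of a reconstructed spatial contour, can be sketched with SciPy's parametric B-spline routines. This is an illustrative implementation assuming an ordered, closed contour; the paper's own interpolation details may differ.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    def contour_perimeter(points, samples=2000):
        """Length of a closed 3D contour interpolated by a cubic B-spline.

        points: (N, 3) ordered contour points; the last point should
        duplicate the first so the spline closes.
        """
        tck, _ = splprep(points.T, s=0, per=True, k=3)  # closed cubic spline
        u = np.linspace(0.0, 1.0, samples)
        curve = np.column_stack(splev(u, tck))          # sampled spline points
        seg = np.diff(curve, axis=0)                    # polyline segments
        return float(np.sqrt((seg ** 2).sum(axis=1)).sum())
    ```

    For a unit circle this returns a value very close to 2π, which is the kind of check the tube experiment above performs with known 20 mm spacings.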

  2. Online Stereo 3D Simulation in Studying the Spherical Pendulum in Conservative Force Field

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav S.

    2013-01-01

    The current paper aims at presenting a modern e-learning method and tool that is utilized in teaching physics in the universities. An online stereo 3D simulation is used for e-learning mechanics and specifically the teaching of spherical pendulum as part of the General Physics course for students in the universities. This approach was realized on…

  3. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidlhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information present in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. A 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
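
    The decision stage described above, a channelized Hotelling observer, can be sketched in a few lines of NumPy. This is the standard CHO formulation, not the authors' code: the channel matrix is left abstract (the study would use channels suited to the cyclopean view), and the Hotelling template is trained on sample signal-absent and signal-present images.

    ```python
    import numpy as np

    def cho_statistic(train_absent, train_present, test_images, channels):
        """Channelized Hotelling observer decision statistic.

        train_absent, train_present: (n, p) vectorized training images.
        test_images: (m, p) vectorized test images.
        channels: (p, c) channel matrix (columns are channel templates).
        """
        va = train_absent @ channels             # (n, c) channel outputs
        vp = train_present @ channels
        dv = vp.mean(axis=0) - va.mean(axis=0)   # mean signal in channel space
        S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
        w = np.linalg.solve(S, dv)               # Hotelling template
        return (test_images @ channels) @ w      # (m,) decision statistics
    ```

    Thresholding the returned statistic yields the binary present/absent decisions whose accuracy the study evaluates.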

  4. Digital mono- and 3D stereo-photogrammetry for geological and geomorphological mapping

    NASA Astrophysics Data System (ADS)

    Scapozza, Cristian; Schenker, Filippo Luca; Castelletti, Claudio; Bozzini, Claudio; Ambrosi, Christian

    2016-04-01

    Digital tools for managing, mapping and updating geological data have become widely accepted in the last decades. Despite the increasing quality and availability of digital topographical maps, orthorectified aerial photographs (orthophotos) and high-resolution (5 down to 0.5 m) Digital Elevation Models (DEMs), correctly recognizing the kind, nature and boundaries of geological formations and geomorphological landforms, unconsolidated sedimentary deposits or slope instabilities is often very difficult on conventional two-dimensional (2D) products, in particular in steep zones (rock walls and talus slopes), under forest cover, in very complex topography and in densely urbanised zones. In many cases, photo-interpretative maps drawn only from 2D data sets must be improved by field verification or, at least, by oblique field photographs. This is logical, because our natural perception of the real world is three-dimensional (3D), a perception that 2D visualization techniques partially disable. Here we present some examples of digital mapping based on 3D visualization (for photo-interpretation of aerial and satellite images) or on a terrestrial perspective using digital mono-photogrammetry (for oblique photographs). The 3D digital mapping was performed thanks to an extension of the ESRI® ArcGIS™ software called ArcGDS™. This methodology was also applied to historical aerial photographs (normally analysed by optical stereo-photogrammetry), which were digitized by scanning and then oriented and aero-triangulated with the ArcGDS™ software, allowing 3D visualisation and mapping in a GIS environment (Ambrosi and Scapozza, 2015). Mono-photogrammetry (or monoplotting) is the technique of photogrammetric georeferencing of single oblique unrectified photographs, which are related to a DEM. In other words, the monoplotting allows relating each pixel of the photograph to the

  5. 3D Visualization of Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Schaefer, John A.

    2014-01-01

    Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trail bird or aircraft, and efficiency improves. The major consequence of this is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, the aircraft are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.

  6. Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs

    NASA Astrophysics Data System (ADS)

    Coenen, M.; Rottensteiner, F.; Heipke, C.

    2017-05-01

    The detection and pose estimation of vehicles play an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle this problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: the vehicle detection step and the modelling step. For the detection, we make use of the 3D stereo information and incorporate geometric assumptions about vehicle-inherent properties in a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we are able to achieve satisfying detection results, with completeness and correctness values above 86%. By fitting an object-specific vehicle model to the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, our model-fitting approach uses a deformable 3D active shape model learned from 3D CAD vehicle data. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning the orientation estimation. The evaluation is done using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).

  7. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise three-dimensional structure and the number and type of defects in the polymer are important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft-materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance.
    While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) develop the instrumentation necessary for imaging materials

  8. Conformal 3D visualization for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Haker, Steven; Angenent, Sigurd; Tannenbaum, Allen R.; Kikinis, Ron

    2000-04-01

    In this paper, we propose a new 3D visualization technique for virtual colonoscopy. Such visualization methods could have a major impact since they have the potential for non-invasively determining the presence of polyps and other pathologies. We moreover demonstrate a method which presents a surface scan of the entire colon as a cine, and affords the viewer the opportunity to examine each point on the surface without distortion. We use the theory of conformal mappings from differential geometry in order to derive an explicit method for flattening surfaces obtained from 3D colon computerized tomography (CT) imagery. Indeed, we describe a general finite element method based on a discretization of the Laplace-Beltrami operator for flattening a surface onto the plane in an angle preserving manner. We also provide simple formulas which may be used in a real time cine to correct for distortion. We apply our method to 3D colon CT data provided to us by the Surgical Planning Laboratory of Brigham and Women's Hospital. We show how the conformal nature of the flattening function provides a flattened representation of the colon which is similar in appearance to the original. Finally, we indicate a few frames of a distortion correcting cine which can be used to examine the entire colon surface.
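
    The flattening idea, solving a discrete Laplace equation on the surface mesh with the boundary pinned to the plane, can be sketched as follows. For brevity this sketch uses uniform graph-Laplacian weights rather than the cotangent discretization of the Laplace-Beltrami operator used in the paper, so the result is harmonic but only approximately angle-preserving.

    ```python
    import numpy as np

    def harmonic_flatten(n_vertices, triangles, boundary):
        """Flatten a disk-like triangle mesh to the plane by solving the
        discrete Laplace equation with the boundary pinned to the unit
        circle.  Uniform (graph) weights are used here; an angle-preserving
        map would use cotangent Laplace-Beltrami weights instead.
        """
        L = np.zeros((n_vertices, n_vertices))
        for tri in triangles:                     # build the graph Laplacian
            for i in range(3):
                a, b = tri[i], tri[(i + 1) % 3]
                L[a, b] = L[b, a] = -1.0
        np.fill_diagonal(L, -L.sum(axis=1))       # diagonal = vertex degree
        t = np.linspace(0.0, 2.0 * np.pi, len(boundary), endpoint=False)
        uv = np.zeros((n_vertices, 2))
        uv[boundary, 0], uv[boundary, 1] = np.cos(t), np.sin(t)
        interior = np.setdiff1d(np.arange(n_vertices), boundary)
        A = L[np.ix_(interior, interior)]         # interior block
        rhs = -L[np.ix_(interior, boundary)] @ uv[boundary]
        uv[interior] = np.linalg.solve(A, rhs)    # harmonic interior positions
        return uv
    ```

    Replacing the -1 edge weights with cotangent weights computed from the 3D vertex positions would recover the angle-preserving finite element scheme the paper describes.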

  9. "Building" 3D visualization skills in mineralogy

    NASA Astrophysics Data System (ADS)

    Gaudio, S. J.; Ajoku, C. N.; McCarthy, B. S.; Lambart, S.

    2016-12-01

    Studying mineralogy is fundamental for understanding the composition and physical behavior of natural materials in terrestrial and extraterrestrial environments. However, some students struggle and ultimately get discouraged with mineralogy course material because they lack well-developed spatial visualization skills that are needed to deal with three-dimensional (3D) objects, such as crystal forms or atomic-scale structures, typically represented in two-dimensional (2D) space. Fortunately, spatial visualization can improve with practice. Our presentation demonstrates a set of experiential learning activities designed to support the development and improvement of spatial visualization skills in mineralogy using commercially available magnetic building tiles, rods, and spheres. These instructional support activities guide students in the creation of 3D models that replicate macroscopic crystal forms and atomic-scale structures in a low-pressure learning environment and at low cost. Students physically manipulate square and triangularly shaped magnetic tiles to build 3D open and closed crystal forms (platonic solids, prisms, pyramids and pinacoids). Prismatic shapes with different closing forms are used to demonstrate the relationship between crystal faces and Miller Indices. Silica tetrahedra and octahedra are constructed out of magnetic rods (bonds) and spheres (oxygen atoms) to illustrate polymerization, connectivity, and the consequences for mineral formulae. In another activity, students practice the identification of symmetry elements and plane lattice types by laying magnetic rods and spheres over wallpaper patterns. The spatial visualization skills developed and improved through our experiential learning activities are critical to the study of mineralogy and many other geology sub-disciplines. We will also present pre- and post- activity assessments that are aligned with explicit learning outcomes.

  10. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  11. Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji

    2017-02-01

    Because the view angle of an endoscope is narrow, it is difficult to capture the whole image of the digestive tract at once. If there are two or more lesions in the digestive tract, it is hard to understand the 3D positional relationship among them. Virtual endoscopy using CT is the present standard method for obtaining a whole view of the digestive tract. Because virtual endoscopy is designed to detect surface irregularity, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, 4) reconstruct the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal intervals of 20 mm. We captured sequential images, and the reconstructed image of the tube showed that the distance between circles was 20.2 +/- 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help us understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.
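
    Step 4 of the method, combining the per-frame 3D surface patterns using the estimated endoscope poses, amounts to transforming each patch into a common frame and concatenating. A minimal sketch, assuming each pose from step 3 is given as a rotation matrix R and translation t:

    ```python
    import numpy as np

    def stitch_clouds(clouds, poses):
        """Merge per-frame point clouds into one model.

        clouds: list of (Ni, 3) arrays, one per frame.
        poses: list of (R, t) pairs mapping each frame into a common
        world frame (R: 3x3 rotation, t: length-3 translation).
        """
        world = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
        return np.vstack(world)
    ```

    A real pipeline would additionally fuse overlapping regions rather than simply concatenating them.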

  12. Visual discomfort caused by color asymmetry in 3D displays

    NASA Astrophysics Data System (ADS)

    Chen, Zaiqing; Huang, Xiaoqiao; Tai, Yonghan; Shi, Junsheng; Yun, Lijun

    2016-10-01

    Color asymmetry is a common phenomenon in 3D displays, and it can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we have experimentally measured the CCDL for five sample color points selected from the CIE 1976 u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetry image pairs. In these image pairs, left and right circular patches were shifted horizontally on the image with five levels of disparity (0, ±60 and ±120 arc minutes) along six color directions. The experimental results showed that the CCDL for each sample point varied with the level of disparity and the color direction. The minimum CCDL is 0.019 Δu'v' and the maximum is 0.133 Δu'v'. The database collected in this study may help 3D system design and 3D content creation.
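
    The Δu'v' metric in which these CCDLs are expressed is the Euclidean distance in the CIE 1976 u'v' chromaticity diagram. A small sketch using the standard XYZ-to-u'v' conversion formulas:

    ```python
    import math

    def xyz_to_uv(X, Y, Z):
        """CIE 1931 XYZ -> CIE 1976 u'v' chromaticity coordinates."""
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    def delta_uv(u1, v1, u2, v2):
        """Euclidean color difference in the u'v' chromaticity diagram."""
        return math.hypot(u2 - u1, v2 - v1)
    ```

    Comparing `delta_uv` of the left- and right-eye stimuli against a CCDL in the 0.019 to 0.133 range is how the thresholds above would be applied.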

  13. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper presents the theory of polarization needed for this approach, describes the new stereo vision algorithm, and reports results on synthetic and real images to evaluate performance.
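
    The DOLP and AOP estimates described above follow from the linear Stokes parameters, which can be computed from intensity images taken at four polarizer orientations (0°, 45°, 90°, 135°). A minimal per-pixel sketch, assuming co-registered images:

    ```python
    import numpy as np

    def dolp_aop(i0, i45, i90, i135):
        """DOLP and AOP from intensities measured with the polarization
        filter at 0, 45, 90 and 135 degrees (linear Stokes parameters)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
        aop = 0.5 * np.arctan2(s2, s1)       # radians
        return dolp, aop
    ```

    The AOP map is what constrains each surface normal to a plane in the regularized stereo algorithm.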

  14. Perceptual biases and cue weighting in perception of 3D slant from texture and stereo information.

    PubMed

    Saunders, Jeffrey A; Chen, Zhongting

    2015-02-10

    Multiple cues are typically available for perceiving the 3D slant of surfaces, and slant perception has been used as a test case for investigating cue integration. Previous evidence suggests that texture and stereo slant cues contribute in an optimal Bayesian manner. We tested whether a Bayesian model could also account for perceptual underestimation of slant from texture. One explanation proposed by Todd, Christensen, and Guckes (2010) is that slant from texture is based on an inaccurate optical variable. An alternative Bayesian explanation is that perceptual underestimation is due to the influence of frontal cues and/or a frontal prior, which is weighted according to the reliability of slant cues. We measured slant perception using a hand-alignment task for conditions that provided only texture, only stereo, or combined texture and stereo cues. Slant estimates from monocular texture showed large biases toward frontal, with proportionally more underestimation at low slants than high slants. Slant estimates from stereo alone were more accurate, and adding texture information did not reduce accuracy. These results are consistent with a frontal influence that is decreasingly weighted as slant information becomes more reliable. We also included conditions with small cue conflicts to measure the relative weighting of texture and stereo cues. Consistent with previous studies, texture had a significant effect on slant estimates in binocular conditions, and the relative weighting of texture increased with slant. In some cases, perceived slant from combined stereo and texture cues was higher than from either cue in isolation. Both the perceptual biases and the cue weights were generally consistent with a Bayesian model that optimally integrates texture and stereo slant cues with frontal cues and/or a frontal prior. © 2015 ARVO.
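
    The optimal Bayesian integration referred to above is the standard inverse-variance (reliability-weighted) cue-combination rule: each cue is weighted by its reliability 1/σ², so a less reliable texture cue loses weight as stereo becomes more informative. A minimal sketch:

    ```python
    def combine_cues(estimates, variances):
        """Inverse-variance weighted (optimal Bayesian) cue combination.

        Each cue's weight is its reliability 1/variance, normalized over
        all cues; the fused variance is the reciprocal reliability sum.
        """
        reliabilities = [1.0 / v for v in variances]
        total = sum(reliabilities)
        weights = [r / total for r in reliabilities]
        fused = sum(w * e for w, e in zip(weights, estimates))
        return fused, weights, 1.0 / total
    ```

    A frontal prior can be folded in by treating it as one more "cue" with its own mean (zero slant) and variance, which is how it pulls estimates toward frontal when the image cues are unreliable.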

  15. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
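
    Locating degenerate points reduces to eigen-decomposing the symmetric tensor at each sample and testing for repeated eigenvalues. A minimal per-point sketch (the tolerance is an illustrative choice, not from the report):

    ```python
    import numpy as np

    def classify_tensor(T, tol=1e-9):
        """Eigenvalues of a symmetric 3x3 tensor and its degeneracy type.

        Degenerate points (repeated eigenvalues) are the singularities
        that organize the topology of a 3-D tensor field.
        """
        w = np.linalg.eigvalsh(T)  # eigenvalues in ascending order
        if abs(w[2] - w[0]) < tol:
            return w, "triple"     # all three eigenvalues equal
        if abs(w[1] - w[0]) < tol or abs(w[2] - w[1]) < tol:
            return w, "double"     # exactly two eigenvalues equal
        return w, "none"
    ```

    Scanning this test over a sampled field and tracing where the type changes is one way to extract the topological skeleton discussed above.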

  16. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
The program could serve as a core for building application programs for such systems.
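
    The depth-scaled, correlation-based template matching described above can be sketched as follows. This is a minimal illustration using normalized cross-correlation and nearest-neighbour template rescaling, not the flight code; all names are ours.

```python
import numpy as np

def scaled_template_match(image, template, depth_ratio):
    """Correlation-based template matching with depth scaling.

    The template is rescaled by depth_ratio (original template depth /
    new depth) before normalized cross-correlation, as in the adaptive
    view-based matching step described above.  Nearest-neighbour
    resampling keeps the sketch dependency-free.
    """
    th, tw = template.shape
    nh = max(1, int(round(th * depth_ratio)))
    nw = max(1, int(round(tw * depth_ratio)))
    # Nearest-neighbour rescale of the template.
    ys = (np.arange(nh) * th / nh).astype(int)
    xs = (np.arange(nw) * tw / nw).astype(int)
    t = template[np.ix_(ys, xs)].astype(float)
    t = t - t.mean()
    best, best_xy = -np.inf, (0, 0)
    H, W = image.shape
    # Exhaustive scan over the search window (the whole image here).
    for y in range(H - nh + 1):
        for x in range(W - nw + 1):
            w = image[y:y + nh, x:x + nw].astype(float)
            w = w - w.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            score = (t * w).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

A real tracker would restrict the scan to a window predicted from the pose change; the full-image loop here is only for clarity.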

  17. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia QT 4.X API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the QT 4 API.

  18. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.); the required infrastructure investment and its cost reserved it to large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile, lightweight planetariums, or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess the contributions compared to other display modes.

  19. Integration of multiple-baseline color stereo vision with focus and defocus analysis for 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Yuan, Ta; Subbarao, Murali

    1998-12-01

    A 3D vision system named SVIS has been developed for 3D shape measurement that integrates three methods: (i) multiple-baseline, multiple-resolution Stereo Image Analysis (SIA) that uses color image data, (ii) Image Defocus Analysis (IDA), and (iii) Image Focus Analysis (IFA). IDA and IFA are less accurate than stereo, but they do not suffer from the correspondence problem associated with stereo. A rough 3D shape is first obtained using IDA, and then IFA is used to obtain an improved estimate. The result is then used in SIA to solve the correspondence problem and obtain an accurate measurement of 3D shape. SIA is implemented using color images recorded at multiple baselines. Color images provide more information than monochrome images for stereo matching; therefore matching errors are reduced and the accuracy of the 3D shape is improved. Further improvements are obtained through multiple-baseline stereo analysis. First, short-baseline images are analyzed to obtain an initial estimate of 3D shape. In this step, stereo matching errors are low and computation is fast, since a shorter baseline results in lower disparities. The initial estimate of 3D shape is then used to match longer-baseline stereo images, which yields a more accurate estimate of 3D shape. The stereo matching step is implemented using a multiple-resolution matching approach to reduce computation: lower-resolution images are matched first, and the results are used in matching higher-resolution images. This paper presents the algorithms and experimental results of 3D shape measurements on SVIS for several objects. These results suggest a practical vision system for 3D shape measurement.
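
    The multiple-baseline idea, where a short-baseline disparity estimate (scaled by the baseline ratio) bounds the search over a longer-baseline pair, can be sketched roughly as follows; the SAD block matching and all parameter values are illustrative, not SVIS's implementation.

```python
import numpy as np

def sad_disparity(left, right, x, y, win=2, d_range=(0, 16)):
    """Block-matching disparity at pixel (x, y) of a rectified pair.

    Returns the disparity d in d_range minimizing the sum of absolute
    differences between a (2*win+1)^2 patch in `left` and the patch
    shifted left by d in `right`.
    """
    lo, hi = d_range
    patch_l = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best_d, best_cost = lo, np.inf
    for d in range(lo, hi + 1):
        if x - d - win < 0:          # candidate patch leaves the image
            break
        patch_r = right[y - win:y + win + 1,
                        x - d - win:x - d + win + 1].astype(float)
        cost = np.abs(patch_l - patch_r).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def refine_long_baseline(left, right_long, x, y, d_short, ratio, slack=2):
    """Multiple-baseline step: the short-baseline disparity, scaled by
    the baseline ratio, bounds the long-baseline search window."""
    guess = int(round(d_short * ratio))
    return sad_disparity(left, right_long, x, y,
                         d_range=(max(0, guess - slack), guess + slack))
```

The narrowed search window is what keeps the long-baseline matching both fast and unambiguous, mirroring the coarse-to-fine strategy in the abstract.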

  20. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  1. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  2. 3-D Visualizations At (Almost) No Expense

    NASA Astrophysics Data System (ADS)

    Sedlock, R. L.

    2003-12-01

    Like most teaching-oriented public universities, San José State University (part of the California State University system) currently faces severe budgetary constraints. These circumstances prohibit the construction of one or more Geo-Walls on-campus. Nevertheless, the Department of Geology has pursued alternatives that enable our students to benefit from 3-D visualizations such as those used with the Geo-Wall. This experience - a sort of virtual virtuality - depends only on the availability of a computer lab and an optional plotter. Starting in June 2003, we have used the methods described here with two diverse groups of participants: middle- and high-school teachers taking professional development workshops through grants funded by NSF and NASA, and regular university students enrolled in introductory earth science and geology laboratory courses. We use two types of three-dimensional images with our students: visualizations from the on-line Gallery of Virtual Topography (Steve Reynolds), and USGS digital topographic quadrangles that have been transformed into anaglyph files for viewing with 3-D glasses. The procedure for transforming DEMs into these anaglyph files, developed by Paul Morin, is available at http://geosun.sjsu.edu/~sedlock/anaglyph.html. The resulting images can be used with students in one of two ways. First, maps can be printed on a suitable plotter, laminated (optional but preferable), and used repeatedly with different classes. Second, the images can be viewed in school computer labs or by students on their own computers. Chief advantages of the plotter option are (1) full-size maps (single or tiled) viewable in their entirety, and (2) dependability (independent of Internet connections and electrical power). Chief advantages of the computer option are (1) minimal preparation time and no other needed resources, assuming a computer lab with Internet access, and (2) students can work with the images outside of regularly scheduled courses. Both
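
    The final step of turning two views into a single red-cyan image can be sketched as below; this is a generic anaglyph composition, not Morin's actual DEM-processing procedure.

```python
import numpy as np

def red_cyan_anaglyph(left_gray, right_gray):
    """Combine two grayscale views into a red-cyan anaglyph.

    The left view feeds the red channel and the right view feeds the
    green and blue channels, so red/cyan glasses route one view to
    each eye.  Inputs are 2D arrays in [0, 1]; output is an (H, W, 3)
    RGB array.
    """
    left = np.clip(np.asarray(left_gray, dtype=float), 0.0, 1.0)
    right = np.clip(np.asarray(right_gray, dtype=float), 0.0, 1.0)
    return np.dstack([left, right, right])
```

For a DEM, the two input views would be shaded renderings from two slightly offset viewpoints; generating those renderings is the part this sketch leaves out.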

  3. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
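
    A minimal 2D sketch of the texture-advection idea (the paper's contribution is its extension to 3D and 4D textures, which this toy example does not cover):

```python
import numpy as np

def advect_texture(tex, u, v, dt=1.0):
    """One semi-Lagrangian advection step of a 2D texture.

    Each output pixel looks up the texture at the position a particle
    would have come from (backtracing along the flow field (u, v)),
    with nearest-neighbour sampling and clamped borders.
    """
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - dt * u).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - dt * v).astype(int), 0, h - 1)
    return tex[src_y, src_x]
```

Animating repeated steps of this lookup over a noise texture is what produces the streaking patterns used to visualize the flow.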

  4. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408

  5. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
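
    The final stereo triangulation step, for a rectified pair as in the abstract, can be sketched as follows; the pinhole model and parameter names are the textbook ones, not the authors' code.

```python
import numpy as np

def triangulate_rectified(x_l, x_r, y, f, baseline, cx=0.0, cy=0.0):
    """Triangulate a 3D point from a rectified stereo match.

    For a rectified pair with focal length f (in pixels) and the given
    baseline (in metres), the disparity d = x_l - x_r gives depth
    Z = f * baseline / d, from which X and Y follow by back-projection
    through the left camera with principal point (cx, cy).
    """
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive")
    Z = f * baseline / d
    X = (x_l - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])
```

Applying this to the matched end-effector locations frame by frame yields the 3D trajectory the abstract describes.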

  6. Comparison of interferometric and stereo-radargrammetric 3D metrics in mapping of forest resources

    NASA Astrophysics Data System (ADS)

    Karila, K.; Karjalainen, M.; Yu, X.; Vastaranta, M.; Holopainen, M.; Hyyppa, J.

    2015-04-01

    Accurate forest resources maps are needed in diverse applications ranging from local forest management to global climate change research. In particular, it is important to have tools to map changes in forest resources, which helps us to understand the significance of forest biomass changes in the global carbon cycle. In the task of mapping changes in forest resources over wide areas, Earth Observing satellites could play a key role. In 2013, an EU/FP7-Space funded project "Advanced_SAR" was started with the main objective to develop novel forest resources mapping methods based on the fusion of satellite-based 3D measurements and in-situ field measurements of forests. During summer 2014, an extensive field surveying campaign was carried out in the Evo test site, Southern Finland. Forest inventory attributes of mean tree height, basal area, mean stem diameter, stem volume, and biomass were determined for 91 test plots of 32 by 32 meters (1024 m²). Simultaneously, a comprehensive set of satellite and airborne data was collected. The satellite data also included a set of TanDEM-X (TDX) and TerraSAR-X (TSX) X-band synthetic aperture radar (SAR) images, suitable for interferometric and stereo-radargrammetric processing to extract 3D elevation data representing the forest canopy. In the present study, we compared the accuracy of TDX InSAR and TSX stereo-radargrammetric derived 3D metrics in forest inventory attribute prediction. First, 3D data were extracted from TDX and TSX images. Then, the 3D data were processed as elevations above the ground surface (forest canopy height values) using an accurate Digital Terrain Model (DTM) based on an airborne laser scanning survey. Finally, 3D metrics were calculated from the canopy height values for each test plot, and the 3D metrics were compared with the field reference data. The Random Forest method was used in the forest inventory attribute prediction. Based on the results, InSAR showed slightly better
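
    The step of turning canopy height values into plot-level 3D metrics can be sketched roughly as follows; the specific metrics and names are illustrative assumptions, not the study's exact feature set.

```python
import numpy as np

def canopy_metrics(point_z, dtm_z):
    """Plot-level canopy height metrics from 3D elevations and a DTM.

    Heights above ground are point elevations minus the terrain
    elevation at each point; the returned mean, maximum, and height
    percentiles are typical of the metrics fed to a Random Forest
    predictor in studies like the one above.
    """
    h = np.asarray(point_z, float) - np.asarray(dtm_z, float)
    h = np.clip(h, 0.0, None)          # negative heights treated as ground
    return {
        "h_mean": h.mean(),
        "h_max": h.max(),
        "h_p50": np.percentile(h, 50),
        "h_p90": np.percentile(h, 90),
    }
```

One such metric dictionary per 32 m plot, paired with the field-measured inventory attributes, forms the training data for the prediction step.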

  7. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of its stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. A NURBS-skeleton is used to extract the skeleton in both views. The affine invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point with radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
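
    The sphere-filling growing process can be sketched as a brute-force voxel union; the skeleton points, radii (distance-field values) and the grid are assumed given.

```python
import numpy as np

def fill_spheres(skeleton_pts, radii, grid_shape):
    """Reconstruct a volume as the union of spheres along a skeleton.

    Each skeleton point carries a radius equal to its distance-field
    value (smallest distance to the object boundary); voxels within
    that radius of any skeleton point are marked occupied.
    """
    zz, yy, xx = np.indices(grid_shape)
    vol = np.zeros(grid_shape, dtype=bool)
    for (cz, cy, cx), r in zip(skeleton_pts, radii):
        vol |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return vol
```

With a densely sampled curve-skeleton, overlapping spheres merge into a smooth solid whose surface stays tangent to the recovered boundary.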

  8. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot easily be automated because the shapes of small nursery plants are not stable. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using the relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  9. Stereo visualization in the ground segment tasks of the science space missions

    NASA Astrophysics Data System (ADS)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission; its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its outstanding feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for the ground segment's software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Mostly, 2D and 3D graphics are used for visualization of the data being processed, a consequence of the capabilities of traditional visualization tools. Stereo visualization methods are also used actively in solving some tasks, but their usage is usually limited to tasks such as visualization of virtual and augmented reality, remote sensing data processing, and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware; recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly stereo visualization of complex physical processes as well as mathematical abstractions and models. The article is concerned with an attempt to use this approach.
It describes the details and problems of using stereo visualization (the page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements, and also in the development of software for manual stereo matching.

  10. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    When it comes to interpretation of data and understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., new improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D/4D visualization infrastructure and 3D/4D modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus will be the linking of research, education and industry, the integration of multi-disciplinary data, and the visualization of the data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in model visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, but also external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  11. Research on 3D reconstruction measurement and parameter of cavitation bubble based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Shengyong; Ai, Xiaochuan; Wu, Ronghua; Cao, Jing

    2017-02-01

    Cavitation bubbles cause many adverse effects on ship propellers and on hydraulic machinery and equipment. In order to research the production mechanism of cavitation bubbles under different conditions, fine measurement and analysis of cavitation bubble zone parameters is indispensable. Because the texture features of cavitation bubbles are unclear, transparent, and difficult to capture, this paper adopts a non-contact optical measurement method and autonomously constructs a binocular stereo vision measurement system adapted to these characteristics. 3D imaging measurement of the cavitation bubbles uses composite dynamic lighting, and 3D reconstruction of the cavitation bubble region yields more accurate characteristic parameters. Test results show that this fine-measurement technique can obtain and analyze the cavitation bubble region and its instability.

  12. 3D reconstruction in laparoscopy with close-range photometric stereo.

    PubMed

    Collins, Toby; Bartoli, Adrien

    2012-01-01

    In this paper we present the first solution to 3D reconstruction in monocular laparoscopy using methods based on Photometric Stereo (PS). Our main contributions are to provide the new theory and practical solutions to successfully apply PS in close-range imaging conditions. We are specifically motivated by a solution with minimal hardware modification to existing laparoscopes. In fact the only physical modification we make is to adjust the colour of the laparoscope's illumination via three colour filters placed at its tip. Once calibrated, our approach can compute 3D from a single image, does not require correspondence estimation, and computes absolute depth densely. We demonstrate the potential of our approach with ground truth ex-vivo and in-vivo experimentation.
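
    For contrast with the close-range method above, the classical far-light Lambertian photometric stereo that it builds on can be sketched as follows; this textbook model does not capture the paper's close-range extension or its colour-filter setup.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover surface normals from images under known distant lights.

    Classical Lambertian photometric stereo: each pixel's intensities
    I satisfy I = albedo * (L @ n), so albedo * n is the least-squares
    solution of L x = I per pixel.
    """
    L = np.asarray(light_dirs, float)            # (k, 3) light directions
    I = np.stack([img.ravel() for img in images])  # (k, npix)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, npix) = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-12), 0.0)
    h, w = images[0].shape
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

The paper's contribution is making a variant of this work when the lights sit millimetres from the surface at the laparoscope tip, where the distant-light assumption baked into `L` above breaks down.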

  13. Multimodal human verification using stereo-based 3D information, IR, and speech

    NASA Astrophysics Data System (ADS)

    Park, Changhan

    2007-04-01

    In this paper, we propose a personal verification method using 3D face information, infrared (IR) imagery, and speech to improve on the rate of single-biometric authentication. The false acceptance rate (FAR) and false rejection rate (FRR) have been a fundamental bottleneck of real-time personal verification. The proposed method uses principal component analysis (PCA) for face recognition and a hidden Markov model (HMM) for speech recognition, based on a stereo acquisition system with IR imagery. The 3D face information, namely the face's depth and distance, is acquired using the stereo system. The proposed system consists of eye detection, facial pose direction estimation, and PCA modules. An IR image of the human face presents its unique heat signature and can be used for recognition; here, IR images are used only to decide whether the input is a human face. Fuzzy logic is used for the final decision of personal verification. Experimental results show that the proposed system can reduce the FAR, which indicates that the proposed method overcomes the limitation of single-biometric systems and provides stable person authentication in real time.

  14. Spacetime Stereo and 3D Flow via Binocular Spatiotemporal Orientation Analysis.

    PubMed

    Sizintsev, Mikhail; Wildes, Richard P

    2014-11-01

    This paper presents a novel approach to recovering estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. The approach is based on matching spatiotemporal orientation distributions between left and right temporal image streams, which encapsulates both local spatial and temporal structure for disparity estimation. By capturing spatial and temporal structure in this unified fashion, both sources of information combine to yield disparity estimates that are naturally temporally coherent, while helping to resolve matches that might be ambiguous when either source is considered alone. Further, by allowing subsets of the orientation measurements to support different disparity estimates, an approach to recovering multilayer disparity from spacetime stereo is realized. Similarly, the matched distributions allow for direct recovery of dense, robust estimates of 3D scene flow. The approach has been implemented with real-time performance on commodity GPUs using OpenCL. Empirical evaluation shows that the proposed approach yields qualitatively and quantitatively superior estimates in comparison to various alternative approaches, including the ability to provide accurate multilayer estimates in the presence of (semi)transparent and specular surfaces.

  15. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use the fast algorithm of the midpoint method to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
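
    A generic first-order error-propagation sketch in the same spirit is shown below: a numerical Jacobian of a simplified rectified-pair triangulation, propagated with isotropic pixel noise. It is not the paper's five-parameter midpoint analysis; all parameter values are illustrative.

```python
import numpy as np

def triangulate(x_l, x_r, y, f=500.0, baseline=0.1):
    """Rectified-pair triangulation, reduced to three pixel measurements."""
    d = x_l - x_r
    Z = f * baseline / d
    return np.array([x_l * Z / f, y * Z / f, Z])

def point_covariance(x_l, x_r, y, pixel_sigma=0.5, eps=1e-4, **kw):
    """First-order propagation of pixel noise to the 3D point.

    Builds the Jacobian of the triangulation numerically (forward
    differences) and returns J @ Sigma @ J.T for isotropic pixel
    noise of standard deviation pixel_sigma.
    """
    p0 = np.array([x_l, x_r, y], float)
    J = np.zeros((3, 3))
    for i in range(3):
        dp = p0.copy()
        dp[i] += eps
        J[:, i] = (triangulate(*dp, **kw) - triangulate(*p0, **kw)) / eps
    return J @ (pixel_sigma ** 2 * np.eye(3)) @ J.T
```

The resulting covariance matrix describes the uncertainty ellipsoid of the point location; note how the depth variance grows as disparity shrinks, the effect the loose intersection bounds overstate.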

  16. Novel Approaches in 3D Sensing, Imaging, and Visualization

    NASA Astrophysics Data System (ADS)

    Schulein, Robert; Daneshpanah, M.; Cho, M.; Javidi, B.

    Three-dimensional (3D) imaging systems are being researched extensively for purposes of sensing and visualization in fields as diverse as defense, medical, art, and entertainment. When compared to traditional 2D imaging techniques, 3D imaging offers advantages in ranging, robustness to scene occlusion, and target recognition performance. Amongst the myriad 3D imaging techniques, 3D multiperspective imaging technologies have received recent attention due to the technologies' relatively low cost, scalability, and passive sensing capabilities. Multiperspective 3D imagers collect 3D scene information by recording 2D intensity information from multiple perspectives, thus retaining both ray intensity and angle information. Three novel developments in 3D sensing, imaging, and visualization systems are presented: 3D imaging with axially distributed sensing, 3D optical profilometry, and occluded 3D object tracking.

  17. Workbench for 3D target detection and recognition from airborne motion stereo and ladar imagery

    NASA Astrophysics Data System (ADS)

    Roy, Simon; Se, Stephen; Kotamraju, Vinay; Maheux, Jean; Nadeau, Christian; Larochelle, Vincent; Fournier, Jonathan

    2010-04-01

    3D imagery has a well-known potential for improving situational awareness and battlespace visualization by providing enhanced knowledge of uncooperative targets. This potential arises from the numerous advantages that 3D imagery has to offer over traditional 2D imagery, thereby increasing the accuracy of automatic target detection (ATD) and recognition (ATR). Despite advancements in both 3D sensing and 3D data exploitation, 3D imagery has yet to demonstrate a true operational gain, partly due to the processing burden of the massive dataloads generated by modern sensors. In this context, this paper describes the current status of a workbench designed for the study of 3D ATD/ATR. Among the project goals is the comparative assessment of algorithms and 3D sensing technologies given various scenarios. The workbench is comprised of three components: a database, a toolbox, and a simulation environment. The database stores, manages, and edits input data of various types such as point clouds, video, still imagery frames, CAD models and metadata. The toolbox features data processing modules, including range data manipulation, surface mesh generation, texture mapping, and a shape-from-motion module to extract a 3D target representation from video frames or from a sequence of still imagery. The simulation environment includes synthetic point cloud generation, 3D ATD/ATR algorithm prototyping environment and performance metrics for comparative assessment. In this paper, the workbench components are described and preliminary results are presented. Ladar, video and still imagery datasets collected during airborne trials are also detailed.

  18. New computational control techniques and increased understanding for stereo 3-D displays

    NASA Technical Reports Server (NTRS)

    Williams, Steven P.; Parrish, Russell V.

    1990-01-01

    While conventional asymptotic transformations for mapping a visual scene onto a stereo viewing volume allow a single, specific scene-distance to be fixed at the screen location, the present piecewise linear approach allows creative partitioning of the depth viewing volume and affords the freedom to place depth-cueing emphasis wherever desired. Attention is given to the results of an experiment with the novel system which attempted to ascertain the effective region of stereopsis cueing. A practical viewing volume falls between -25 and +60 percent of the viewer-to-screen distance. The data indicate that increased viewer-to-CRT distances furnish increasing usable depth.
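
    The piecewise linear mapping from scene distance to display depth can be sketched as below; the breakpoint values are illustrative, chosen to fit the practical viewing volume of -25% to +60% of the viewer-to-screen distance mentioned above.

```python
import numpy as np

def piecewise_depth_map(scene_dist, knots_scene, knots_display):
    """Piecewise-linear mapping of scene distance to display depth.

    knots_scene / knots_display are matching breakpoints, with display
    depth expressed as a fraction of the viewer-to-screen distance
    (negative values are in front of the screen).  Each region between
    knots gets its own linear slope, letting depth-cueing emphasis be
    placed where desired.
    """
    return np.interp(scene_dist, knots_scene, knots_display)

# Illustrative knots: most of the viewing volume is spent on the
# nearest 10 m of scene distance, compressing everything farther away.
knots_scene = [0.0, 10.0, 100.0]
knots_display = [-0.25, 0.30, 0.60]
```

A conventional asymptotic transformation would replace these knots with a single fixed curve, which is exactly the restriction the abstract's approach removes.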

  19. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.

  20. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of the fracture network system in 3D. The developed modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, were built using Microsoft Visual Basic.NET and the Visualization ToolKit (VTK) open-source library. Two case studies revealed that the modules respectively handle construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps, and management of borehole data. The developed software for analysis and visualization of the 3D fractured rock mass can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  1. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    different real video sequences of large-scale 3D scenes to show the accuracy and effectiveness of the representation. Applications include airborne or ground...a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect...stereo mosaics of static scenes. These results are mainly presented in Sections 3 and 4. Second, an effective and efficient patch-based stereo

  2. 3-D sensor using relative stereo method for bio-seedlings transplanting system

    NASA Astrophysics Data System (ADS)

    Hiroyasu, Takehisa; Hayashi, Jun'ichiro; Hojo, Hirotaka; Hata, Seiji

    2005-12-01

    In plant factories producing clone seedlings, most of the production processes are highly automated, but the transplanting of small seedlings is difficult to automate because their shapes are not stable, and handling them requires observing each seedling's shape. Here, a 3-D vision system for a robot used in the transplanting process of a plant factory is introduced. The system employs a relative stereo method together with slit-light measurement; it can detect the shape of small seedlings and decide the cutting point. In this paper, the structure of the vision system and its image processing method are explained.

  3. Graphics for Stereo Visualization Theater for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Antipuesto, Joel; Reid, Lisa (Technical Monitor)

    1998-01-01

    The Stereo Visualization Theater is a high-resolution graphics demonstration that provides a review of current research being performed at NASA. Using stereoscopic projection, multiple participants can explore scientific data in new ways. The pre-processed audio and video are played in real time from a workstation. A stereo graphics filter on the projector and passive polarized glasses worn by audience members create the stereo effect.

  4. 3D visualization of port simulation.

    SciTech Connect

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three dimensional visualization technology can be applied to large scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners not only in the use of the simulation model but on the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and efforts by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  5. 3D foveated visualization on the Web

    NASA Astrophysics Data System (ADS)

    Schermann, John; Barron, John L.; Gargantini, Irene A.

    2000-12-01

    Recent developments in Internet technology, combined with the computerization of hospital radiology departments, allow the remote viewing of medical image data. Generally, however, medical images are data intensive and the transmission of such images over a network can consume large amounts of network resources. Previous work by Liptay et al. presented an interactive, progressive program (implemented in JAVA and requiring a web browser) that allowed the transmission of multi-resolution JPEG image data using various ROI (Region of Interest) strategies in order to minimize Internet bandwidth requirements. That work handled both 2D and 3D image data, but 3D data was treated as a sequence of 2D images, where each 2D image had to be individually requested by the system. The work described in this paper replaces the representation of 3D data as a 2D JPEG image sequence with a single block of lossy 3D image data compressed using wavelets. In a similar fashion, 2D image data is wavelet compressed. Wavelet decomposition has been shown to have consistently better image quality at high compression ratios than other lossy compression methods. We use wavelet compression in a JAVA application program on the server side to construct a lossy low resolution version of the data. The JAVA application also creates high resolution difference sub-blocks; a difference sub-block combined with the corresponding low resolution data reconstructs the lossless data. Transmitting the low resolution image and difference sub-blocks (as requested) requires only a small fraction of the network bandwidth that would otherwise be needed to transmit the entire lossless data set. The user, via a JAVA applet on the client side, is provided with a number of methods to choose a trajectory (sequence) of regions of interest in the low resolution image. 
Once the region(s) of interest are chosen, the sub-blocks of image data in the various trajectories are then retrieved and integrated into the low
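    The low-resolution-plus-residual scheme can be sketched in a few lines. This toy version uses simple 2x2 block averaging in place of the paper's wavelet transform, and all function names are illustrative; the point is only that a coarse image plus a per-ROI difference sub-block recovers the region losslessly.

```python
import numpy as np

def make_low_res(img):
    """Average 2x2 blocks: the small image transmitted first."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(low):
    """Nearest-neighbour upsampling back to full resolution."""
    return np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

def difference_block(img, low, y0, y1, x0, x1):
    """Residual for one region of interest, sent only on request."""
    return img[y0:y1, x0:x1] - upsample(low)[y0:y1, x0:x1]

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a slice
low = make_low_res(img)                          # 16 values instead of 64
res = difference_block(img, low, 2, 6, 2, 6)     # ROI residual

# The client reconstructs the ROI exactly from the two transmissions:
roi = upsample(low)[2:6, 2:6] + res
assert np.array_equal(roi, img[2:6, 2:6])
```

    A wavelet decomposition plays the same role as the block average here, but with better image quality in the lossy preview at high compression ratios.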

  6. 3-D visualization of geologic structures and processes

    NASA Astrophysics Data System (ADS)

    Pflug, R.; Klein, H.; Ramshorn, Ch.; Genter, M.; Stärk, A.

    Interactive 3-D computer graphics techniques are used to visualize geologic structures and simulated geologic processes. Geometric models that serve as input to 3-D viewing programs are generated from contour maps, from serial sections, or directly from simulation program output. Choice of viewing parameters strongly affects the perception of irregular surfaces. An interactive 3-D rendering program and its graphical user interface provide visualization tools for structural geology, seismic interpretation, and visual post-processing of simulations. Dynamic display of transient ground-water simulations and sedimentary process simulations can visualize processes developing through time.

  7. Visual discomfort prediction for stereo contents

    NASA Astrophysics Data System (ADS)

    He, Shan; Zhang, Tao; Doyen, Didier

    2011-03-01

    The current renaissance of 3D movies has drawn increasing attention from audiences, and three-dimensional television (3DTV) is expected to be the next advance in television. Studies have shown that different people have different comfortable depth ranges for 3D content. This is especially true in the 3DTV scenario, where the much smaller screen sizes and viewing distances of a home setup, compared to a theater, place more restrictions on the 3D content fed into the 3DTV. As a result, one version of 3D content sent to the home will not satisfy everyone in a family. In this paper, we address this problem by predicting the viewing discomfort of given input content for a given viewer. Our method is based on a Disparity Discomfort Profile (DDP) built through a subjective test for each viewer. The input content is analyzed by studying its disparity distribution, and discomfort is predicted by matching that distribution against the viewer's DDP. A mechanism then allows viewers to adjust the depth range according to their visual comfort profile or viewing preference so as to minimize discomfort. Experiments show promising results for the proposed method.
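    The matching step can be sketched as a histogram weighted by the viewer's profile. This is a hedged illustration, not the paper's actual scoring function: the bin edges, profile values and function names below are invented for the example.

```python
import numpy as np

def discomfort_score(disparities, ddp_bins, ddp_discomfort):
    """Expected discomfort: histogram of pixel disparities weighted by
    the per-bin discomfort the viewer reported in the subjective test."""
    hist, _ = np.histogram(disparities, bins=ddp_bins)
    weights = hist / hist.sum()
    return float(np.dot(weights, ddp_discomfort))

# Hypothetical DDP: comfortable near zero disparity, worse at extremes.
bins = np.array([-60, -30, -10, 10, 30, 60])      # disparity, pixels
profile = np.array([0.9, 0.4, 0.0, 0.3, 0.8])     # 0 = fully comfortable

shallow = np.random.default_rng(0).normal(0, 5, 10_000)   # near screen
deep = np.random.default_rng(0).normal(25, 10, 10_000)    # strong depth
print(discomfort_score(shallow, bins, profile),
      discomfort_score(deep, bins, profile))
```

    Content whose disparity mass falls in a viewer's uncomfortable bins scores higher, which is the signal used to trigger depth-range adjustment.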

  8. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and this number tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the visual quality of the patient. In the majority of cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. One of the main diagnostic challenges is detecting the disease at all, because no symptoms are present in the initial stage; by the time it is detected, it is already at an advanced stage. Currently the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to capture 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and nonmydriatic modes.

  9. Visualization of 3D Geological Models on Google Earth

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

    Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) codes to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties. [Figure: visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model and (d) 3D grid model of the Grosmont formation on Google Earth]

  10. Visualization of 3D Geological Data using COLLADA and KML

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Um, Jeong-Gi; Park, Myong-Ho

    2013-04-01

    This study presents a method to visualize 3D geological data using COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) and Keyhole Markup Language (KML, the XML-based scripting language of Google Earth). We used COLLADA files to represent different 3D geological data such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). The COLLADA files were imported into the 3D render window of Google Earth using KML codes. An application to the Grosmont formation in Alberta, Canada showed that the combination of COLLADA and KML enables Google Earth to visualize 3D geological structures and properties.
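    The KML side of the approach is small enough to sketch. The snippet below generates a minimal KML placemark whose `<Model>` element links a COLLADA (.dae) file, following the standard KML 2.2 schema; the coordinates, name and file name are placeholders, not values from the study.

```python
# Generate a minimal KML document that places a COLLADA model in
# Google Earth. Placeholder inputs; a real geological model would
# supply its own .dae file and georeferenced location.

def collada_placemark(name, dae_href, lon, lat, alt=0.0):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <altitudeMode>absolute</altitudeMode>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae_href}</href></Link>
    </Model>
  </Placemark>
</kml>"""

kml = collada_placemark("fence_section", "fence_section.dae", -112.8, 54.9)
print(kml)
```

    Saving this text as a `.kml` file alongside the referenced `.dae` file (or zipping both into a `.kmz`) is enough for Google Earth to load the mesh at the given location.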

  11. Dependence of the Peak Fluxes of Solar Energetic Particles on CME 3D Parameters from STEREO and SOHO

    NASA Astrophysics Data System (ADS)

    Park, Jinhye; Moon, Y.-J.; Lee, Harim

    2017-07-01

    We investigate the relationships between the peak fluxes of 18 solar energetic particle (SEP) events and the associated coronal mass ejection (CME) 3D parameters (speed, angular width, and separation angle) obtained from SOHO and STEREO-A/B for the period from 2010 August to 2013 June. We apply the STEREO CME Analysis Tool (StereoCAT) to the SEP-associated CMEs to obtain 3D speeds and 3D angular widths. The separation angles are determined as the longitudinal angles between the flaring regions and the magnetic footpoints of the spacecraft, which are calculated under the assumption of a Parker spiral field. The main results are as follows. (1) The dependence of the SEP peak fluxes on CME 3D speed from multiple spacecraft is similar to that on CME 2D speed. (2) There is a positive correlation between SEP peak flux and 3D angular width from multiple spacecraft, which is much more evident than the relationship between SEP peak flux and 2D angular width. (3) There is a noticeable anti-correlation (r = -0.62) between SEP peak flux and separation angle. (4) A multiple-regression analysis between SEP peak fluxes and CME 3D parameters shows that the longitudinal separation angle is the most important parameter for SEP peak flux, with the CME 3D speed second.
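    The Parker-spiral footpoint calculation behind the separation angle can be sketched as follows. The winding angle of a field line from the Sun to heliocentric distance r is Omega * r / v_sw; function names and the 400 km/s solar-wind speed are illustrative choices, not values from the paper, and the sign convention for the footpoint shift varies between analyses.

```python
import math

OMEGA_SUN = 2.86e-6   # solar sidereal rotation rate, rad/s
AU = 1.496e11         # astronomical unit, m

def footpoint_shift_deg(v_sw_m_per_s, r_m=AU):
    """Angle (degrees) the Parker spiral winds through between the
    Sun and distance r, for a radial solar-wind speed v_sw."""
    return math.degrees(OMEGA_SUN * r_m / v_sw_m_per_s)

def separation_angle_deg(flare_lon_deg, spacecraft_lon_deg, v_sw_km_s):
    """Longitudinal separation between a flare site and the magnetic
    footpoint of a spacecraft, wrapped into [0, 180] degrees."""
    footpoint = spacecraft_lon_deg + footpoint_shift_deg(v_sw_km_s * 1e3)
    return abs((flare_lon_deg - footpoint + 180) % 360 - 180)

# A 400 km/s wind gives the familiar ~60 degree footpoint offset at 1 AU:
print(round(footpoint_shift_deg(400e3), 1))
```

    Faster wind winds the spiral less, so the footpoint sits closer to the spacecraft's own longitude; this speed dependence is why the observed solar-wind speed enters the separation-angle determination.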

  12. 3-D visualization in biomedical applications.

    PubMed

    Robb, R A

    1999-01-01

    Visualizable objects in biology and medicine extend across a vast range of scale, from individual molecules and cells through the varieties of tissue and interstitial interfaces to complete organs, organ systems, and body parts. These objects include functional attributes of these systems, such as biophysical, biomechanical, and physiological properties. Visualization in three dimensions of such objects and their functions is now possible with the advent of high-resolution tomographic scanners and imaging systems. Medical applications include accurate anatomy and function mapping, enhanced diagnosis, accurate treatment planning and rehearsal, and education/training. Biologic applications include study and analysis of structure-to-function relationships in individual cells and organelles. The potential for revolutionary innovation in the practice of medicine and in biologic investigations lies in direct, fully immersive, real-time multisensory fusion of real and virtual information data streams into online, real-time visualizations available during actual clinical procedures or biological experiments. Current high-performance computing, advanced image processing, and high-fidelity rendering capabilities have facilitated major progress toward realization of these goals. With these advances in hand, there are several important applications of three-dimensional visualization that will have a significant impact on the practice of medicine and on biological research.

  13. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not be addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D-camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For remotely located experts using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high-quality radiation therapy into challenging environments. PMID:23440605

  14. Advanced 3D visualization in student-centred medical education.

    PubMed

    Silén, Charlotte; Wirell, Staffan; Kvist, Joanna; Nylander, Eva; Smedby, Orjan

    2008-06-01

    Healthcare students have difficulties achieving a conceptual understanding of 3D anatomy, and misconceptions about physiological phenomena are persistent and hard to address. 3D visualization has improved the possibilities for facilitating understanding of complex phenomena. A project was carried out in which high-quality 3D visualizations, using high-resolution CT and MR images from clinical research, were developed for educational use. Instead of standard stacks of slices (original or multiplanar reformatted), volume-rendering images in the QuickTime VR format, which enables students to interact intuitively, were included. Based on the learning theories underpinning problem-based learning, the 3D visualizations were implemented in the existing curricula of the medical and physiotherapy programs. The images/films were used in lectures, demonstrations and tutorial sessions, and self-study material was also developed. The aims were to support learning efficacy by developing and using 3D datasets in regular healthcare curricula, and to enhance knowledge about the possible educational value of 3D visualizations in learning anatomy and physiology. Questionnaires were used to investigate the medical and physiotherapy students' opinions about the different formats of visualization and their learning experiences. The 3D images/films stimulated the students' will to understand more and helped them gain insights into biological variations and into different organs' size, spatial extent and relation to each other. The virtual dissections gave a clearer picture than ordinary dissections, and the possibility of turning structures around was instructive. 3D visualizations based on authentic, viable material point to a new dimension of learning material in anatomy, physiology and probably also pathophysiology. It was successful to implement the 3D images in already existing themes in the educational programs. The results show that deeper knowledge is required about students' interpretation of images/films in relation to

  15. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  17. [3D visualization and information interaction in biomedical applications].

    PubMed

    Pu, F; Fan, Y; Jiang, W; Zhang, M; Mak, A F; Chen, J

    2001-06-01

    3D visualization and virtual reality are important trends in the development of modern science and technology, as well as in studies of biomedical engineering. This paper presents a computer program developed for 3D visualization in biomedical applications. The biomedical models are constructed from slice sequences based on polygon cells, and information interaction is realized using the OpenGL selection mode, with particular consideration of the specialties of this field, such as irregular geometry and complex material properties. The software developed has functions for 3D model construction and visualization, real-time modeling transformation, information interaction and so on. It can serve as a useful platform for 3D visualization in biomedical engineering research.

  18. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  19. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  20. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools based on Java 3D(TM) are compatible across different systems and can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and unable to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  1. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  2. 3D visualization of middle ear structures

    NASA Astrophysics Data System (ADS)

    Vogel, Uwe; Schmitt, Thomas

    1998-06-01

    application of a micro-tomographic imaging device, in which an X-ray beam focused down to a few microns passes through the object in a tomographic arrangement and the slices are subsequently reconstructed. Generally, spatial resolution down to 10 micrometers may be obtained with this procedure, but only a few such devices exist; it is not available as standard equipment. The best results concerning spatial resolution should be achieved by applying conventional histologic sectioning techniques, although the target is of course destroyed during the procedure. It is cut into sections (e.g., 10 micrometers thick), every layer is stained, and the image is acquired and stored by a digital still-camera with appropriate resolution (e.g., 2024 x 3036). Three-dimensional reconstruction is done with the computer. The staining allows visual selection of bones and soft tissues, and resolutions down to 10 micrometers are possible without target segmentation. But some practical problems arise. Mainly, the geometric context of the layers is affected by the cutting procedure, especially when cutting bone. Another problem is the adjustment of the (possibly distorted) slices to each other. Artificial markers are necessary, which could also allow automatic adjustment. But the introduction and imaging of the markers is difficult inside the temporal bone specimen, which is interspersed with several cavities. Of course, the internal target structures must not be destroyed by the marker introduction. Furthermore, the embedding compound could disturb the image acquisition, e.g., by optical scattering of paraffin. A related alternative is given by layered ablation/grinding and imaging of the top layer. This preserves geometric consistency, but requires very tricky and time-consuming embedding procedures. Both approaches require considerable expenditures. The possible approaches are evaluated in detail and first results are compared. 
So far none of the above-mentioned procedures has been established as a

  3. The Impact of 3D Stacking and Technology Scaling on the Power and Area of Stereo Matching Processors

    PubMed Central

    Ok, Seung-Ho; Lee, Yong-Hwan; Shim, Jae Hoon; Lim, Sung Kyu; Moon, Byungin

    2017-01-01

    Recently, stereo matching processors have been adopted in real-time embedded systems such as intelligent robots and autonomous vehicles, which require minimal hardware resources and low power consumption. Meanwhile, thanks to the through-silicon via (TSV), three-dimensional (3D) stacking technology has emerged as a practical solution to achieving the desired requirements of a high-performance circuit. In this paper, we present the benefits of 3D stacking and process technology scaling on stereo matching processors. We implemented 2-tier 3D-stacked stereo matching processors with GlobalFoundries 130-nm and Nangate 45-nm process design kits and compare them with their two-dimensional (2D) counterparts to identify comprehensive design benefits. In addition, we examine the findings from various analyses to identify the power benefits of 3D-stacked integrated circuit (IC) and device technology advancements. From experiments, we observe that the proposed 3D-stacked ICs, compared to their 2D IC counterparts, obtain 43% area, 13% power, and 14% wire length reductions. In addition, we present a logic partitioning method suitable for a pipeline-based hardware architecture that minimizes the use of TSVs. PMID:28241437
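
The abstract does not specify which matching algorithm the processors implement in hardware; a common baseline for such designs is window-based block matching. Below is a minimal software sketch of sum-of-absolute-differences (SAD) block matching, with illustrative window and disparity parameters that are my assumptions, not the paper's:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """For each pixel of the left image, pick the disparity minimizing the
    sum of absolute differences (SAD) over a small window."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(np.float64), pad)
    R = np.pad(right.astype(np.float64), pad)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(L[y:y + win, x:x + win]
                              - R[y:y + win, x - d:x - d + win]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Demo: a random texture shifted horizontally by 3 pixels.
rng = np.random.default_rng(0)
left = rng.random((12, 24))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = sad_disparity(left, right, max_disp=6)
```

The nested per-pixel loops are exactly the kind of regular, pipeline-friendly computation that motivates a hardware implementation.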

  4. The Impact of 3D Stacking and Technology Scaling on the Power and Area of Stereo Matching Processors.

    PubMed

    Ok, Seung-Ho; Lee, Yong-Hwan; Shim, Jae Hoon; Lim, Sung Kyu; Moon, Byungin

    2017-02-22

    Recently, stereo matching processors have been adopted in real-time embedded systems such as intelligent robots and autonomous vehicles, which require minimal hardware resources and low power consumption. Meanwhile, thanks to the through-silicon via (TSV), three-dimensional (3D) stacking technology has emerged as a practical solution to achieving the desired requirements of a high-performance circuit. In this paper, we present the benefits of 3D stacking and process technology scaling on stereo matching processors. We implemented 2-tier 3D-stacked stereo matching processors with GlobalFoundries 130-nm and Nangate 45-nm process design kits and compare them with their two-dimensional (2D) counterparts to identify comprehensive design benefits. In addition, we examine the findings from various analyses to identify the power benefits of 3D-stacked integrated circuit (IC) and device technology advancements. From experiments, we observe that the proposed 3D-stacked ICs, compared to their 2D IC counterparts, obtain 43% area, 13% power, and 14% wire length reductions. In addition, we present a logic partitioning method suitable for a pipeline-based hardware architecture that minimizes the use of TSVs.

  5. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  6. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding of the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influences visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  7. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and its understanding is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on still images or 2D video. Only a few studies have investigated eye movement patterns in 3D stereoscopic moving sequences, and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the difference of density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and that fixation durations were overall lower when observers viewed the 3D stereoscopic version.

  8. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope.

    PubMed

    Gong, Yuanzheng; Johnston, Richard S; Melville, C David; Seibel, Eric J

With the rapid progress in the development of optoelectronic components and computational power, 3D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measure tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with its corresponding X-ray 3D data as ground truth, and the quantification was analyzed by the Iterative Closest Point algorithm.
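
The Iterative Closest Point algorithm named in the abstract can be sketched in a few lines. This is a generic point-to-point ICP with brute-force nearest-neighbour search in pure NumPy, not the authors' implementation; the demo cloud and motion are made up for illustration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching (brute force
    here) with the optimal rigid alignment of the matched pairs."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Demo: recover a small rigid motion between two copies of one cloud.
rng = np.random.default_rng(1)
dst = rng.random((100, 3))
ang = 0.05
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.015])
aligned = icp(src, dst)
```

Comparing a measured cloud against an X-ray ground-truth cloud, as in the paper, amounts to running such an alignment and then inspecting the residual distances.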

  9. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    PubMed Central

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-01-01

With the rapid progress in the development of optoelectronic components and computational power, 3D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measure tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with its corresponding X-ray 3D data as ground truth, and the quantification was analyzed by the Iterative Closest Point algorithm. PMID:26640425

  10. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

This paper gives new insights on optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from available 2D data limited in number. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical or experimental examples, we investigate the effects of the input contrasts. We show the robustness and the stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
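
The patented reflective-tomography pipeline itself is not reproducible from the abstract, but one of its building blocks, the Maximum Intensity Projection, is simple to demonstrate: collapse a volume to 2D by keeping the brightest voxel along each ray. The synthetic "concealed object" below is my own toy example:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: collapse a 3D volume to a 2D view by
    keeping the brightest voxel along each ray of the chosen axis."""
    return volume.max(axis=axis)

# Demo: a bright "concealed" sphere inside a dim random background.
rng = np.random.default_rng(0)
vol = 0.1 * rng.random((32, 32, 32))
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
vol[(zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 25] = 1.0
front = mip(vol, axis=0)   # view along z: the sphere stands out
```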

  11. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...
  13. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  14. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  15. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of JAVA programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example is used to create basic geometries. The second example is used to create cylinders with a defined start point and end point. The third example is used to process data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
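
The paper's JAVA/JSP code is not shown, but the core idea of emitting VRML nodes from database records is easy to sketch. The record layout (x, y, z per row) and node styling below are hypothetical:

```python
def vrml_box(x, y, z, size=1.0, color=(0.8, 0.2, 0.2)):
    """Emit a VRML97 Transform node placing a Box at (x, y, z)."""
    r, g, b = color
    return (f"Transform {{\n"
            f"  translation {x} {y} {z}\n"
            f"  children Shape {{\n"
            f"    appearance Appearance {{ material Material {{ diffuseColor {r} {g} {b} }} }}\n"
            f"    geometry Box {{ size {size} {size} {size} }}\n"
            f"  }}\n"
            f"}}\n")

def vrml_scene(rows):
    """rows: iterable of (x, y, z) records, e.g. fetched from a database."""
    return "#VRML V2.0 utf8\n" + "".join(vrml_box(*r) for r in rows)

scene = vrml_scene([(0, 0, 0), (2, 0, 0), (0, 2, 0)])
```

In the paper's setting, a JSP page would perform the same string assembly server-side and stream the result to a VRML browser plugin.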

  16. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  17. 3D localization of a labeled target by means of a stereo vision configuration with subvoxel resolution.

    PubMed

    Arias H, Néstor A; Sandoz, Patrick; Meneses, Jaime E; Suarez, Miguel A; Gharbi, Tijani

    2010-11-08

We present a method for the visual measurement of the 3D position and orientation of a moving target. Three-dimensional sensing is based on stereo vision, while high resolution results from a pseudo-periodic pattern (PPP) fixed onto the target. The PPP is suited for optimizing image processing that is based on phase computations. We describe the experimental setup, image processing and system calibration. Resolutions reported are in the micrometer range for target position (x,y,z) and of 5.3x10(-4) rad for target orientation (θx,θy,θz). These performances have to be appreciated with respect to the vision system used: every image pixel corresponds to an actual distance of 0.3x0.3 mm2 on the target, while the PPP is made of elementary dots of 1 mm with a period of 2 mm. Target tilts as large as π/4 are allowed with respect to the Z axis of the system.
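
The key to the sub-pixel resolution is that the displacement of a periodic pattern is encoded in the phase of its Fourier component, which can be read out far below the pixel pitch. The paper's processing is 2D and more involved; this 1D illustration (period and sample count are my choices) shows the principle:

```python
import numpy as np

N, T = 256, 16.0           # samples and pattern period in pixels (illustrative)
x = np.arange(N)

def pattern(shift):
    """One line of a periodic pattern displaced by `shift` pixels."""
    return np.cos(2 * np.pi * (x - shift) / T)

def measure_shift(signal):
    """Recover the displacement (mod T) from the phase of the Fourier
    component at the pattern frequency -- resolution far below one pixel."""
    k = int(round(N / T))                  # FFT bin of the fundamental
    phase = np.angle(np.fft.fft(signal)[k])
    return (-phase * T / (2 * np.pi)) % T

est = measure_shift(pattern(3.37))   # a deliberately non-integer shift
```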

  18. PC-based stereo visualization tools for aviation virtual-reality projects

    NASA Astrophysics Data System (ADS)

    Stepanov, Alexander A.; Zheltov, Sergey Y.; Kiryakov, Konstantin R.; Invalev, Alexander I.; Boltunov, Anatoly V.

    1997-07-01

Virtual reality is a new way to enhance human interaction with a simulator or another man-machine system. The synthetic environment of virtual reality provides new possibilities for human activity through the use of various sensor channels. Stereo visualization provides human immersion in virtual space and is an important feature of virtual reality. We discuss investigations of technical and programming tools for stereo visualization of highly accurate and detailed 3D models of objects and terrain, with geo-specifically placed objects like buildings, roads, forests and other special landmarks. The hardware includes liquid crystal shutter glasses and an Intel Pentium computer with a standard monitor. Use of original photogrammetric and rendering software under MS Windows provides very realistic 'walk-through' and 'fly-over' simulations. These tools are cheaper than ones oriented to powerful workstations. Examples of animations and virtual spaces with 3D site models of real scenes, designed for airplane pilot training, are demonstrated.

  19. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.
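
The geometric core exploited here, as in X-ray angiography, is that two non-parallel projections determine a 3D curve: each projection fixes one transverse coordinate per depth sample. The compressed-sensing reconstruction and tip tracking are not reproduced; the toy curve below is my own example:

```python
import numpy as np

# Depth samples along the device, shared by both projection images.
z = np.linspace(0.0, 10.0, 50)
x_of_z = np.sin(z)        # transverse coordinate seen in the x-z projection
y_of_z = 0.5 * z          # transverse coordinate seen in the y-z projection

def fuse_projections(x_of_z, y_of_z, z):
    """Each 2D projection fixes one transverse coordinate per depth sample,
    so two (here orthogonal) views suffice to place the curve in 3D."""
    return np.column_stack([x_of_z, y_of_z, z])

curve3d = fuse_projections(x_of_z, y_of_z, z)
```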

  20. An Automatic 3d Reconstruction Method Based on Multi-View Stereo Vision for the Mogao Grottoes

    NASA Astrophysics Data System (ADS)

    Xiong, J.; Zhong, S.; Zheng, L.

    2015-05-01

This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching due to repeatable or similar features of binocular images. In order to greatly reduce the probability of mismatching and improve the measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths to add extra matching constraints and offer multiple measurements. A matching error based on the epipolar constraint is introduced to remove the mismatched points. Finally, an accurate point cloud can be generated by multi-image matching and sub-pixel interpolation. Delaunay triangulation and texture mapping are performed to obtain the 3D model of a scene. The method has been tested on 3D reconstruction of several scenes of the Mogao Grottoes, and good results verify its effectiveness.
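
The "matching error based on epipolar constraint" used to reject mismatches can be illustrated with the standard symmetric point-to-epipolar-line distance. The fundamental matrix in the demo corresponds to an idealized rectified horizontal baseline, an assumption for illustration only:

```python
import numpy as np

def epipolar_error(F, p1, p2):
    """Symmetric distance of two homogeneous image points to each other's
    epipolar line; ~0 for a correct match (p2^T F p1 = 0)."""
    l2 = F @ p1                       # epipolar line of p1 in image 2
    l1 = F.T @ p2                     # epipolar line of p2 in image 1
    d2 = abs(p2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(p1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

# Fundamental matrix of a rectified, purely horizontal stereo baseline:
# correct matches must lie on the same image row.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
good = epipolar_error(F, np.array([10.0, 5.0, 1.0]), np.array([7.0, 5.0, 1.0]))
bad = epipolar_error(F, np.array([10.0, 5.0, 1.0]), np.array([7.0, 8.0, 1.0]))
```

Thresholding this error across the six binocular pairs is one plausible way such a system discards mismatched points.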

  1. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  2. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike other earlier propositions to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
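
Because X3D is plain XML, a minimal document can be emitted and validated with standard tooling. The scene below (one red sphere, Interchange profile) is a generic hand-written example, not one of the article's data sets:

```python
import xml.etree.ElementTree as ET

# A minimal X3D document: one red sphere in an Interchange-profile scene.
x3d = """<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Interchange" version="3.3">
  <Scene>
    <Shape>
      <Appearance><Material diffuseColor="1 0 0"/></Appearance>
      <Sphere radius="1.0"/>
    </Shape>
  </Scene>
</X3D>
"""

# Encode before parsing: ElementTree rejects str input carrying an XML
# encoding declaration.
root = ET.fromstring(x3d.encode("utf-8"))
```

A script exporting simulation or catalogue data to 3D diagrams would generate such element trees programmatically and write them to disk for embedding in interactive HTML.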

  3. Computational model of stereoscopic 3D visual saliency.

    PubMed

    Wang, Junle; Da Silva, Matthieu Perreira; Le Callet, Patrick; Ricordel, Vincent

    2013-06-01

    Many computational models of visual attention performing well in predicting salient areas of 2D images have been proposed in the literature. The emerging applications of stereoscopic 3D display bring an additional depth of information affecting the human viewing behavior, and require extensions of the efforts made in 2D visual modeling. In this paper, we propose a new computational model of visual attention for stereoscopic 3D still images. Apart from detecting salient areas based on 2D visual features, the proposed model takes depth as an additional visual dimension. The measure of depth saliency is derived from the eye movement data obtained from an eye-tracking experiment using synthetic stimuli. Two different ways of integrating depth information in the modeling of 3D visual attention are then proposed and examined. For the performance evaluation of 3D visual attention models, we have created an eye-tracking database, which contains stereoscopic images of natural content and is publicly available, along with this paper. The proposed model gives a good performance, compared to that of state-of-the-art 2D models on 2D images. The results also suggest that a better performance is obtained when depth information is taken into account through the creation of a depth saliency map, rather than when it is integrated by a weighting method.
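
The abstract contrasts two integration strategies without giving their formulas; the schematic below is my own guess at their shape (multiplicative fusion, arbitrary normalization), intended only to show the structural difference between depth-weighting and a separate depth saliency map:

```python
import numpy as np

def norm01(m):
    """Scale a map to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse_by_weighting(sal2d, depth):
    """Strategy 1: weight the 2D saliency map by nearness (closer = more salient)."""
    return norm01(sal2d * norm01(1.0 - norm01(depth)))

def fuse_by_depth_map(sal2d, depth_saliency):
    """Strategy 2: build a separate depth saliency map and combine the two
    maps (the scheme the study found to perform better)."""
    return norm01(norm01(sal2d) * norm01(depth_saliency))

rng = np.random.default_rng(0)
s2d, depth = rng.random((48, 64)), rng.random((48, 64))
s3d = fuse_by_depth_map(s2d, depth)
```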

  4. Visual fatigue while watching 3D stimuli from different positions.

    PubMed

    Aznar-Casanova, J Antonio; Romeo, August; Gómez, Aurora Torrents; Enrile, Pedro Martin

When observers focus their stereoscopic visual system for a long time (e.g., watching a 3D movie) they may experience visual discomfort or asthenopia. We tested two types of models for predicting visual fatigue in a task in which subjects were instructed to discriminate between 3D characters. One model was based on viewing distance (focal distance, vergence distance) and the other on visual direction (oculomotor imbalance). A 3D test was designed to assess binocular visual fatigue while looking at 3D stimuli located in different visual directions and viewed from two distances from the screen. The observers were tested under three conditions: (a) normal vision; (b) wearing a lens (-2 diop.); (c) wearing a base-out prism (2Δ) over each eye. Sensitivity and specificity were calculated as Signal Detection Theory (SDT) parameters. An ANOVA and SDT analyses revealed that impaired visual performance was directly related to short distance and larger deviation in visual direction, particularly when the stimuli were located nearer and at more than 24° from the centre of the screen in dextroversion and beyond. These results support a mixed model, combining a model based on the visual angle (related to viewing distance) and another based on oculomotor imbalance (related to visual direction). This mixed model could help to predict the distribution of seats in a cinema, ranging from those that produce greater visual comfort to those that produce more visual discomfort. It could also be a first step towards pre-diagnosis of binocular vision disorders. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  5. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients depend on the Courant number, leading to significant extra time cost for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that, to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
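
The TSSFD coefficients are not given in the abstract, but the conventional scheme it improves on is easy to state: the classical second-order explicit update for the scalar wave equation, shown here in 1D at the abstract's example Courant number of 0.4 (grid size and initial pulse are my choices):

```python
import numpy as np

# Classical 2nd-order explicit scheme for the 1D scalar wave equation
# u_tt = c^2 u_xx -- the baseline that time-space domain schemes refine.
nx, nt = 200, 400
c, dx = 1.0, 1.0
courant = 0.4                      # Courant number from the abstract's example
dt = courant * dx / c              # stable since courant < 1

xgrid = np.arange(nx)
u = np.exp(-0.1 * (xgrid - nx // 2) ** 2)   # smooth initial pulse
u_prev = u.copy()                           # zero initial velocity

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]      # discrete u_xx * dx^2
    u_next = 2.0 * u - u_prev + courant ** 2 * lap  # leapfrog time update
    u_prev, u = u, u_next                            # advance one step
```

Numerical dispersion in this baseline is what schemes like the TSSFD minimize: the discrete phase velocity drifts from c as the wavelength approaches the grid spacing.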

  6. The 3-D Moons: The Voyager stereo atlas of the outer solar system

    NASA Technical Reports Server (NTRS)

    Schenk, P.; Moore, J. M.

    1993-01-01

Comprehension and analysis of geologic features on any planet are enhanced manyfold by a clear perception of the relationship between albedo and topography. On many of the icy satellites, significant albedo contrasts due to mixtures of dark rocky and bright icy materials can be associated with topographic features. Subtle topographic features can be masked by albedo variation, and under high solar illumination albedo and topography can be difficult to separate. To this end we are compiling an atlas of stereo image pairs of the outer solar system based on Voyager imaging, for the investigation of various geologic problems and for general use. For the icy satellites, general perceptions of topography are usually gleaned from shape-from-shading information in the images processed by the human brain (i.e., visual inspection). With few exceptions, actual topography has been measured on a spot-by-spot basis using shadow heights or photoclinometry, or along limb profiles (where geographic context may be unavailable). Shadow heights are limited to regions within approximately 10 deg of the terminator and images with resolution better than approximately 1 km/pixel. Photoclinometric scans can be used more widely but are subject to a variety of errors, primarily uncertain assumptions of uniform scene albedo or poorly understood photometry. Stereoscopic analysis, where available, has the potential for greatly expanding topographic perception.
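
The shadow-height technique mentioned above is simple trigonometry: a feature of height h under solar elevation e casts a shadow of length L = h / tan(e), so h = L * tan(e). A one-function sketch (the example numbers are illustrative, not Voyager measurements):

```python
import math

def shadow_height(shadow_length_km, sun_elevation_deg):
    """Relief height from the length of the shadow it casts: h = L * tan(e).
    Only usable near the terminator, where the low sun makes shadows long
    enough to resolve."""
    return shadow_length_km * math.tan(math.radians(sun_elevation_deg))

h = shadow_height(2.0, 45.0)   # a 2 km shadow under 45 deg illumination
```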

  7. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
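
The spiking network's disparity computation rests on coincidence detection: a left and a right event caused by the same scene point arrive (nearly) simultaneously. A toy, non-spiking stand-in for that mechanism, with made-up event streams, can be sketched as:

```python
import numpy as np

def coincidence_disparity(left_events, right_events, dt_max=0.4, d_max=5):
    """Toy event-based stereo: every (near-)simultaneous left/right event
    pair votes for its disparity, mimicking the network's coincidence
    detectors; the most-voted disparity wins."""
    votes = np.zeros(d_max + 1)
    for tl, xl in left_events:
        for tr, xr in right_events:
            d = xl - xr
            if abs(tl - tr) <= dt_max and 0 <= d <= d_max:
                votes[d] += 1
    return int(votes.argmax())

# Demo: an edge sweeping across both sensors with a true disparity of 3.
left = [(float(t), 5 + t) for t in range(20)]
right = [(t, x - 3) for t, x in left]
right.append((100.0, 7))                  # an unmatched noise event
d_hat = coincidence_disparity(left, right)
```

In the actual model, the voting is carried out by spiking neurons with temporal dynamics rather than an explicit double loop, which is what makes a massively parallel neuromorphic implementation natural.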

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  10. 3D web visualization of huge CityGML models

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Devigili, F.; Soave, M.; Di Staso, U.; De Amicis, R.

    2015-08-01

    Nowadays, rapid technological development in the acquisition of geo-spatial information, combined with the capability to process these data in a relatively short period of time, allows the generation of detailed 3D textured city models that will become an essential part of the modern city information infrastructure (Spatial Data Infrastructure) and can be used to integrate various data from different sources for publicly accessible visualisation and many other applications. One of the main bottlenecks, which at the moment limits the use of these datasets to a few experts, is the lack of efficient web-based visualization systems and of interoperable frameworks that standardise access to the city models. The work presented in this paper tries to satisfy these two requirements by developing a 3D web-based visualization system based on OGC standards and effective visualization concepts. The architectural framework, based on Service Oriented Architecture (SOA) concepts, provides the 3D city data to a web client designed to support the viewing process in a very effective way. The first part of the work is the design of a framework compliant with the 3D Portrayal Service drafted by the Open Geospatial Consortium (OGC) 3D standardization working group. The second is the development of an effective web client able to render the 3D city models efficiently.

  11. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets evidence the need for new, versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and do a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.

  12. 3D visualization of the human cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Zrimec, Tatjana; Mander, Tom; Lambert, Timothy; Parker, Geoffrey

    1995-04-01

    Computer assisted 3D visualization of the human cerebro-vascular system can help to locate blood vessels during diagnosis and to approach them during treatment. Our aim is to reconstruct the human cerebro-vascular system from the partial information collected from a variety of medical imaging instruments and to generate a 3D graphical representation. This paper describes a tool developed for 3D visualization of cerebro-vascular structures. It also describes a symbolic approach to modeling vascular anatomy. The tool, called Ispline, is used to display the graphical information stored in a symbolic model of the vasculature. The vascular model was developed to assist image processing and image fusion. The model consists of a structural symbolic representation using frames and a geometrical representation of vessel shapes and vessel topology. Ispline has proved to be useful for visualizing both the synthetically constructed vessels of the symbolic model and the vessels extracted from a patient's MR angiograms.

  13. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  14. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  15. Dynamic 3-D visualization of vocal tract shaping during speech.

    PubMed

    Zhu, Yinghua; Kim, Yoon-Chul; Proctor, Michael I; Narayanan, Shrikanth S; Nayak, Krishna S

    2013-05-01

    Noninvasive imaging is widely used in speech research as a means to investigate the shaping and dynamics of the vocal tract during speech production. 3-D dynamic MRI would be a major advance, as it would provide 3-D dynamic visualization of the entire vocal tract. We present a novel method for the creation of 3-D dynamic movies of vocal tract shaping based on the acquisition of 2-D dynamic data from parallel slices and temporal alignment of the image sequences using audio information. Multiple sagittal 2-D real-time movies with synchronized audio recordings are acquired for English vowel-consonant-vowel stimuli /ala/, /a.ιa/, /asa/, and /a∫a/. Audio data are aligned using mel-frequency cepstral coefficients (MFCC) extracted from windowed intervals of the speech signal. Sagittal image sequences acquired from all slices are then aligned using dynamic time warping (DTW). The aligned image sequences enable dynamic 3-D visualization by creating synthesized movies of the moving airway in the coronal planes, visualizing desired tissue surfaces and tube-shaped vocal tract airway after manual segmentation of targeted articulators and smoothing. The resulting volumes allow for dynamic 3-D visualization of salient aspects of lingual articulation, including the formation of tongue grooves and sublingual cavities, with a temporal resolution of 78 ms.
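The temporal-alignment step in this record pairs per-frame MFCC sequences with dynamic time warping (DTW). A minimal, generic DTW sketch (not the authors' code; the toy 1D feature vectors below stand in for MFCC frames):

```python
import numpy as np

def dtw_path(a, b):
    """Align two feature sequences (e.g. per-frame MFCC vectors) with
    dynamic time warping; returns the total cost and the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        k = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if k == 0 else (i - 1, j) if k == 1 else (i, j - 1)
    return D[n, m], path[::-1]

# Two "utterances": the second is the first with a stretched middle frame.
x = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([[0.0], [1.0], [2.0], [2.0], [3.0], [4.0]])
cost, path = dtw_path(x, y)  # cost -> 0.0: the stretch is absorbed
```

The warping path gives, for every frame of one slice's image sequence, the temporally corresponding frame of another slice, which is exactly what is needed to assemble synchronized 2D movies into a 3D volume over time.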

  16. Structural stereo matching of Laplacian-of-Gaussian contour segments for 3D perception

    NASA Technical Reports Server (NTRS)

    Boyer, K. L.; Sotak, G. E., Jr.

    1989-01-01

    The stereo correspondence problem is solved using Laplacian-of-Gaussian zero-crossing contours as a source of primitives for structural stereopsis, as opposed to traditional point-based algorithms. Matching of up to 74 percent of candidate zero-crossing points is achieved on 240 x 246 images at small scales and large ranges of disparity, without coarse-to-fine tracking and without precise knowledge of the epipolar geometry. This approach should prove particularly useful for automatically recovering the epipolar geometry of stereo pairs for which it is unavailable a priori. Such situations occur in the extraction of terrain models from stereo aerial photographs.
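The matching primitives here are zero crossings of a Laplacian-of-Gaussian (LoG) response. A small numpy sketch of extracting them from a synthetic step edge (kernel size, sigma, and the tolerance are illustrative choices, not the paper's parameters):

```python
import numpy as np

def log_kernel(sigma, size=9):
    """Sampled Laplacian-of-Gaussian kernel (unnormalized)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    s2 = sigma * sigma
    g = np.exp(-(xx**2 + yy**2) / (2 * s2))
    k = (xx**2 + yy**2 - 2 * s2) / (s2 * s2) * g
    return k - k.mean()  # zero-sum, so uniform regions respond with zero

def conv2(img, k):
    """Naive 'same' 2D convolution with edge padding."""
    n = k.shape[0] // 2
    p = np.pad(img, n, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def zero_crossings(r, tol=1e-6):
    """Mark pixels where the LoG response changes sign horizontally."""
    s = np.where(np.abs(r) < tol, 0.0, np.sign(r))
    return (s[:, :-1] * s[:, 1:]) < 0

# A vertical step edge at column 16 yields zero crossings at the edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
zc = zero_crossings(conv2(img, log_kernel(1.5)))
cols = np.where(zc.any(axis=0))[0]
```

Linking such crossings into contour segments, rather than matching isolated points, is what the record means by "structural" stereopsis.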

  17. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach for the 3D geometric detection of relatively large-scale objects. In this paper, we present a specialized image capture system, which employs a CMOS sensor with an embedded LVDS interface and a CAN bus to ensure synchronous triggering and exposure. We performed an error analysis for structured-light vision measurement under large-scale conditions, based on which we built and tested the system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.

  18. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    SciTech Connect

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-09-15

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.
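Interobserver agreement in this study is quantified with Kendall's W, which for m raters ranking n items is W = 12S / (m^2(n^3 - n)), where S is the sum of squared deviations of the rank sums from their mean. A sketch of that formula with invented scores (no tie correction):

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (m raters x n items)
    score matrix; ranks are computed per rater, ties not corrected."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1  # 1..n per rater
    R = ranks.sum(axis=0)                 # rank sum per item
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (m * m * (n**3 - n))

# Two reviewers in perfect agreement over four visualizations -> W = 1.0.
w = kendalls_w([[4, 1, 3, 2],
                [4, 1, 3, 2]])
```

W ranges from 0 (no agreement) to 1 (perfect agreement), which is how the two reviewers' scores of the visualization modes would be compared.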

  19. Enhanced visualization of angiograms using 3D models

    NASA Astrophysics Data System (ADS)

    Marovic, Branko S.; Duckwiler, Gary R.; Villablanca, Pablo; Valentino, Daniel J.

    1999-05-01

    The 3D visualization of intracranial vasculature can facilitate the planning of endovascular therapy and the evaluation of interventional results. To create 3D visualizations, volumetric datasets from x-ray computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are commonly rendered using maximum intensity projection (MIP), volume rendering, or surface rendering techniques. However, small aneurysms and mild stenoses are very difficult to detect using these methods. Furthermore, the instruments used during endovascular embolization or surgical treatment produce artifacts that typically make post-intervention CTA inapplicable, and the presence of magnetic material prohibits the use of MRA. Therefore, standard digital angiography is typically used. In order to address these problems, we developed a visualization and modeling system that displays 2D and 3D angiographic images using a simple Web-based interface. Polygonal models of vasculature were generated from CT and MR data using 3D segmentation of bones and vessels and polygonal surface extraction and simplification. A web-based 3D environment was developed for interactive examination of reconstructed surface models, creation of oblique cross-sections and maximum intensity projections, and distance measurements and annotations. This environment uses a multi-tier client/server approach employing VRML and Java. The 3D surface model and angiographic images can be aligned and displayed simultaneously to permit better perception of complex vasculature and to determine optimal viewing positions and angles before starting an angiographic session. Polygonal surface reconstruction allows interactive display of complex spatial structures on inexpensive platforms such as personal computers as well as graphic workstations. The aneurysm assessment procedure demonstrated the utility of web-based technology for clinical visualization. The resulting system facilitated the treatment of serious vascular

  20. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new frame of reference for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than dealing with video retrieval alone. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we employ BOVW and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. To handle the prodigious amount of data and ensure efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity. PMID:28003793

  1. 3D reconstruction, visualization, and measurement of MRI images

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

    1999-03-01

    This paper primarily focuses on manipulating 2D medical image data, such as Magnetic Resonance images, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed and neural network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, use all the 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning, time and cost reduction.

  2. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new frame of reference for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than dealing with video retrieval alone. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we employ BOVW and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. To handle the prodigious amount of data and ensure efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity.

  3. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.

    PubMed

    Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young

    2016-04-01

    In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequently used fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide the accurate three-dimensional (3D) position information of surgical instruments and the target position. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously, which allows easy augmentation of the camera image and the X-ray image. Further, the 3D measurement of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in a 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in fluoroscopic images.
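Once the optical and X-ray stereo pairs are calibrated into a common coordinate space, a 3D point of interest is recovered from its two projections by triangulation. A generic linear (DLT) triangulation sketch with invented camera parameters, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices and its pixel observations (u, v) in each view."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space direction in homogeneous coords
    return X[:3] / X[3]

# Hypothetical stereo rig: 500 px focal length, 10 cm baseline along x.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X = np.array([0.02, -0.01, 0.5])
u1 = P1 @ np.append(X, 1.0); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X, 1.0); u2 = u2[:2] / u2[2]
Xr = triangulate(P1, P2, u1, u2)  # recovers [0.02, -0.01, 0.5]
```

Because the optical and X-ray subsystems are calibrated into one coordinate space, the same triangulation applies to either pair, which is how an instrument tip seen optically can be placed in the fluoroscopic frame.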

  4. Measuring the Visual Salience of 3D Printed Objects.

    PubMed

    Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc

    2016-01-01

    To investigate human viewing behavior on physical realizations of 3D objects, the authors use an eye tracker with scene camera and fiducial markers on 3D objects to gather fixations on the presented stimuli. They use this data to validate assumptions regarding visual saliency that so far have experimentally only been analyzed for flat stimuli. They provide a way to compare fixation sequences from different subjects and developed a model for generating test sequences of fixations unrelated to the stimuli. Their results suggest that human observers agree in their fixations for the same object under similar viewing conditions. They also developed a simple procedure to validate computational models for visual saliency of 3D objects and found that popular models of mesh saliency based on center surround patterns fail to predict fixations.

  5. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display is a new display technology capable of displaying computer generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from omni-directions simultaneously without wearing any glasses. The image is real and possesses all major elements in both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D targets monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed throughout the display volume; these form a single volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of the 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  6. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users have become familiar with 3D environments. On the other hand, nowadays computers with 3D acceleration are common, broadband access is widespread, and the public information that can be used in GIS clients able to use data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
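The server-side idea of delivering pre-processed tiles per level of detail can be approximated with voxel-grid subsampling: coarser levels keep one representative point per larger cell. A hypothetical sketch (the cell sizes and the level convention are assumptions, not Glob3's actual tiling scheme):

```python
import numpy as np

def lod_tile(points, level, base_cell=0.1):
    """Voxel-grid subsample of a point cloud for a given level of detail.

    Level 0 is coarsest; each finer level halves the voxel size, so a
    server can stream progressively denser tiles to the 3D client.
    """
    cell = base_cell * (2.0 ** -level)
    keys = np.floor(points / cell).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]  # one representative point per voxel

# A synthetic million-point cloud would work the same; 10k keeps it quick.
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, (10000, 3))
coarse = lod_tile(cloud, level=0)  # few representative points
fine = lod_tile(cloud, level=3)    # many more points, still <= original
```

Pre-computing these subsets per tile is what lets a viewer request only the density it can render, regardless of the full cloud's size.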

  7. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
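The core LIC operation is averaging a noise texture along short streamlines of the vector field, so that the output texture becomes correlated along the flow direction. A deliberately naive 2D sketch (fixed-length streamlines, nearest-neighbour sampling, wrap-around boundaries; real implementations use proper integration and filtering):

```python
import numpy as np

def lic(vx, vy, noise, L=10):
    """Minimal line integral convolution: for each output pixel, average
    the noise texture along a short streamline traced both ways."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for sgn in (1.0, -1.0):     # trace forward, then backward
                px, py = float(x), float(y)
                for _ in range(L):
                    i, j = int(round(py)) % h, int(round(px)) % w
                    acc += noise[i, j]
                    cnt += 1
                    n = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                    px += sgn * vx[i, j] / n   # unit step along the field
                    py += sgn * vy[i, j] / n
            out[y, x] = acc / cnt
    return out

# Uniform horizontal flow: the noise gets smeared along rows.
rng = np.random.default_rng(0)
tex = rng.uniform(0.0, 1.0, (24, 24))
img = lic(np.ones((24, 24)), np.zeros((24, 24)), tex)
```

The volume LIC discussed in the article is the same convolution carried out along 3D streamlines through a noise volume, with the rendering and perceptual issues the authors describe coming from displaying that dense 3D result.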

  8. 3D scientific visualization of reservoir simulation post-processing

    SciTech Connect

    Sousa, M.C.; Miranda-Filho, D.N.

    1994-12-31

    This paper describes a 3D visualization software designed at PETROBRAS and TecGraf/PUC-RJ in Brazil for the analysis of reservoir engineering post-processing data. It offers an advanced functional environment on graphical workstations with intuitive and ergonomic interface. Applications to real reservoir models show the enriching features of the software.

  9. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.

  10. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  11. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  12. An AR system with intuitive user interface for manipulation and visualization of 3D medical data.

    PubMed

    Vogt, Sebastian; Khamene, Ali; Niemann, Heinrich; Sauer, Frank

    2004-01-01

    We report on a stereoscopic video-see-through augmented reality system which we developed for medical applications. Our system allows interactive in-situ visualization of 3D medical imaging data. For high-quality rendering of the augmented scene we utilize the capabilities of the latest graphics card generations. Fast high-precision MPR generation ("multiplanar reconstruction") and volume rendering are realized with OpenGL 3D textures. We provide a tracked hand-held tool to interact with the medical imaging data in its actual location. This tool is represented as a virtual tool in the space of the medical data. The user can assign different functionality to it: select arbitrary MPR cross-sections, guide a local volume rendered cube through the medical data, change the transfer function, etc. Tracking works in conjunction with retroreflective markers, which either frame the workspace for head tracking or are attached to instruments for tool tracking. We use a single head-mounted tracking camera, which is rigidly fixed to the stereo pair of cameras that provide the live video view of the real scene. The user's spatial perception is based on stereo depth cues as well as on the kinetic depth cues that he receives from viewpoint variations and the interactive data visualization. The AR system has a compelling real-time performance of 30 stereo-frames/second and exhibits no time lag between the video images and the augmenting graphics. Thus, the physician can interactively explore the medical imaging information in-situ.
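
    Stripped of the GPU specifics, an MPR cross-section is just the volume resampled along an arbitrary plane. A minimal CPU sketch of that resampling follows; the synthetic test volume, plane parameters, and helper name are illustrative assumptions (the paper's implementation uses OpenGL 3D textures instead):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic 64^3 test volume whose value equals the z index, so any plane
# of constant z should sample to a constant image (illustrative data only).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = z.astype(float)

def sample_mpr(vol, origin, u, v, size=32):
    """Sample a planar MPR: start at `origin` and step along the in-plane
    direction vectors u and v. Generic plane resampling, not the paper's GPU path."""
    i, j = np.mgrid[0:size, 0:size]
    coords = (np.asarray(origin, float)[:, None, None]
              + np.asarray(u, float)[:, None, None] * i
              + np.asarray(v, float)[:, None, None] * j)
    return map_coordinates(vol, coords, order=1)  # trilinear interpolation

# A z = 10 plane spanned by the y and x axes:
mpr = sample_mpr(volume, origin=(10.0, 0.0, 0.0), u=(0.0, 1.0, 0.0), v=(0.0, 0.0, 1.0))
```

    Oblique planes work the same way: any pair of non-parallel direction vectors defines the cross-section.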

  13. 3-D Visualization on Workspace of Parallel Manipulators

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshito; Yokomichi, Isao; Ishii, Junko; Makino, Toshiaki

    In parallel mechanisms, the shape and volume of the workspace change with the attitude of the platform. This paper presents a method to search for the workspace of parallel mechanisms with 6-DOF and a 3D visualization of the workspace. The workspace search determines the movable range of the central point of the platform when it moves with a given orientation. To search the workspace, a geometric analysis based on inverse kinematics is used. 2D plots of the calculations are compared with measurements from position sensors. The test results show good agreement with the simulation results. The workspace variations are demonstrated in terms of 3D and 2D plots for prototype mechanisms. The workspace plots are created with OpenGL and Visual C++ by implementation of the algorithm. An application module is developed, which displays the workspace of the mechanism in 3D images. The effectiveness and practicability of 3D visualization of the workspace are successfully demonstrated with 6-DOF parallel mechanisms.

  14. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  15. Symbolic processing methods for 3D visual processing

    NASA Astrophysics Data System (ADS)

    Tedder, Maurice; Hall, Ernest L.

    2001-10-01

    The purpose of this paper is to describe a theory that defines an open method for solving 3D visual data processing and artificial intelligence problems that is independent of hardware or software implementation. The goal of the theory is to generalize and abstract the process of 3D visual processing so that the method can be applied to a wide variety of 3D visual processing problems. Once the theory is described, a heuristic derivation is given. Symbolic processing methods can be generalized into an abstract model composed of eight basic components. The symbolic processing model components are: input data; input data interface; symbolic data library; symbolic data environment space; relationship matrix; symbolic logic driver; output data interface; and output data. An obstacle detection and avoidance experiment was constructed to demonstrate the symbolic processing method. The results of the robot obstacle avoidance experiment demonstrated that the mobile robot could successfully navigate the obstacle course using symbolic processing methods for the control software. The significance of the symbolic processing approach is that the method arrived at a solution by using a more formal quantifiable process. Some of the practical applications for this theory are: 3D object recognition, obstacle avoidance, and intelligent robot control.

  16. A new visualization method for 3D head MRA data

    NASA Astrophysics Data System (ADS)

    Ohashi, Satoshi; Hatanaka, Masahiko

    2008-03-01

    In this paper, we propose a new visualization method for head MRA data which supports the user in easily determining the positioning of MPR and/or MIP images based on the blood vessel network structure (the anatomic location of blood vessels). This visualization method has the following features: (a) the blood vessel (cerebral artery) network structure in 3D head MRA data is portrayed as a 3D line structure; (b) the MPR or MIP images are combined with the blood vessel network structure and displayed in a 3D visualization space; (c) the positioning of the MPR or MIP is decided based on the anatomic location of blood vessels; (d) the image processing and drawing operate in real time without a special hardware accelerator. As a result, we believe that our method is well suited to positioning MPR or MIP images relative to the blood vessel network structure. Moreover, with this method the user can obtain the 3D information (position, angle, direction) of both these images and the blood vessel network structure.

  17. A Stereo Vision Visualization Method in Welding

    NASA Astrophysics Data System (ADS)

    Zhao, Chuangxin; Richardson, Ian M.; Kleijn, Chris; Kenjeres, Sasa; Saldi, Zaki

    2008-09-01

    The oscillation of the weld pool surface, vaporization, and spatter make measurements in welding difficult, and two-dimensional results cannot convey enough information about the process. However, there are few direct three-dimensional methods for understanding the fluid flow during welding. In this paper, we describe a three-dimensional reconstruction method to measure velocity in welding based on a single high-speed camera. A stereo adapter was added in front of the high-speed camera lens to obtain two images in the same frame from different viewpoints at the same time; according to machine vision theory, three-dimensional parameters can then be reconstructed from these two images.
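
    Once the two half-frames from the adapter are rectified, depth recovery reduces to standard two-view triangulation. A minimal sketch, with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

def triangulate(f, baseline, xl, yl, xr):
    """Back-project a matched point from a rectified stereo pair.
    f: focal length in pixels; baseline: camera separation;
    (xl, yl): pixel in the left image; xr: matching column in the right image."""
    disparity = xl - xr               # horizontal shift between the two views
    z = f * baseline / disparity      # depth along the optical axis
    return np.array([xl * z / f, yl * z / f, z])

# f = 800 px, baseline = 10 mm, disparity = 40 px -> depth = 200 mm:
p = triangulate(800.0, 10.0, 240.0, 120.0, 200.0)
```

    Tracking the same surface point over consecutive frames and differencing the triangulated positions yields the velocity estimate the abstract describes.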

  18. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
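
    The MST clustering idea can be sketched compactly: build the tree over pairwise distances, cut edges longer than a threshold, and read clusters off the remaining connected components. A toy example with synthetic points standing in for a galaxy catalog (the threshold and data are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

# Two well-separated synthetic "galaxy" groups in 3D.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.0, 0.1, (20, 3)),
                    rng.normal(5.0, 0.1, (20, 3))])

dist = squareform(pdist(points))          # dense pairwise Euclidean distances
mst = minimum_spanning_tree(dist)         # sparse matrix holding the MST edges

# Cut MST edges longer than a chosen threshold to separate the clusters.
mst_cut = mst.copy()
mst_cut.data[mst_cut.data > 1.0] = 0.0
mst_cut.eliminate_zeros()
n_clusters, labels = connected_components(mst_cut, directed=False)
```

    The resulting labels could then be fed to a renderer such as Blender to color the catalog by cluster, as the abstract describes.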

  19. A methodology for visually lossless JPEG2000 compression of monochrome stereo images.

    PubMed

    Feng, Hsin-Chang; Marcellin, Michael W; Bilgin, Ali

    2015-02-01

    A methodology for visually lossless compression of monochrome stereoscopic 3D images is proposed. Visibility thresholds are measured for quantization distortion in JPEG2000. These thresholds are found to be functions of not only spatial frequency, but also of wavelet coefficient variance, as well as the gray level in both the left and right images. To avoid a daunting number of measurements during subjective experiments, a model for visibility thresholds is developed. The left image and right image of a stereo pair are then compressed jointly using the visibility thresholds obtained from the proposed model to ensure that quantization errors in each image are imperceptible to both eyes. This methodology is then demonstrated via a particular 3D stereoscopic display system with an associated viewing condition. The resulting images are visually lossless when displayed individually as 2D images, and also when displayed in stereoscopic 3D mode.
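
    The core idea, choosing a quantizer step small enough that every quantization error stays below a measured visibility threshold, can be sketched in isolation. The threshold value and the stand-in coefficients below are illustrative, and this omits JPEG2000's actual codestream machinery:

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization with midpoint reconstruction;
    the maximum absolute error is step / 2."""
    return (np.floor(coeffs / step) + 0.5) * step

vt = 0.8                                  # assumed visibility threshold for a subband
rng = np.random.default_rng(2)
band = rng.normal(0.0, 5.0, 1000)         # stand-in wavelet subband coefficients
q = quantize(band, 2.0 * vt)              # step = 2 * vt keeps every error <= vt
max_err = np.max(np.abs(band - q))
```

    In the paper the threshold itself varies with spatial frequency, coefficient variance, and gray level, so each subband would get its own step.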

  20. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain display scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods for improving visual comfort, which introduce depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.

  1. 3D visualization of gene clusters and networks

    NASA Astrophysics Data System (ADS)

    Zhang, Leishi; Sheng, Weiguo; Liu, Xiaohui

    2005-03-01

    In this paper, we try to provide a global view of the DNA microarray gene expression data analysis and modeling process by combining novel and effective visualization techniques with data mining algorithms. An integrated framework has been proposed to model and visualize short, high-dimensional gene expression data. The framework reduces the dimensionality of variables before applying an appropriate temporal modeling method. A prototype has been built using Java3D to visualize the framework. The prototype takes gene expression data as input, clusters the genes, displays the clustering results using a novel graph layout algorithm, models individual gene clusters using Dynamic Bayesian Networks, and then visualizes the modeling results using simple but effective visualization techniques.

  2. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-12-19

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

  3. CMap3D: a 3D visualization tool for comparative genetic maps.

    PubMed

    Duran, Chris; Boskovic, Zoran; Imelfort, Michael; Batley, Jacqueline; Hamilton, Nicholas A; Edwards, David

    2010-01-15

    Genetic linkage mapping enables the study of genome organization and the association of heritable traits with regions of sequenced genomes. Comparative genetic mapping is particularly powerful as it allows translation of information between related genomes and gives an insight into genome evolution. A common tool for the storage, comparison and visualization of genetic maps is CMap. However, current visualization in CMap is limited to the comparison of adjacent aligned maps. To overcome this limitation, we have developed CMap3D, a tool to compare multiple genetic maps in three-dimensional space. CMap3D is based on a client/server model ensuring operability with current CMap data repositories. This tool can be applied to any species where genetic map information is available and enables rapid, direct comparison between multiple aligned maps. The software is a stand-alone application written in Processing and Java. Binaries are available for Windows, OSX and Linux, and require Sun Microsystems Java Runtime Environment 1.6 or later. The software is freely available for non-commercial use from http://flora.acpfg.com.au/.

  4. Toward mobile 3D visualization for structural biologists.

    PubMed

    Tanramluk, Duangrudee; Akavipat, Ruj; Charoensawan, Varodom

    2013-12-01

    Technological advances in crystallography have led to an ever-increasing number of biomolecular structures deposited in public repositories. This undoubtedly shifts the bottleneck of structural biology research from obtaining high-quality structures to data analysis and interpretation. The recently available glasses-free autostereoscopic laptop offers an unprecedented opportunity to visualize and study 3D structures using a much more affordable, and for the first time, portable device. Together with a gamepad re-programmed for 3D structure controlling, we describe how the gaming technologies can deliver the output 3D images for high-quality viewing, comparable to that of a passive stereoscopic system, and can give the user more control and flexibility than the conventional controlling setup using only a mouse and a keyboard.

  5. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose some factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics, and scene movement characteristics. They play important roles in the viewer's visual perception: if there are many objects moving with a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (Mean Square Error) of different blocks is computed within each frame and between frames of the 3D stereoscopic video. The depth frame is divided into a number of blocks with overlapping, shared pixels (half a block) in the horizontal and vertical directions, which avoids ignoring edge information of objects in the image. The distribution of all these data is then characterized by the kurtosis over regions the human eye is likely to gaze at, and weight values are obtained by normalizing the kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it is applied between the current and previous frames, the temporal and scene movement variations are obtained. The three factors above are linearly combined to give an objective assessment value for 3D videos directly, with the coefficients of the three factors estimated by linear regression. Finally, experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
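
    The block layout described above, half-block overlap in both directions with a kurtosis summary of the per-block errors, can be sketched as follows. The block size, frame data, and the final normalization are illustrative assumptions, not the paper's exact values:

```python
import numpy as np
from scipy.stats import kurtosis

def block_mse(depth_a, depth_b, block=16):
    """Per-block MSE between two depth frames, stepping by half a block so
    adjacent blocks share pixels in both directions (parameters illustrative)."""
    step = block // 2
    h, w = depth_a.shape
    vals = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            d = depth_a[r:r+block, c:c+block] - depth_b[r:r+block, c:c+block]
            vals.append(np.mean(d ** 2))
    return np.array(vals)

rng = np.random.default_rng(1)
prev = rng.random((64, 64))                  # stand-in depth frames
curr = prev + rng.normal(0.0, 0.05, (64, 64))
mses = block_mse(prev, curr)
k = kurtosis(mses)                 # peakedness of the block-error distribution
weight = k / (abs(k) + 1.0)        # one possible normalization (assumed)
```

    Running `block_mse` on a single depth frame against a reference gives the spatial term; running it between consecutive frames gives the temporal and scene-movement terms.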

  6. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  7. Visual odometry based on structural matching of local invariant features using stereo camera sensor.

    PubMed

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph that emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields.
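
    The matching step can be pictured on a toy problem: candidate matches become graph vertices, mutually consistent pairs (here, pairs that preserve inter-point distance under a rigid motion) become edges, and the accepted matches form the maximum clique. The data, threshold, and brute-force search below are illustrative; the paper uses a fast dedicated algorithm:

```python
import itertools
import numpy as np

# Candidate feature matches: point i in frame A paired with point i in frame B.
# Matches 0-2 obey a rigid translation; match 3 is a synthetic outlier.
a = np.array([[0, 0], [1, 0], [0, 1], [5, 5]], float)
b = np.array([[2, 3], [3, 3], [2, 4], [0, 0]], float)

n = len(a)
# Relative constraint: two matches are consistent if they preserve the
# distance between their points (threshold illustrative).
consistent = np.zeros((n, n), bool)
for i, j in itertools.combinations(range(n), 2):
    if abs(np.linalg.norm(a[i] - a[j]) - np.linalg.norm(b[i] - b[j])) < 0.1:
        consistent[i, j] = consistent[j, i] = True

# Brute-force maximum clique: fine for toy sizes, exponential in general.
best = ()
for r in range(n, 0, -1):
    for sub in itertools.combinations(range(n), r):
        if all(consistent[i, j] for i, j in itertools.combinations(sub, 2)):
            best = sub
            break
    if best:
        break
```

    Here the clique search keeps the three translation-consistent matches and rejects the outlier, which is exactly the filtering role the clique plays in the paper's pipeline.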

  8. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph that emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  9. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, disclose important changes in the relationships between technology and biology, reproductive health and political debates, and biotechnology and culture.

  10. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  11. A generalized 3D framework for visualization of planetary data.

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.

    2016-12-01

    As the volume and variety of data returned from planetary exploration missions continues to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a Javascript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time "QuickLook", interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.

  12. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  13. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
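
    Keyframe selection, one of the simple strategies evaluated above, can be sketched as keeping a frame only when the motion since the last keyframe exceeds a threshold. The pose representation and thresholds below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def select_keyframes(poses, t_thresh=0.5, r_thresh=np.deg2rad(10.0)):
    """Keep frame i as a keyframe when translation or yaw change since the
    last keyframe exceeds a threshold. Each pose is ((x, y, z), yaw)."""
    keyframes = [0]
    for i in range(1, len(poses)):
        t, yaw = poses[i]
        t0, yaw0 = poses[keyframes[-1]]
        if (np.linalg.norm(np.subtract(t, t0)) > t_thresh
                or abs(yaw - yaw0) > r_thresh):
            keyframes.append(i)
    return keyframes

# A trajectory moving 0.2 m per frame along x with no rotation:
poses = [((0.2 * i, 0.0, 0.0), 0.0) for i in range(10)]
kfs = select_keyframes(poses)
```

    Dropping the intermediate frames reduces the number of poses entering SBA or PGO while preserving the overall trajectory shape.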

  14. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location with the current view to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
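
    With depth added to each keypoint, a rough homing vector can be read directly from matched 3D points. The sketch below uses the mean displacement under a pure translation, which ignores the rotation a full method must also estimate; all values are illustrative, and this is not the paper's algorithm:

```python
import numpy as np

def homing_vector(goal_pts, curr_pts):
    """Mean displacement of matched 3D keypoints: the translation that would
    move the current view onto the goal view (rotation deliberately ignored)."""
    return np.mean(np.asarray(goal_pts) - np.asarray(curr_pts), axis=0)

# Three matched keypoints seen from the goal pose and from the current pose:
goal = np.array([[1.0, 0.0, 4.0], [0.0, 1.0, 5.0], [-1.0, 0.0, 6.0]])
curr = goal - np.array([0.5, 0.0, 2.0])   # robot displaced from the goal pose
t = homing_vector(goal, curr)             # direction back to the goal
distance = np.linalg.norm(t)              # how far the robot must travel
```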

  15. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
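
    Geometrically, converging both cameras on an object reduces to a single toe-in formula. The sketch below is plain stereo geometry with illustrative numbers; it is not the FPGA scene-content algorithm described above:

```python
import math

def convergence_angle(baseline_m, distance_m):
    """Total toe-in angle (radians) that points both cameras of a stereo pair
    at an object on the centerline at the given distance."""
    return 2.0 * math.atan2(baseline_m / 2.0, distance_m)

# 65 mm baseline, object 2 m away:
angle_deg = math.degrees(convergence_angle(0.065, 2.0))
```

    An auto-convergence system estimates the object distance from scene content (e.g., disparity statistics) and then drives the cameras toward this angle.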

  16. 3D Evolution of a Filament Disappearance Event Observed by STEREO

    NASA Astrophysics Data System (ADS)

    Gosain, S.; Schmieder, B.; Venkatakrishnan, P.; Chandra, R.; Artzner, G.

    2009-10-01

    A filament disappearance event was observed on 22 May 2008 during our recent campaign, JOP 178. The filament, situated in the Southern Hemisphere, showed sinistral chirality, consistent with the hemispheric rule. The event was well observed by several observatories, in particular by THEMIS. One day before the disappearance, Hα observations showed up- and down-flows at adjacent locations along the filament, suggesting plasma motions along a twisted flux rope. THEMIS and GONG observations show shearing photospheric motions leading to magnetic flux cancellation around the barbs. The STEREO A and B spacecraft, with a separation angle of 52.4°, showed quite different views of this untwisting flux rope in He II 304 Å images. Here, we reconstruct the three-dimensional geometry of the filament during its eruption phase using STEREO EUV He II 304 Å images and find that the filament was highly inclined to the solar normal. The He II 304 Å movies show individual threads that oscillate and rise to an altitude of about 120 Mm with apparent velocities of about 100 km s-1 during the rapid evolution phase. Finally, as the flux rope expands into the corona, the filament disappears by becoming optically thin to undetectable levels. No CME was detected by STEREO; only a faint CME was recorded by LASCO at the beginning of the disappearance phase at 02:00 UT, which could be due to a partial filament eruption. Furthermore, STEREO Fe XII 195 Å images showed bright loops beneath the filament prior to the disappearance phase, suggesting magnetic reconnection below the flux rope.

  17. Virtual reality and 3D visualizations in heart surgery education.

    PubMed

    Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver

    2002-01-01

    Computer-assisted teaching plays an increasing role in surgical education. This paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes in aortocoronary bypass grafting, and their prototypical implementation in a database-driven, internet-based educational system for heart surgery. A multimedia storyboard has been written and digital video has been encoded. Understanding of these videos was not always satisfactory; therefore, additional 3D and VR visualizations have been modelled as VRML, QuickTime, QuickTime Virtual Reality, and MPEG-1 applications. An authoring process, integrating and orchestrating the different multimedia components into educational units, has been started. A virtual model of the heart has been designed. It is highly interactive: the user can rotate it, move it, zoom in for details, or even fly through it. It can be explored during the cardiac cycle, and a transparency mode demonstrates the coronary arteries, the movement of the heart valves, and the simultaneous blood flow. Myocardial ischemia and the effect of an IMA graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass grafts can be added interactively. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed. Different visualizations have been prototypically implemented in a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web and have the power to describe surgical anatomy and the principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.

  18. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

    Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods in tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike with a manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were subsequently examined, and a region of interest (ROI) was selected in each frame where a tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent, color-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and for categorizing breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.

  19. Vegetation Height Estimation near Power Transmission Poles via Satellite Stereo Images Using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. The absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting electric power supply to industries, businesses, and daily life. To avoid blackouts, it is therefore mandatory to monitor the vegetation and trees near power transmission lines. Unfortunately, existing approaches are time-consuming and expensive. In this paper, we propose a novel approach to monitoring the vegetation and trees near or under power transmission poles using satellite stereo images acquired by the Pleiades satellites. The 3D depth of the vegetation near the power transmission lines has been measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles in this 100 km² area. We compare the results obtained from the Pleiades satellite stereo images using dynamic programming (DP) and Graph-Cut algorithms, thereby comparing the depth-estimation algorithms on the satellite imagery. Our results show that the Graph-Cut algorithm performs better than dynamic programming in terms of accuracy and speed.
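    Whichever matching algorithm produces the disparities, the depth itself follows the standard stereo triangulation relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). The sketch below illustrates only this generic relation, not the paper's Pleiades processing chain; the numbers are made up for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d.
    Returns None where disparity is non-positive (no valid match)."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px

# A larger disparity means a nearer point (illustrative values):
near = depth_from_disparity(40.0, 1000.0, 0.5)   # 12.5 m
far = depth_from_disparity(4.0, 1000.0, 0.5)     # 125.0 m
```

Subtracting ground depth from canopy depth in such a map is what turns per-pixel depth into a vegetation-height estimate.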

  20. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing, and in virtual microscopy it is now common to work with individual images that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree-based multiresolution image-stack interactive visualization using a texel-projection-based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ using OpenGL. It is freely available at http://brainmaps.org.
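    The texel-projection idea (pick the resolution tier whose texels project to roughly one screen pixel) can be caricatured in a few lines. The function, its parameters, and the log2 rule below are illustrative assumptions, not StackVis code:

```python
import math

def choose_tier(viewer_distance, num_tiers):
    """Pick a quad-tree tier so a projected texel stays near one screen
    pixel. Each coarser tier doubles the texel size, while a texel's
    on-screen size shrinks roughly inversely with viewing distance, so
    the matching tier index grows as log2(distance). Sketch only."""
    tier = int(round(math.log2(max(viewer_distance, 1.0))))
    return max(0, min(num_tiers - 1, tier))
```

Tiles far from the eye are thus fetched from coarse tiers and nearby tiles from fine tiers, which is what keeps all visible tiles approximately the same apparent size.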

  1. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access, and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards such as WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle petabyte/exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Science related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we present our work on a web-based 3D visualization and interaction client for Earth Science data using only technology found in standard web browsers, without requiring the user to install plugins or add-ons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware capabilities, such as smartphones (e.g., iOS, Android) and different desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can now be realized, and we believe it will become more and more common to use this fast, lightweight, and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we also point out some of the limitations we encountered using current web technologies.

  2. Using 3D Interactive Visualizations In Teacher Workshops

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Cooper, I.; de Groot, R.; Shindle, W.; Mellors, R.; Benthien, M.

    2004-12-01

    Extending Earth Science learning activities from 2D to 3D was central to this year's second annual Teacher Education Workshop, held at the Scripps Institution of Oceanography's Visualization Center (SIO VizCenter; http://siovizcenter.ucsd.edu/). Educational specialists and researchers from several institutions led this collaborative workshop, which was supported by the Southern California Earthquake Center (SCEC; http://www.scec.org/education), the U.S. Geological Survey (USGS), the SIO VizCenter, San Diego State University (SDSU), and the Incorporated Research Institutions for Seismology (IRIS). The workshop was the latest in a series of teacher workshops run by SCEC and the USGS with a focus on earthquakes and seismic hazard. A particular emphasis of the 2004 workshop was the use of sophisticated computer visualizations that easily illustrate geospatial relationships. These visualizations were displayed on a large, wall-sized curved screen, which allowed the workshop participants to be literally immersed in the images being discussed. In this way, the teachers explored current geoscience datasets in a novel and interactive fashion, which increased their understanding of basic concepts relevant to the national science education standards and alleviated some of their misconceptions. For example, earthquake hypocenter data were viewed in interactive 3D, and the teachers immediately understood that: (1) the faults outlined by the earthquake locations are 3D planes, not 2D lines; (2) the earthquakes map out plate tectonic boundaries, where the 3D structure of some boundaries is more complex than others; (3) the deepest earthquakes occur in subduction zones, whereas transform and divergent plate boundaries tend to have shallower quakes. A major advantage is that these concepts are immediately visible in 3D and do not require elaborate explanations, as is often necessary with traditional 2D maps. This enhances the teachers' understanding.

  3. 3D-printer visualization of neuron models

    PubMed Central

    McDougal, Robert A.; Shepherd, Gordon M.

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases. PMID:26175684

  4. 3D-printer visualization of neuron models.

    PubMed

    McDougal, Robert A; Shepherd, Gordon M

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.

  5. Advances in 3D visualization of air quality data

    NASA Astrophysics Data System (ADS)

    San José, R.; Pérez, J. L.; González, R. M.

    2012-10-01

    Air quality models produce a considerable amount of data, and the raw data can be hard to conceptualize, particularly when the data sets reach terabytes in size; to understand atmospheric processes and the consequences of air pollution, it is therefore necessary to analyse the results of air pollution simulations visually. The development of the visualization is shaped by the requirements of the different groups of users. We show different possibilities for representing 3D atmospheric data and geographic data. We present several examples developed with the IDV software, a generic tool that can be used directly with the simulation results. The remaining solutions are specific applications developed by the authors that integrate different tools and technologies. For the buildings, it was necessary to create a 3D model from the building data using the COLLADA standard format. In the Google Earth approach, we use the Ferret software for the atmospheric part. In the gvSIG-3D approach, we have used the different geometric figures available for atmospheric visualization: "QuadPoints", "Polylines", "Spheres", and isosurfaces. The last is also displayed following the VRML standard.

  6. Comparative visual analysis of 3D urban wind simulations

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Salim, Mohamed; Grawe, David; Leitl, Bernd; Böttinger, Michael; Schlünzen, Heinke

    2016-04-01

    Climate simulations are conducted in large numbers for a variety of different applications. Many of these simulations focus on global developments and study the Earth's climate system using a coupled atmosphere-ocean model. Other simulations are performed on much smaller, regional scales to study very fine-grained climatic effects. These microscale climate simulations pose similar, yet also different, challenges for the visualization and analysis of the simulation data. Modern interactive visualization and data analysis techniques are very powerful tools to assist the researcher in answering and communicating complex research questions. This presentation discusses comparative visualization for several different wind simulations, which were created using the microscale climate model MITRAS. The simulations differ in wind direction and speed, but are all centered on the same simulation domain: an area of Hamburg-Wilhelmsburg that hosted the IGA/IBA exhibition in 2013. The experiments contain a scenario case to analyze the effects of single buildings, as well as to examine the impact of the Coriolis force within the simulation. The scenario case is additionally compared with real measurements from a wind tunnel experiment to ascertain the accuracy of the simulation and the model itself. We also compare different approaches to tree modeling and evaluate the stability of the model. In this presentation, we describe not only our workflow to efficiently and effectively visualize microscale climate simulation data using common 3D visualization and data analysis techniques, but also discuss how to compare variations of a simulation and how to highlight the subtle differences between them. For the visualizations we use a range of different 3D tools that feature techniques for statistical data analysis, data selection, and linking and brushing.

  7. A wellness platform for stereoscopic 3D video systems using EEG-based visual discomfort evaluation technology.

    PubMed

    Kang, Min-Koo; Cho, Hohyun; Park, Han-Mu; Jun, Sung Chan; Yoon, Kuk-Jin

    2017-07-01

    Recent advances in three-dimensional (3D) video technology have extended the range of our experience while providing various 3D applications to our everyday life. Nevertheless, the so-called visual discomfort (VD) problem inevitably degrades the quality of experience in stereoscopic 3D (S3D) displays. Meanwhile, electroencephalography (EEG) has been regarded as one of the most promising brain imaging modalities in the field of cognitive neuroscience. In an effort to facilitate comfort with S3D displays, we propose a new wellness platform using EEG. We first reveal features in EEG signals that are applicable to practical S3D video systems as an index for VD perception. We then develop a framework that can automatically determine severe perception of VD based on the EEG features during S3D video viewing by capitalizing on machine-learning-based brain-computer interface technology. The proposed platform can cooperate with advanced S3D video systems whose stereo baseline is adjustable. Thus, the optimal S3D content can be reconstructed according to a viewer's sensation of VD. Applications of the proposed platform to various S3D industries are suggested, and further technical challenges are discussed for follow-up research.
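    As a toy illustration of the kind of spectral feature such a pipeline might start from (purely an assumption for illustration: the abstract does not specify the paper's EEG features, band, or classifier), one can compute the power of an EEG segment in a frequency band and apply a threshold rule:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared magnitude of the DFT bins whose frequencies fall in
    [f_lo, f_hi] Hz. A naive O(N^2) DFT keeps the sketch dependency-free."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += abs(coef) ** 2 / n ** 2
            count += 1
    return power / count if count else 0.0

def flag_discomfort(signal, fs, threshold, band=(8.0, 13.0)):
    """Illustrative rule: flag a segment when band power exceeds a fixed
    threshold. A real BCI would use a trained classifier instead."""
    return band_power(signal, fs, *band) > threshold
```

A 10 Hz component sampled at 100 Hz registers strongly in the 8-13 Hz band, while a 30 Hz component does not; a trained classifier over many such features would replace the fixed threshold.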

  8. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

    Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data, we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views, along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  9. Structural Stereo Matching Of Laplacian-Of-Gaussian Contour Segments For 3D Perception

    NASA Astrophysics Data System (ADS)

    Boyer, K. L.; Sotak, G. E.

    1989-03-01

    We solve the stereo correspondence problem using Laplacian-of-Gaussian (LoG) zero-crossing contours as a source of primitives for structural stereopsis, as opposed to traditional point-based algorithms. For each image in the stereo pair, we apply the LoG operator, extract and link zero-crossing points, filter and segment the contours into meaningful primitives, and compute a parametric structural description over the resulting primitive set. We then apply a variant of the inexact structural matching technique of Boyer and Kak [1] to recover the optimal interprimitive mapping (correspondence) function. Since an extended image feature conveys more information than a single point, its spatial and photometric behavior may be exploited to advantage; there are also fewer features to match, resulting in a smaller combinatorial problem. The structural approach allows greater use of spatial relational constraints, which allows us to eliminate (or reduce) the coarse-to-fine tracking of most point-based algorithms. Solving the correspondence problem at this level requires only an approximate probabilistic characterization of the image-to-image structural distortion and does not require detailed knowledge of the epipolar geometry.

  10. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, and video games. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material into geoscience courses in order to support and improve the teaching-learning process, especially for topics that are well known to be difficult for students. As part of the project, professors and students are trained in visualization techniques; their data are then adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric, and seismological data, as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions via videoconference with other universities and researchers.

  11. Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent

    2013-03-01

    Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of those areas. In the context of media entertainment, the viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow those used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results bring more insight into the deployment of visual attention in stereoscopic 3D content viewing, and provide further understanding of the differences in visual attention behavior between 2D and 3D. They show that a traditional 2D scene-compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.

  12. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Abstract Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development through the addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063

  13. The zone of comfort: Predicting visual discomfort with stereo displays.

    PubMed

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M; Banks, Martin S

    2011-07-21

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence-accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence-accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema.
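    The dioptric conflict the authors manipulate is simply the difference between the vergence distance (set by the stereo content) and the accommodation distance (set by the screen), each expressed in diopters (1/m). A minimal sketch, with an illustrative sign convention not taken from the paper:

```python
def va_conflict_diopters(screen_distance_m, content_distance_m):
    """Vergence-accommodation conflict in diopters. Accommodation is
    driven by the screen, vergence by the stereo content; the conflict
    is the difference of their reciprocal distances. Sign convention
    (illustrative): positive = content in front of the screen."""
    return 1.0 / content_distance_m - 1.0 / screen_distance_m
```

Because the relation is reciprocal in distance, a conflict of a given dioptric size corresponds to a much larger metric offset at far viewing distances than at near ones, which is why the comfort zone must be stated in diopters rather than meters.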

  14. The zone of comfort: Predicting visual discomfort with stereo displays

    PubMed Central

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252

  15. Interactive 3D visualization speeds well, reservoir planning

    SciTech Connect

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  16. 3D diffraction tomography for visualization of contrast media

    NASA Astrophysics Data System (ADS)

    Pai, Vinay M.; Stein, Ashley; Kozlowski, Megan; George, Ashvin; Kopace, Rael; Bennett, Eric; Auxier, Julie A.; Wen, Han

    2011-03-01

    In x-ray CT, the ability to selectively isolate a contrast agent signal from the surrounding soft tissue and bone can greatly enhance contrast visibility and enable quantification of contrast concentration. We present here a 3D diffraction tomography implementation for selectively retaining volumetric diffraction signal from contrast agent particles that are within a banded size range while suppressing the background signal from soft tissue and bone. For this purpose, we developed a CT implementation of a single-shot x-ray diffraction imaging technique utilizing gratings. This technique yields both diffraction and absorption images from a single grating-modulated projection image through analysis in the spatial frequency domain. A solution of iron oxide nano-particles, having very different x-ray diffraction properties from tissue, was injected into ex vivo chicken wing and in vivo rat specimens respectively and imaged in a 3D diffraction CT setup. Following parallel beam reconstruction, it is noted that while the soft tissue, bone and contrast media are observed in the absorption volume reconstruction, only the contrast media is observed in the diffraction volume reconstruction. This 3D diffraction tomographic reconstruction permits the visualization and quantification of the contrast agent isolated from the soft tissue and bone background.

  17. (Almost) Featureless Stereo: Calibration and Dense 3D Reconstruction Using Whole Image Operations

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Morris, R. D.; Maluf, D. A.; Cheeseman, P.

    2001-01-01

    The conventional approach to shape from stereo is via feature extraction and correspondences. This results in estimates of the camera parameters and a typically sparse estimate of the surface. Given a set of calibrated images, a dense surface reconstruction is possible by minimizing the error between the observed image and the image rendered from the estimated surface, with respect to the surface model parameters. Given an uncalibrated image and an estimated surface, the camera parameters can be estimated by minimizing the error between the observed and rendered images as a function of the camera parameters. We use a very small set of matched features to provide camera parameter estimates for the initial dense surface estimate. We then re-estimate the camera parameters as described above, and then re-estimate the surface. This process is iterated. Whilst it cannot be proven to converge, we have found that around three iterations results in excellent surface and camera parameter estimates.
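
    The alternating refinement described above can be sketched as coordinate descent on the photometric error between the observed and re-rendered images. Everything below is a schematic stand-in: the `render` callable, the finite-difference gradients, and the fixed step size are our assumptions, not the paper's actual minimizer.

```python
import numpy as np

def photometric_error(observed, rendered):
    """Sum-of-squares intensity error between observed and rendered images."""
    return float(np.sum((observed - rendered) ** 2))

def numeric_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at parameter vector x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        d = np.zeros_like(x, dtype=float)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

def refine(observed, render, surface, camera, n_iters=3, step=1e-2):
    """Alternately refine surface and camera parameters by gradient descent
    on the re-rendering error, mirroring the iteration in the abstract."""
    for _ in range(n_iters):
        # Hold the camera fixed, nudge the surface parameters downhill.
        grad_s = numeric_grad(
            lambda s: photometric_error(observed, render(s, camera)), surface)
        surface = surface - step * grad_s
        # Hold the surface fixed, nudge the camera parameters downhill.
        grad_c = numeric_grad(
            lambda c: photometric_error(observed, render(surface, c)), camera)
        camera = camera - step * grad_c
    return surface, camera
```

    In the real system the surface parameter vector is large and the renderer is a full image-formation model, so a second-order or conjugate-gradient minimizer would replace the naive updates here.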

  18. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicon implants. However, due to the vast quantity of images and the subtle differences between MR sequences, there is a need for reliable computer-aided diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive and determined using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. The breast and glandular tissue rendering, slicing, and animation were displayed.
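
    The sliding-window idea behind spatially adaptive thresholding can be illustrated in a few lines of NumPy. This is only a generic local-mean threshold, not the authors' exact scheme (their window size and threshold rule are not given in the abstract), and the brute-force double loop is written for clarity rather than speed:

```python
import numpy as np

def adaptive_threshold(image, window=15, offset=0.0):
    """Segment an image with a spatially adaptive threshold: each pixel is
    compared against the mean of its local window, so the threshold varies
    across the image instead of being a single global value."""
    h, w = image.shape
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local = padded[i:i + window, j:j + window]
            mask[i, j] = image[i, j] > local.mean() + offset
    return mask
```

    In practice the per-window means would be computed with an integral image or a uniform filter so the cost does not grow with the window size.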

  19. 3D Building Evacuation Route Modelling and Visualization

    NASA Astrophysics Data System (ADS)

    Chan, W.; Armenakis, C.

    2014-11-01

    The most common building evacuation approach currently applied is to have evacuation routes planned prior to emergency events. These routes are usually the shortest and most practical path from each building room to the closest exit. The problem with this approach is that it is not adaptive: it is not responsively configurable relative to the type, intensity, or location of the emergency risk. Moreover, it does not provide any information to the affected persons or to the emergency responders, and it does not allow for the review of simulated hazard scenarios and alternative evacuation routes. In this paper we address two main tasks. The first is the modelling of the spatial risk caused by a hazardous event, leading to the choice of the optimal evacuation route from a set of options. The second is to generate a 3D visual representation of the model output. A multicriteria decision making (MCDM) approach is used to model the risk, aiming at finding the optimal evacuation route. This is achieved by using the analytical hierarchy process (AHP) on the criteria describing the different alternative evacuation routes. The best route is then chosen as the alternative with the least cost. The 3D visual representation of the model displays the building, the surrounding environment, the evacuee's location, the hazard location, the risk areas, and the optimal evacuation pathway to the target safety location. The work has been performed using ESRI's ArcGIS. Using the developed models, the user can input the location of the hazard and the location of the evacuee. The system then determines the optimum evacuation route and displays it in 3D.
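
    The AHP step can be sketched compactly: criterion weights come from the principal eigenvector of a pairwise-comparison matrix, and the chosen route is the one with the least weighted cost. The criteria and numbers below are invented for illustration; the paper's actual criteria are not listed in the abstract.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise-comparison matrix
    via its principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

def best_route(route_costs, weights):
    """Pick the route (row) with the lowest weighted cost across criteria."""
    scores = np.asarray(route_costs, dtype=float) @ weights
    return int(np.argmin(scores)), scores

# Hypothetical criteria: distance, proximity to hazard, congestion.
pairwise = [[1,   3,   5],
            [1/3, 1,   3],
            [1/5, 1/3, 1]]
w = ahp_weights(pairwise)
idx, scores = best_route([[100, 0.2,  0.5],   # route A
                          [140, 0.05, 0.3]],  # route B
                         w)
```

    A full AHP implementation would also compute the consistency ratio of the comparison matrix to flag contradictory judgments.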

  20. Fluorescent stereo microscopy for 3D surface profilometry and deformation mapping.

    PubMed

    Hu, Zhenxing; Luo, Huiyang; Du, Yingjie; Lu, Hongbing

    2013-05-20

    Recently, mechanobiology has received increased attention. For the investigation of biofilms and cellular tissue, measurements of surface topography and deformation in real time are a prerequisite for understanding the growth mechanisms. In this paper, a novel three-dimensional (3D) fluorescent microscopic method for surface profilometry and deformation measurements is developed. In this technique a pair of cameras is connected to a binocular fluorescent microscope to acquire micrographs from two different viewing angles of a sample surface doped or sprayed with fluorescent microparticles. A digital image correlation technique is used to search for matching points in the paired fluorescence micrographs. After calibration of the system, the 3D surface topography is reconstructed from the pair of planar images. When the deformed surface topography is compared with the undeformed topography, using the fluorescent microparticles to track the movement of individual material points, the full-field deformation of the surface is determined. The technique is demonstrated on topography measurement of a biofilm, and also on surface deformation measurement of the biofilm during growth. The use of 3D imaging of the fluorescent microparticles eliminates the formation of bright spots in an image caused by specular reflections. The technique is appropriate for non-contact, full-field, and real-time 3D surface profilometry and deformation measurements of materials and structures at the microscale.
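
    The core of the digital image correlation step is template matching between the two micrographs. A bare-bones integer-pixel version might look like the following; real DIC adds subpixel interpolation and deformation shape functions, which we omit here.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_point(left, right, pt, half=5, search=10):
    """Find the point in `right` that matches the patch of `left` around
    `pt` by exhaustively maximizing NCC over a square search window."""
    r, c = pt
    tmpl = left[r - half:r + half + 1, c - half:c + half + 1]
    best, best_rc = -2.0, pt
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = right[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if cand.shape != tmpl.shape:
                continue  # candidate window falls off the image edge
            s = ncc(tmpl, cand)
            if s > best:
                best, best_rc = s, (rr, cc)
    return best_rc, best
```

    Matching many such points across the calibrated stereo pair, then triangulating, yields the reconstructed 3D topography described in the abstract.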

  1. SERVIR Viz: A 3D Visualization Tool for Mesoamerica.

    NASA Astrophysics Data System (ADS)

    Mercurio, M.; Coughlin, J.; Deneau, D.

    2007-05-01

    SERVIR Viz is a customized version of NASA's WorldWind, which is a freely distributed, open-source, web-enabled, 3D earth exploration tool. IAGT developed SERVIR Viz in a joint effort with SERVIR research partners to create a visualization framework for geospatial data resources available to the SERVIR project. SERVIR Viz is customized by providing users with newly developed custom tools, enhancements to existing open-source tools, and a specialized toolbar that allows shortcut access to existing tools. Another key feature is the ability to visualize remotely hosted framework GIS data layers, maps, real-time satellite images, and other SERVIR products relevant to the Mesoamerica region using the NASA WorldWind visualization engine and base mapping layers. The main users of SERVIR Viz are the seven countries of Mesoamerica, SERVIR participants, educators, scientists, decision-makers, and the general public. SERVIR Viz enhances the SERVIR project infrastructure by providing access to NASA GEOSS data products and internet-served Mesoamerica-centric GIS data products within a tool developed specifically to promote the use of GIS and visualization technologies in the decision support goals of the SERVIR project. In addition, SERVIR Viz can be toggled between English and Spanish to support a wide cross-section of users, and development continues to support new data and user requirements. This presentation will include a live demonstration of SERVIR Viz.

  2. JHelioviewer: Visualizing the Sun and Heliosphere in 3D

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Spoerri, S.; Pagel, S.

    2012-12-01

    The next generation of heliophysics missions, Solar Orbiter and Solar Probe Plus, will focus on exploring the linkage between the Sun and the heliosphere. These new missions will collect unique data that will allow us to study, e.g., the coupling of macroscopic physical processes to those on kinetic scales, the generation of solar energetic particles and their propagation into the heliosphere, and the origin and acceleration of solar wind plasma. Already today, NASA's Solar Dynamics Observatory returns 1.4 TB/day of high-resolution solar images, magnetograms and EUV irradiance data. Within a few years, the scientific community will thus have access to petabytes of multidimensional remote-sensing and complex in-situ observations from different vantage points, complemented by petabytes of simulation data. Answering overarching science questions like "How do solar transients drive heliospheric variability and space weather?" will only be possible if the community has the necessary tools at hand. As of today, there is an obvious lack of capability to both visualize these data and assimilate them into sophisticated models to advance our knowledge. A key piece needed to bridge the gap between observables, derived quantities like vector fields, and model output is a tool to routinely and intuitively visualize large heterogeneous, multidimensional, time-dependent data sets. While a few tools exist to visualize, e.g., 3D data sets for a small number of time steps, the space sciences community is lacking the equipment to do this (i) on a routine basis, (ii) for complex multidimensional data sets from various instruments and vantage points and (iii) in an extensible and modular way that is open for future improvements and interdisciplinary usage. In this contribution, we will present recent progress in visualizing the Sun and its magnetic field in 3D using the open-source JHelioviewer framework, which is part of the ESA/NASA Helioviewer Project. Among other features

  3. Planetary subsurface investigation by 3D visualization model

    NASA Astrophysics Data System (ADS)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represent one of the main aspects of planetary observation (i.e. the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep-space missions. These data are generally represented as 2D radargrams in the perspective of the spacecraft track and the z axis (perpendicular to the surface), but without direct correlation to other data acquisitions or knowledge of the planet. In many cases there is a wealth of data from other sensors of the same mission, or of other missions, with high continuity in time and in space, especially around the scientific sites of interest (i.e. candidate landing areas or sites of particular scientific interest). The 2D perspective is good for analysing single acquisitions and performing detailed analysis on the returned echo, but it is of little use for comparing the very large datasets now available for many planets and moons of the solar system. A better approach is to base the analysis on a 3D visualization model generated from the entire stack of data. First of all, this approach allows one to navigate the subsurface in all directions and to analyse different sections and slices, or to navigate the iso-surfaces with respect to a value (or interval). The latter makes it possible to isolate one or more iso-surfaces and remove, in the visualization, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link surface data, such as imaging, to the subsurface data by geographic position and context.

  4. A novel technique for visualizing high-resolution 3D terrain maps

    NASA Astrophysics Data System (ADS)

    Dammann, John

    2007-02-01

    A new technique is presented for visualizing high-resolution terrain elevation data. It produces realistic images at small scales on the order of the data resolution and works particularly well when natural objects are present. Better visualization at small scales opens up new applications, like site surveillance for security and Google Earth-type local search and exploration tasks that are now done with 2-D maps. The large 3-D maps are a natural fit for high-resolution stereo display. The traditional technique drapes a continuous surface over the regularly spaced elevation values. This technique works well when displaying large areas or in cities with large buildings, but falls apart at small scales or for natural objects like trees. The new technique visualizes the terrain as a set of disjoint square patches. It is combined with an algorithm that identifies smooth areas within the scene. Where the terrain is smooth, such as in grassy areas, roads, parking lots, and rooftops, it warps the patches to create a smooth surface. For trees or shrubs or other areas where objects are under-sampled, however, the patches are left disjoint. This has the disadvantage of leaving gaps in the data, but the human mind is very adept at filling in this missing information. It has the strong advantage of making natural terrain look realistic; trees and bushes look stylized but still natural and are easy to interpret. Also, it does not add artifacts to the map, like filling in blank vertical walls where there are alcoves and other structures or extending bridges and overpasses down to the ground. The new technique is illustrated using very large 1-m resolution 3-D maps from the Rapid Terrain Visualization (RTV) program, and comparisons are made with traditional visualizations using these maps.

  5. Estimating and Correcting Bias in Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Farboud-Sheshdeh, Sara

    Stereo visual odometry (VO) is a common technique for estimating a camera's motion; features are tracked across frames and the pose change is subsequently inferred. This method can play a particularly important role in environments where the global positioning system (GPS) is not available (e.g., Mars rovers). Recently, some authors have noticed a bias in VO position estimates that grows with distance travelled; this can cause the resulting estimate to become highly inaccurate. In this thesis, two effects at play in stereo VO bias are identified: first, the inherent bias in the maximum-likelihood estimation framework, and second, the disparity threshold used to discard far-away and erroneous observations. To estimate the bias, the sigma-point method (with modification) combined with the concept of bootstrap bias estimation is proposed. This novel method achieves similar accuracy to Monte Carlo experiments, but at a fraction of the computational cost. The approach is validated through simulations.
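
    Bootstrap bias estimation, one ingredient of the method above, is easy to sketch in its generic resampling form. This is our illustration of the general concept only; the thesis pairs the idea with a sigma-point approximation instead of raw resampling.

```python
import numpy as np

def bootstrap_bias(estimator, sample, n_boot=200, seed=0):
    """Bootstrap estimate of an estimator's bias: average the estimator
    over resamples (drawn with replacement) and subtract its value on
    the original sample. Subtracting this from the original estimate
    gives a (partially) debiased estimate."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(sample)
    boots = [estimator(rng.choice(sample, size=len(sample), replace=True))
             for _ in range(n_boot)]
    return float(np.mean(boots)) - float(theta_hat)

# Example: the sample maximum underestimates a distribution's upper end,
# so its bootstrap bias estimate comes out negative.
```

    The appeal of the sigma-point variant is exactly what the abstract states: it replaces the many resamples above with a handful of deterministically chosen points, cutting the computational cost.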

  6. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
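
    The crosstalk-correction step can be sketched as a per-pixel linear unmixing. The 2x2 mixing matrix below is a made-up calibration for illustration; the paper's actual correction procedure is not detailed in the abstract.

```python
import numpy as np

def correct_crosstalk(red_meas, blue_meas, M):
    """Undo color crosstalk between the red and blue channel sub-images.

    M is a 2x2 mixing matrix (assumed known from calibration) mapping the
    true [red, blue] intensities to the measured ones at each pixel;
    inverting it recovers the clean per-channel sub-images."""
    Minv = np.linalg.inv(np.asarray(M, dtype=float))
    stacked = np.stack([red_meas, blue_meas], axis=-1)  # shape (..., 2)
    corrected = stacked @ Minv.T                        # per-pixel Minv @ meas
    return corrected[..., 0], corrected[..., 1]
```

    The two corrected sub-images then play the roles of the left and right views in an ordinary stereo-DIC pipeline.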

  7. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  8. Deformation analysis of a sinkhole in Thuringia using multi-temporal multi-view stereo 3D reconstruction data

    NASA Astrophysics Data System (ADS)

    Petschko, Helene; Goetz, Jason; Schmidt, Sven

    2017-04-01

    Sinkholes are a serious threat to life, personal property, and infrastructure in large parts of Thuringia. Over 9000 sinkholes have been documented by the Geological Survey of Thuringia; they are caused by collapsing hollows that formed due to solution processes within the local bedrock material. However, little is known about surface processes and their dynamics at the flanks of a sinkhole once the sinkhole has formed. These processes are of high interest as they might lead to dangerous situations at or within the vicinity of the sinkhole. Our objective was the analysis of these deformations over time in 3D by applying terrestrial photogrammetry with a simple DSLR camera. Within this study, we performed an analysis of deformations within a sinkhole close to Bad Frankenhausen (Thuringia) using terrestrial photogrammetry and multi-view stereo 3D reconstruction to obtain a 3D point cloud describing the morphology of the sinkhole. This was performed for multiple data collection campaigns over a 6-month period. The photos of the sinkhole were taken with a Nikon D3000 SLR camera. For the comparison of the point clouds the Multiscale Model to Model Comparison (M3C2) plugin of the software CloudCompare was used. It applies advanced methods of point-cloud difference calculation that consider the co-registration error between two point clouds when assessing the significance of the calculated difference (given in meters). Three Styrofoam cuboids of known dimensions (16 cm wide/29 cm high/11.5 cm deep) were placed within the sinkhole to test the accuracy of the point cloud difference calculation. The multi-view stereo 3D reconstruction was performed with Agisoft Photoscan. Preliminary analysis indicates that about 26% of the sinkhole showed changes exceeding the co-registration error of the point clouds. The areas of change can mainly be detected on the flanks of the sinkhole and on an earth pillar that formed in the center of the sinkhole. These changes describe
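
    The essence of an M3C2-style comparison is measuring change along locally estimated surface normals rather than by raw nearest-neighbor distance. A heavily simplified single-core-point sketch (our illustration; the real M3C2 algorithm also estimates normals, uses cylindrical neighborhoods, and propagates registration error):

```python
import numpy as np

def m3c2_like_distance(core, normal, cloud1, cloud2, radius=0.1):
    """Simplified M3C2-style distance at one core point: project each
    epoch's neighbors (within `radius` of the core point) onto the local
    normal and difference their mean positions along that normal."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)

    def mean_along_normal(cloud):
        d = np.linalg.norm(cloud - core, axis=1)
        neigh = cloud[d < radius]
        if neigh.size == 0:
            return np.nan  # no support for this core point
        return float(((neigh - core) @ normal).mean())

    return mean_along_normal(cloud2) - mean_along_normal(cloud1)
```

    Averaging over a neighborhood is what makes the measure robust to point-cloud roughness, and comparing the result against the co-registration error (as done in the study) decides whether a change is significant.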

  9. Visualizing 3D Fracture Morphology in Granular Media

    NASA Astrophysics Data System (ADS)

    Dalbe, M. J.; Juanes, R.

    2015-12-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed and control the level of confining stress. We use an index-matching technique to directly visualize the injection of a liquid into a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  10. Visualizing 3D fracture morphology in granular media

    NASA Astrophysics Data System (ADS)

    Dalbe, Marie-Julie; Juanes, Ruben

    2015-11-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed and control the level of confining stress. We use an index-matching technique to directly visualize the injection of a liquid into a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  11. Visualizing 3D velocity fields near contour surfaces

    SciTech Connect

    Max, N.; Crawfis, R.; Grant, C.

    1994-03-01

    Vector field rendering is difficult in 3D because the vector icons overlap and hide each other. We propose four different techniques for visualizing vector fields only near surfaces. The first uses motion-blurred particles in a thickened region around the surface. The second uses a voxel grid to contain integral curves of the vector field. The third uses many antialiased lines through the surface, and the fourth uses hairs sprouting from the surface and then bending in the direction of the vector field. All the methods use the graphics pipeline, allowing real-time rotation and interaction, and the first two methods can animate the texture to move in the flow determined by the velocity field.

  12. Does face recognition rely on encoding of 3-D surface? Examining the role of shape-from-shading and shape-from-stereo.

    PubMed

    Liu, C H; Collin, C A; Chaudhuri, A

    2000-01-01

    It is now well known that processing of shading information in face recognition is susceptible to bottom lighting and contrast reversal, an effect that may be due to a disruption of 3-D shape processing. The question then is whether the disruption can be rectified by other sources of 3-D information, such as shape-from-stereo. We examined this issue by comparing identification performance either with or without stereo information using top-lit and bottom-lit face stimuli in both photographic positive and negative conditions. The results show that none of the shading effects was reduced by the presence of stereo information. This finding supports the notion that shape-from-shading overrides shape-from-stereo in face perception. Although shape-from-stereo did produce some signs of facilitation for face identification, this effect was negligible. Together, our results support the view that 3-D shape processing plays only a minor role in face recognition. Our data are best accounted for by a weighted function of 2-D processing of shading pattern and 3-D processing of shapes, with a much greater weight assigned to 2-D pattern processing.

  13. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  14. A workflow for the 3D visualization of meteorological data

    NASA Astrophysics Data System (ADS)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and large data sets are produced as a result of simulations. The combination of various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the weather research and forecasting (WRF) model for two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) is developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest such as regions of convection or wind turbulence. Then, subsets of the data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether the visualizations are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have

  15. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  16. 3D visualization of numeric planetary data using JMARS

    NASA Astrophysics Data System (ADS)

    Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.

    2013-12-01

    JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data is available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.

  17. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. Using the Horde3D graphics rendering engine on top of the foundation database "submarine pipeline and relative landforms landscape synthesis database", we reconstruct the submarine pipeline and its surrounding seabed terrain in the computer, so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from the monitoring of the submarine pipeline.

  18. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-average dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capabilities are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
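
    The daily KML products described above are, at their simplest, Placemarks containing LineString coordinates. A minimal sketch of building such a document, following the public KML 2.2 schema (the element choices here are assumptions, not the actual OCO product layout):

```python
def orbit_path_kml(name, points):
    """Build a minimal KML document with a LineString for one day's orbit path.

    points: iterable of (longitude, latitude, altitude_m) tuples.
    Tag layout follows the public KML 2.2 schema; the structure of the real
    mission product is not reproduced here.
    """
    coords = " ".join("%.6f,%.6f,%.1f" % (lon, lat, alt) for lon, lat, alt in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        "<Placemark><name>%s</name>"
        "<LineString><altitudeMode>absolute</altitudeMode>"
        "<coordinates>%s</coordinates></LineString>"
        "</Placemark></Document></kml>" % (name, coords)
    )

print(orbit_path_kml("2009-06-01 orbit",
                     [(-120.0, 34.0, 705000.0), (-118.5, 36.0, 705000.0)]))
```

    A file written this way loads directly into Google Earth as a 3D path when the altitude mode is `absolute`.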

  19. Visual Presence: Viewing Geometry Visual Information of UHD S3D Entertainment.

    PubMed

    Oh, Heeseok; Lee, Sanghoon

    2016-05-11

    To maximize the presence experienced by humans, visual content has evolved to achieve a higher visual presence in a series of HD, UHD, 8K UHD, and 8K stereoscopic 3D (8K S3D). Several studies have introduced visual presence delivered from content when viewing UHD S3D from a content analysis perspective. Nevertheless, no clear definition has been presented for visual presence, and only a subjective evaluation has been relied upon. The main reason for this is that there is a limitation to defining visual presence via the use of content information itself. In this paper, we define the visual presence for each viewing environment, and investigate a novel methodology to measure the experienced visual presence when viewing both 2D and 3D via the definition of a new metric termed "VoVI" (volume of visual information) by quantifying the influence of the viewing geometry between the display and viewer. To achieve this goal, the viewing geometry and display parameters for both flat and atypical displays are analyzed in terms of human perception by introducing a novel concept of pixel-wise geometry. Additionally, perceptual weighting through analysis of content information is performed in accordance with monocular and binocular vision characteristics. In the experimental results, it is shown that the constructed model based on the viewing geometry, content and perceptual characteristics has a high correlation of about 84% with subjective evaluations.
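
    A basic viewing-geometry quantity underlying any such pixel-wise analysis is the visual angle subtended by a single pixel at a given viewing distance. A small illustration (the paper's actual VoVI metric is considerably richer than this):

```python
import math

def pixel_visual_angle(pixel_pitch_mm, viewing_distance_mm):
    """Visual angle (in arcminutes) subtended by one pixel at a given viewing
    distance; a basic building block of pixel-wise viewing geometry."""
    rad = 2.0 * math.atan(pixel_pitch_mm / (2.0 * viewing_distance_mm))
    return math.degrees(rad) * 60.0

# a 0.49 mm pixel (roughly a 55-inch UHD panel) viewed from 2.2 m
print(round(pixel_visual_angle(0.49, 2200.0), 3))  # → 0.766
```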

  20. High-Performance Active Liquid Crystalline Shutters for Stereo Computer Graphics and Other 3-D Technologies

    NASA Astrophysics Data System (ADS)

    Sergan, Tatiana; Sergan, Vassili; MacNaughton, Boyd

    2007-03-01

    Stereoscopic computer displays create a 3-D image by alternating two separate images for each of the viewer's eyes. Field-sequential viewing systems supply each eye with the appropriate image by blocking the wrong image for the wrong eye. In our work, we have developed a new mode of operation of a liquid crystal shutter that provides for highly effective blockage of undesired images when the screen is viewed in all viewing directions and eliminates color shifts associated with long turn-off times. The goal was achieved by using a π-cell filled with low-rotational-viscosity and high-birefringence fluid and additional negative birefringence films with splay optic axis distribution. The shutter demonstrates a contrast ratio higher than 800:1 for head-on viewing and 10:1 in the viewing cone of about 45°. The relaxation time of the shutter does not exceed 2 ms and is the same for all three primary colors.

  1. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges of integrating visual information elements into 3D-TV content. This work should help improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information in 3D footage.

  2. A Dynamic Multi-Projection-Contour Approximating Framework for the 3D Reconstruction of Buildings by Super-Generalized Optical Stereo-Pairs.

    PubMed

    Yan, Yiming; Su, Nan; Zhao, Chunhui; Wang, Liguo

    2017-09-19

    In this paper, a novel framework for the 3D reconstruction of buildings is proposed, focusing on remote sensing super-generalized stereo-pairs (SGSPs). 3D reconstruction cannot be performed well using nonstandard stereo pairs, since reliable stereo matching cannot be achieved when the image-pairs are collected from greatly differing views; dense 3D points for building regions are then unobtainable, and further 3D shape reconstruction fails. We define SGSPs as two or more optical images collected from less constrained views but covering the same buildings. It is even more difficult to reconstruct the 3D shape of a building from SGSPs using traditional frameworks. We therefore introduce a dynamic multi-projection-contour approximating (DMPCA) framework for SGSP-based 3D reconstruction. The key idea is an optimization that finds a group of parameters of a simulated 3D model, using a binary feature-image, that minimizes the total difference between the projection-contours of the building in the SGSPs and those of the simulated 3D model. The simulated 3D model defined by these parameters then approximates the actual 3D shape of the building. Parameterized 3D basic-unit-models of typical buildings were designed, and a simulated projection system was established to obtain simulated projection-contours in different views. Moreover, the artificial bee colony algorithm was employed to solve the optimization. With SGSPs collected by satellite and by our unmanned aerial vehicle, the DMPCA framework was verified in a group of experiments, which demonstrated the reliability and advantages of this work.
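
    The artificial bee colony (ABC) algorithm mentioned above is a population-based search. A minimal sketch of its structure, with the paper's projection-contour objective replaced by an arbitrary callable, and the fitness-weighted onlooker phase simplified away:

```python
import random

def abc_minimize(f, bounds, n_food=10, iters=200, limit=20, seed=1):
    """Minimal artificial-bee-colony-style search: greedy local perturbation of
    food sources plus a scout phase that abandons stagnant sources. A toy
    sketch; the real DMPCA objective (projection-contour mismatch) is
    replaced by any callable f(x)."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_point() for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food
    best_i = min(range(n_food), key=lambda i: fit[i])
    best_x, best_f = foods[best_i][:], fit[best_i]
    for _ in range(iters):
        for i in range(n_food):
            # perturb one dimension toward/away from a random peer
            j = rng.randrange(n_food - 1)
            peer = foods[j if j < i else j + 1]
            k = rng.randrange(dim)
            cand = foods[i][:]
            cand[k] += rng.uniform(-1.0, 1.0) * (cand[k] - peer[k])
            lo, hi = bounds[k]
            cand[k] = min(max(cand[k], lo), hi)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
                if fc < best_f:
                    best_x, best_f = cand[:], fc
            else:
                trials[i] += 1
        # scout phase: replace sources that stopped improving
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = rand_point(), 0
                fit[i] = f(foods[i])
    return best_x, best_f

# toy objective: recover (3, -2) by minimizing squared distance
x, fx = abc_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                     [(-10, 10), (-10, 10)])
print(x, fx)
```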

  3. A Dynamic Multi-Projection-Contour Approximating Framework for the 3D Reconstruction of Buildings by Super-Generalized Optical Stereo-Pairs

    PubMed Central

    Yan, Yiming; Su, Nan; Zhao, Chunhui; Wang, Liguo

    2017-01-01

    In this paper, a novel framework for the 3D reconstruction of buildings is proposed, focusing on remote sensing super-generalized stereo-pairs (SGSPs). 3D reconstruction cannot be performed well using nonstandard stereo pairs, since reliable stereo matching cannot be achieved when the image-pairs are collected from greatly differing views; dense 3D points for building regions are then unobtainable, and further 3D shape reconstruction fails. We define SGSPs as two or more optical images collected from less constrained views but covering the same buildings. It is even more difficult to reconstruct the 3D shape of a building from SGSPs using traditional frameworks. We therefore introduce a dynamic multi-projection-contour approximating (DMPCA) framework for SGSP-based 3D reconstruction. The key idea is an optimization that finds a group of parameters of a simulated 3D model, using a binary feature-image, that minimizes the total difference between the projection-contours of the building in the SGSPs and those of the simulated 3D model. The simulated 3D model defined by these parameters then approximates the actual 3D shape of the building. Parameterized 3D basic-unit-models of typical buildings were designed, and a simulated projection system was established to obtain simulated projection-contours in different views. Moreover, the artificial bee colony algorithm was employed to solve the optimization. With SGSPs collected by satellite and by our unmanned aerial vehicle, the DMPCA framework was verified in a group of experiments, which demonstrated the reliability and advantages of this work. PMID:28925947

  4. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper, we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A fit of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized color-coded dynamically over time. The dynamic visualizations computed using the curve-fitting method for estimating the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and supports better understanding during the visual evaluation of cerebral vascular diseases.
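
    The bolus-arrival-time step can be illustrated with a much-simplified stand-in for the curve fitting: search for the integer time-shift that best aligns a voxel's temporal intensity curve with the reference curve. The actual method fits a patient-individual reference curve rather than shifting it, so treat this purely as a sketch of the idea:

```python
def bolus_arrival_shift(curve, reference):
    """Estimate the shift (in frames) that best aligns a voxel's temporal
    intensity curve with a reference curve, by minimizing the mean squared
    difference over integer shifts with sufficient overlap."""
    n = len(reference)
    best_shift, best_err = 0, float("inf")
    for s in range(-(n - 1), n):
        err, count = 0.0, 0
        for t in range(len(curve)):
            if 0 <= t - s < n:
                err += (curve[t] - reference[t - s]) ** 2
                count += 1
        if count >= n // 2:  # ignore shifts with almost no overlap
            err /= count
            if err < best_err:
                best_shift, best_err = s, err
    return best_shift

ref = [0, 0, 1, 4, 9, 4, 1, 0, 0]
late = [0, 0, 0, 0, 1, 4, 9, 4, 1]   # the same bolus arriving 2 frames later
print(bolus_arrival_shift(late, ref))  # → 2
```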

  5. The role of stereo vision in visual-vestibular integration.

    PubMed

    Butler, John S; Campos, Jennifer L; Bülthoff, Heinrich H; Smith, Stuart T

    2011-01-01

    Self-motion through an environment stimulates several sensory systems, including the visual system and the vestibular system. Recent work in heading estimation has demonstrated that visual and vestibular cues are typically integrated in a statistically optimal manner, consistent with Maximum Likelihood Estimation predictions. However, there has been some indication that cue integration may be affected by characteristics of the visual stimulus. Therefore, the current experiment evaluated whether presenting optic flow stimuli stereoscopically, or presenting both eyes with the same image (binocularly), affects combined visual-vestibular heading estimates. Participants performed a two-interval forced-choice task in which they were asked which of two presented movements was more rightward. They were presented with either visual cues alone, vestibular cues alone, or both cues combined. Measures of reliability were obtained for both binocular and stereoscopic conditions. Group-level analyses demonstrated that when stereoscopic information was available there was clear evidence of optimal integration, yet when only binocular information was available weaker evidence of cue integration was observed. Exploratory individual analyses demonstrated that for the stereoscopic condition 90% of participants exhibited optimal integration, whereas for the binocular condition only 60% of participants exhibited results consistent with optimal integration. Overall, these findings suggest that stereo vision may be important for self-motion perception, particularly under combined visual-vestibular conditions.
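
    The Maximum Likelihood Estimation prediction referred to above has a compact closed form: each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone. A small illustration (the numbers are made up):

```python
def mle_combine(est_a, var_a, est_b, var_b):
    """Optimally combine two independent cues (Maximum Likelihood Estimation):
    weights are proportional to inverse variances, and the combined variance
    is smaller than either single-cue variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var

# e.g. visual heading estimate 10 deg (variance 4), vestibular 14 deg (variance 12)
h, v = mle_combine(10.0, 4.0, 14.0, 12.0)
print(round(h, 6), round(v, 6))  # → 11.0 3.0
```

    Note the combined variance (3.0) is below both single-cue variances, which is exactly the signature of optimal integration the experiment tests for.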

  6. Normative 3D opto-electronic stereo-photogrammetric posture and spine morphology data in young healthy adult population.

    PubMed

    D'Amico, Moreno; Kinel, Edyta; Roncoletta, Piero

    2017-01-01

    Observational cross-sectional study. The current study aims to yield normative data: i.e., the physiological standard for 30 selected quantitative 3D parameters that accurately capture and describe a full-skeleton, upright-standing attitude. Specific and exclusive consideration was given to three distinct categories: postural, spine morphology and pelvic parameters. To capture such 3D parameters, the authors selected a non-ionising 3D opto-electronic stereo-photogrammetric approach. This required the identification and measurement of 27 body landmarks, each specifically tagged with a skin marker. As subjects for the measurement of these parameters, a cohort of 124 asymptomatic young adult volunteers was recruited. All parameters were identified and measured within this group. Postural and spine morphology data have been compared between genders. In this regard, only five statistically significant differences were found: pelvis width, pelvis torsion, the "lumbar" lordosis angle value, the lumbar curve length, and the T12-L5 anatomically-bound lumbar angle value. The "thoracic" kyphosis mean angle value was the same in both sexes and, even though it was derived from skin markers placed on spinous processes, it was in perfect agreement with the X-ray-based literature. As regards lordosis, a direct comparison was more difficult because methods proposed in the literature differ as to the number and position of vertebrae under consideration, and their related angle values. However, when the L1 superior-L5 inferior end plate Cobb angle was considered, these results aligned strongly with the existing literature. Asymmetry was a standard postural-spinal feature for both sexes. Each subject presented some degree of leg length discrepancy (LLD), with μ = 9.37 mm. This was associated with four factors: unbalanced posture and/or underfoot loads, spinal curvature in the frontal plane, and pelvis torsion. This led to the additional study of the effect of LLD equalisation influence on

  7. Normative 3D opto-electronic stereo-photogrammetric posture and spine morphology data in young healthy adult population

    PubMed Central

    2017-01-01

    Design: Observational cross-sectional study. The current study aims to yield normative data: i.e., the physiological standard for 30 selected quantitative 3D parameters that accurately capture and describe a full-skeleton, upright-standing attitude. Specific and exclusive consideration was given to three distinct categories: postural, spine morphology and pelvic parameters. To capture such 3D parameters, the authors selected a non-ionising 3D opto-electronic stereo-photogrammetric approach. This required the identification and measurement of 27 body landmarks, each specifically tagged with a skin marker. As subjects for the measurement of these parameters, a cohort of 124 asymptomatic young adult volunteers was recruited. All parameters were identified and measured within this group. Postural and spine morphology data have been compared between genders. In this regard, only five statistically significant differences were found: pelvis width, pelvis torsion, the “lumbar” lordosis angle value, the lumbar curve length, and the T12-L5 anatomically-bound lumbar angle value. The “thoracic” kyphosis mean angle value was the same in both sexes and, even though it was derived from skin markers placed on spinous processes, it was in perfect agreement with the X-ray-based literature. As regards lordosis, a direct comparison was more difficult because methods proposed in the literature differ as to the number and position of vertebrae under consideration, and their related angle values. However, when the L1 superior–L5 inferior end plate Cobb angle was considered, these results aligned strongly with the existing literature. Asymmetry was a standard postural-spinal feature for both sexes. Each subject presented some degree of leg length discrepancy (LLD), with μ = 9.37 mm. This was associated with four factors: unbalanced posture and/or underfoot loads, spinal curvature in the frontal plane, and pelvis torsion. This led to the additional study of the effect of LLD

  8. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    EPA Science Inventory

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  10. Stereo Visualized Animation Content for the Learning of the Mechanism of Engine

    NASA Astrophysics Data System (ADS)

    Sato, Tomoaki

    Stereo-visualized animation content has not spread widely in the domain of education because of the difficulty and high cost of producing it. In this study, we propose an easy, low-cost method of stereo visualization for 3DCG animation. The system is composed of two ordinary liquid crystal projectors, a personal computer and a silver screen. In this paper, using this method, we developed a stereo-visualized 3DCG animation of a 4-stroke-cycle gasoline engine and used it in a machining practice class. Moreover, the learning effect of the content was examined with a questionnaire. The results showed that the stereo-visualized 3DCG animation content was very helpful for understanding the mechanism of the engine.

  11. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  13. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
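
    The defect-area process can be illustrated with a toy computation: given a visual field representation as a grid of measured sensitivities, sum the area of cells falling below a defect threshold. The grid spacing, units, and threshold here are assumptions, not the patent's specification:

```python
def defect_area(field, threshold, cell_deg2=4.0):
    """Estimate the area (deg^2 of visual angle) of a visual field defect:
    cells with sensitivity below `threshold` count as defective, each
    contributing `cell_deg2` of area. Illustrative stand-in only."""
    return sum(cell_deg2 for row in field for v in row if v < threshold)

field = [
    [30, 31, 29],
    [30,  5,  4],   # a small scotoma (low measured sensitivity)
    [28, 30,  6],
]
print(defect_area(field, threshold=15))  # → 12.0
```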

  14. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  16. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERP). An important question, namely whether such effects (visual and mental fatigue) can also be found with passive polarized S3D displays, is answered here. Sixty-eight healthy participants were divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or to a 2D display. The age and fluid intelligence ability of the participants were controlled between the groups. ERP results do not show any significant differences between the S3D and 2D groups that would indicate aftereffects of S3D in terms of visual and mental fatigue. Hence, we conclude that passive polarized S3D display technology may not induce visual and/or mental fatigue that would increase the cognitive load and suppress the ERP components.

  17. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
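
    The update loop described above can be sketched for the position part of the model: predict the stereo image features from the 3D estimate, form the 2D error, and map it back through a generalized inverse (here the Moore-Penrose pseudoinverse applied via the normal equations). The camera model and numbers are illustrative, not those of the original system:

```python
def project(p, cam_x, f=500.0):
    """Pinhole projection of a 3D point p=(x, y, z) for a camera displaced
    cam_x along the baseline; both cameras look down +z. Toy model."""
    x, y, z = p
    return (f * (x - cam_x) / z, f * y / z)

def stereo_observe(p, baseline=0.1):
    ul, vl = project(p, -baseline / 2)
    ur, vr = project(p, +baseline / 2)
    return [ul, vl, ur, vr]

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                k = m[r][c] / m[c][c]
                m[r] = [x - k * y for x, y in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def refine(model, measured, steps=10, h=1e-6):
    """Repeatedly predict 2D features from the 3D model, form the 2D error,
    and correct the model via the pseudoinverse of a numeric 4x3 Jacobian."""
    for _ in range(steps):
        pred = stereo_observe(model)
        err = [m - p for m, p in zip(measured, pred)]
        J = [[0.0] * 3 for _ in range(4)]
        for j in range(3):
            q = list(model)
            q[j] += h
            bumped = stereo_observe(q)
            for i in range(4):
                J[i][j] = (bumped[i] - pred[i]) / h
        # normal equations: delta = (J^T J)^-1 J^T err
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(4)) for j in range(3)] for i in range(3)]
        Jte = [sum(J[k][i] * err[k] for k in range(4)) for i in range(3)]
        model = [mi + di for mi, di in zip(model, solve3(JtJ, Jte))]
    return model

true_pos = [0.2, -0.1, 2.0]
est = refine([0.0, 0.0, 1.5], stereo_observe(true_pos))
print([round(v, 3) for v in est])  # → [0.2, -0.1, 2.0]
```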

  19. Visualization of gravitational potential wells using 3D printing technology

    NASA Astrophysics Data System (ADS)

    Su, Jun; Wang, Weiguo; Lu, Meishu; Xu, Xinran; Yan, Qi Fan; Lu, Jianlong

    2016-12-01

    There have been many studies of the dynamics of a ball rolling on different types of surfaces. Most of these studies have been theoretical, with only a few experimental. We have found that 3D printing offers a novel experimental approach to investigating this topic. In this paper, we use a 3D printer to create four different surfaces and experimentally investigate the dynamics of a ball rolling on these surfaces. Our results are then compared to theory.
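
    A surface commonly 3D-printed for this kind of demonstration is a funnel whose height mimics a gravitational potential, z(r) = -k/r. A small sketch that tabulates such a profile for printing as a surface of revolution (the paper's four surfaces are not specified here, so this shape is an assumption):

```python
def well_profile(k, r_min, r_max, steps):
    """Height profile z(r) = -k / r of a surface of revolution that mimics a
    gravitational potential well; suitable as input to a CAD revolve."""
    dr = (r_max - r_min) / (steps - 1)
    return [(r_min + i * dr, -k / (r_min + i * dr)) for i in range(steps)]

for r, z in well_profile(10.0, 2.0, 10.0, 5):
    print(f"r={r:.1f}  z={z:.2f}")
```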

  20. NASA VERVE: Interactive 3D Visualization Within Eclipse

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar; Allan, Mark B.

    2014-01-01

    At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance, robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts control a free flying robot on board the ISS. We will show in detail how to code with VERVE, how SWT controls interact with the Ardor3D scenario, and share example code.

  1. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  2. 3D Visualization of an Invariant Display Strategy for Hyperspectral Imagery

    DTIC Science & Technology

    2002-12-01

    …Region of Interest (ROI) in the HSV color space model in 3D, and viewing the resultant 2D image. A demonstration application uses the Java language, including Java2D, the Xj3D Player, Document Object Model (DOM) Application Program Interfaces (APIs), and Extensible 3D (X3D).

  3. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    DTIC Science & Technology

    2013-06-28

    …cinematography, where the shot composition and camera viewpoint are optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium… zw into the equation for yw and, ignoring small terms, we get yw(vib − vi0) = ywc(vi − vib) (2). This equation relates the world height of a face (yw…members of the group. Now, if we assume that the inliers that are physically close to the outlier in the world also have a similar pose, then we can

  4. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measuring position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCDs, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
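
    The Denavit-Hartenberg forward kinematics mentioned above chains one homogeneous transform per link. A minimal sketch with a made-up planar two-link D-H table (not the parameters of the pneumatic parallel robot in the paper):

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform: rotate theta about z,
    translate d along z, translate a along x, rotate alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the per-link transforms; returns the 4x4 end-effector pose."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = matmul(T, dh_transform(*row))
    return T

# two revolute links of length 1, joint angles 90 deg and 0 deg
T = forward_kinematics([(math.pi / 2, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
print(round(T[0][3], 6), round(T[1][3], 6))  # end-effector x, y → 0.0 2.0
```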

  5. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector does not account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
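    For a tripod of vertical prismatic actuators like the one described, the inverse kinematics admits a simple closed form under an idealized geometry. The sketch below (Python, with illustrative base positions and link length, not the paper's actual D-H derivation) solves for the three slider heights that place the end-effector at a target position:

    ```python
    import math

    def inverse_kinematics(p, bases, link_len):
        """For each vertical actuator i with base at bases[i] = (bx, by),
        return the slider height h_i that places the end-effector at
        p = (px, py, pz), assuming a rigid link of length link_len between
        slider and platform point:
            (h_i - pz)^2 + (px - bx)^2 + (py - by)^2 = link_len^2
        """
        px, py, pz = p
        heights = []
        for bx, by in bases:
            d2 = (px - bx) ** 2 + (py - by) ** 2
            if d2 > link_len ** 2:
                raise ValueError("target outside workspace")
            heights.append(pz + math.sqrt(link_len ** 2 - d2))
        return heights

    # Illustrative geometry: bases on a circle of radius 0.3 m, 0.5 m links.
    R, L = 0.3, 0.5
    bases = [(R * math.cos(a), R * math.sin(a))
             for a in (math.pi / 2, 7 * math.pi / 6, 11 * math.pi / 6)]
    # Symmetric pose at the center: all three heights come out equal (0.6 m).
    print(inverse_kinematics((0.0, 0.0, 0.2), bases, L))
    ```

    The forward kinematics is the harder direction for parallel mechanisms, which is one reason the paper cross-checks the computed pose against a stereo vision measurement.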

  6. 3-D Unsteady Flow Visualization in Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    Presently, there are very few visualization systems available for time-dependent flow fields. Although existing visualization systems for instantaneous flow fields may be used to view time-dependent flow fields at discrete points in time, the time variable is usually not considered in the visualization technique. A simple and effective approach for visualizing time-dependent flow fields using streaklines is presented. Some results from this approach are shown.

  7. Extracting a high-quality data space for stereo-tomography based on a 3D structure tensor algorithm and kinematic de-migration

    NASA Astrophysics Data System (ADS)

    Xiong, Kai; Yang, Kai; Wang, Yu-Xiang

    2017-08-01

    To extract a high-quality data space (the so-called kinematic invariants) is a key factor in a successful implementation of stereo-tomography. The structure tensor algorithm has demonstrated itself to be a robust tool for picking the kinematic invariants for stereo-tomography. However, if there are many diffractions and other sources of noise in the data, it can be risky to extract the data space from the data domain. Meanwhile, for any reflector, we try to pick as many of the relevant primary reflections as possible within a wide offset range. To achieve this, in this paper, we design a scheme to extract a high-quality data space for stereo-tomography based on a 3D structure tensor and kinematic de-migration. Firstly, we apply automatic, dense volumetric picking of residual move-out (RMO) and structural dip in the depth-migrated domain with an advanced 3D structure tensor algorithm. Then, a set of key horizons are picked manually in a few selected depth-migrated common offset gathers. Finally, all the picked horizons are extrapolated along the offset axis based on the RMO information picked in advance. Thus, the initial high-density points picked in the depth-migrated volume are greatly refined. After this processing, a final, refined data space for stereo-tomography is extracted through kinematic de-migration. We demonstrate the correctness and the robustness of the presented scheme with synthetic and real data examples.
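    The dip estimation at the heart of such picking can be illustrated with a simplified structure tensor: average the outer product of the intensity gradient over a region and read the dominant orientation off its eigenvectors (the paper uses locally smoothed tensors for dense volumetric picking; this numpy sketch uses one global tensor on a synthetic layered volume):

    ```python
    import numpy as np

    def global_structure_tensor(vol):
        """Average 3x3 structure tensor of a 3D volume: J = <g g^T>, where g
        is the intensity gradient. The eigenvector of the largest eigenvalue
        gives the dominant gradient (layer-normal) direction."""
        gx, gy, gz = np.gradient(vol.astype(float))
        g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])  # shape (3, N)
        return (g @ g.T) / g.shape[1]

    # Synthetic layered volume: intensity varies only along the z axis.
    z = np.arange(32)
    vol = np.broadcast_to(np.sin(0.4 * z), (32, 32, 32))

    J = global_structure_tensor(vol)
    w, v = np.linalg.eigh(J)
    normal = v[:, np.argmax(w)]          # dominant orientation
    print(np.round(np.abs(normal), 3))   # close to [0, 0, 1]: layers normal to z
    ```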

  8. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize in real time the deformation, specified through a tensor matrix, applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, and is thus able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make the data available to other programs. The shapes of the stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
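    The core operation such a program performs, applying a deformation tensor to triangulated vertices and deriving the strain ellipsoid, can be sketched in a few lines (a hypothetical numpy illustration of simple shear, not Tensor3D's actual code):

    ```python
    import numpy as np

    def deform(vertices, F):
        """Apply a 3x3 deformation-gradient tensor F to an N x 3 vertex array."""
        return vertices @ F.T

    # Simple shear in the x-direction with shear strain gamma = 0.5.
    gamma = 0.5
    F = np.array([[1.0, gamma, 0.0],
                  [0.0, 1.0,   0.0],
                  [0.0, 0.0,   1.0]])

    verts = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0]])
    print(deform(verts, F))  # -> [[0.5, 1., 0.], [1., 0., 0.]]

    # Strain ellipsoid axes: principal stretches are the square roots of the
    # eigenvalues of the left Cauchy-Green tensor B = F F^T.
    B = F @ F.T
    stretches = np.sqrt(np.linalg.eigvalsh(B))
    print(stretches)  # product is 1: simple shear preserves volume
    ```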

  9. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with human viewers is not yet well understood. Previously, it was found that any increased load on the visual system, such as prolonged TV watching, computer work or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated regarding their relation with visual fatigue.

  10. An Analysis and Proposal of 3D Printing Applications for the Visually Impaired.

    PubMed

    Minatani, Kazunori

    2017-01-01

    The full 3D printing process is divided into three discrete steps. Taking a user-centric approach, the study confirmed that people with visual impairments could use CAD to carry out 3D printing tasks, managing the entire process, as long as they had a certain amount of 3D data.

  11. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a 3-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs for use in geology classes, but as yet very little work has been done with the GeoWall in astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and other students were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture-tutorial-style lab to go along with a GeoWall 3D visualization of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre- and post-tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  12. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporaneous views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been produced as well.

  13. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless connection and then use GPU hardware and CUDA programming to implement a real-time three-dimensional stereo image by synthesizing the depth of the ROI (region of interest). We also try to explain the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glasses-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.
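    Depth synthesis from a calibrated stereo pair rests on the standard pinhole relation Z = fB/d, where f is the focal length in pixels, B the camera baseline, and d the disparity. A minimal sketch with illustrative values (not the parameters of the described system):

    ```python
    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Pinhole stereo relation: depth Z = f * B / d.
        Larger disparity means the point is closer to the cameras."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Illustrative values: f = 700 px, baseline = 6 cm, disparity = 42 px.
    print(disparity_to_depth(42, 700, 0.06))  # -> 1.0 (metres)
    ```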

  14. Visualization of the NASA ICON mission in 3d

    NASA Astrophysics Data System (ADS)

    Mendez, R. A., Jr.; Immel, T. J.; Miller, N.

    2016-12-01

    The ICON Explorer mission (http://icon.ssl.berkeley.edu) will provide several data products for the atmosphere and ionosphere after its launch in 2017. This project will support the mission by investigating the capability of visualization tools to display current and predicted observatory characteristics and data acquisition. Visualization of this mission can be accomplished using tools like Google Earth or CesiumJS, with assistance from Java or Python. Ideally, we will bring this visualization into people's homes without the need for additional software. The path to launching a standalone website, building this environment, and producing a full toolkit will be discussed. Eventually, this initial work could lead to downloadable visualization packages for mission demonstration or science visualization.

  15. 3D Visualization of HIV Virions by Cryoelectron Tomography

    PubMed Central

    Liu, Jun; Wright, Elizabeth R.; Winkler, Hanspeter

    2011-01-01

    The structure of the human immunodeficiency virus (HIV) and some of its components have been difficult to study in three dimensions (3D), primarily because of their intrinsic structural variability. Recent advances in cryoelectron tomography (cryo-ET) have provided a new approach for determining the 3D structures of the intact virus, the HIV capsid, and the envelope glycoproteins located on the viral surface. A number of cryo-ET procedures related to specimen preservation, data collection, and image processing are presented in this chapter. The techniques described herein are well suited for determining the ultrastructure of bacterial and viral pathogens and their associated molecular machines in situ at nanometer resolution. PMID:20888479

  16. 3D model of the Bernese Part of the Swiss Molasse Basin: visualization of uncertainties in a 3D model

    NASA Astrophysics Data System (ADS)

    Mock, Samuel; Allenbach, Robin; Reynolds, Lance; Wehrens, Philip; Kurmann-Matzenauer, Eva; Kuhn, Pascal; Michael, Salomè; Di Tommaso, Gennaro; Herwegh, Marco

    2016-04-01

    The Swiss Molasse Basin comprises the western and central parts of the North Alpine Foreland Basin. In recent years it has come under closer scrutiny due to its promising geopotentials, such as geothermal energy and CO2 sequestration. In order to address these topics, good knowledge of the subsurface is a key prerequisite. For that matter, geological 3D models serve as valuable tools. In collaboration with the Swiss Geological Survey (swisstopo), and as part of the project GeoMol CH, a geological 3D model of the Swiss Molasse Basin in the Canton of Bern has been built. The model covers an area of 1810 km2 and reaches depths of up to 6.7 km. It comprises 10 major Cenozoic and Mesozoic units and numerous faults. The 3D model is mainly based on 2D seismic data, complemented by information from a few deep wells. Additionally, data from geological maps and profiles were used for refinement at shallow depths. In total, 1163 km of reflection seismic data, along 77 seismic lines, have been interpreted by different authors with respect to stratigraphy and structures. Both horizons and faults have been interpreted in 2D and modelled in 3D using IHS's Kingdom Suite and Midland Valley's MOVE software packages, respectively. Given the variable degree of subsurface information available, each 3D model is subject to uncertainty. With the primary input data coming from the interpretation of reflection seismic data, a variety of uncertainties comes into play. Some of them are difficult to address (e.g. an author's style of interpretation) while others can be quantified (e.g. mis-tie correction, well-tie). An important source of uncertainty is the quality of the seismic data; this affects the traceability and lateral continuation of seismic reflectors. By defining quality classes we can semi-quantify this source of uncertainty. In order to visualize the quality and density of the input data in a meaningful way, we introduce quality-weighted data density maps. In combination with the geological 3D

  17. 3D visualization of membrane failures in fuel cells

    NASA Astrophysics Data System (ADS)

    Singh, Yadvinder; Orfino, Francesco P.; Dutta, Monica; Kjeang, Erik

    2017-03-01

    Durability issues in fuel cells, due to chemical and mechanical degradation, are potential impediments in their commercialization. Hydrogen leak development across degraded fuel cell membranes is deemed a lifetime-limiting failure mode and potential safety issue that requires thorough characterization for devising effective mitigation strategies. The scope and depth of failure analysis has, however, been limited by the 2D nature of conventional imaging. In the present work, X-ray computed tomography is introduced as a novel, non-destructive technique for 3D failure analysis. Its capability to acquire true 3D images of membrane damage is demonstrated for the very first time. This approach has enabled unique and in-depth analysis resulting in novel findings regarding the membrane degradation mechanism; these are: significant, exclusive membrane fracture development independent of catalyst layers, localized thinning at crack sites, and demonstration of the critical impact of cracks on fuel cell durability. Evidence of crack initiation within the membrane is demonstrated, and a possible new failure mode different from typical mechanical crack development is identified. X-ray computed tomography is hereby established as a breakthrough approach for comprehensive 3D characterization and reliable failure analysis of fuel cell membranes, and could readily be extended to electrolyzers and flow batteries having similar structure.

  18. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

    Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as intended and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera captures it twice from different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slant angle, gap or thickness, and offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about half an order of magnitude more accurate than prior work; is efficient, requiring less than 2 s of computation; and is robust to noise, working well at SNRs as low as 6 dB.
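    The frequency-domain idea can be illustrated in one dimension: the pitch of a periodic display pattern appears as the dominant non-DC peak of its magnitude spectrum (a toy sketch only; the paper works on 2D captured patterns and also recovers slant, gap and offset):

    ```python
    import numpy as np

    def estimate_pitch(signal):
        """Estimate the period (in samples) of a periodic pattern from the
        dominant non-DC peak of its magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(signal))
        spectrum[0] = 0.0                    # suppress the DC component
        k = int(np.argmax(spectrum))         # dominant frequency bin
        return len(signal) / k

    # Synthetic 1D slice of a lenticular-style pattern with a 16-sample pitch.
    n = 512
    x = np.arange(n)
    pattern = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16)
    print(estimate_pitch(pattern))  # -> 16.0
    ```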

  19. ProSAT+: visualizing sequence annotations on 3D structure.

    PubMed

    Stank, Antonia; Richter, Stefan; Wade, Rebecca C

    2016-08-01

    PROtein Structure Annotation Tool-plus (ProSAT+) is a new web server for mapping protein sequence annotations onto a protein structure and visualizing them simultaneously with the structure. ProSAT+ incorporates many of the features of the preceding ProSAT and ProSAT2 tools but also provides new options for the visualization and sharing of protein annotations. Data are extracted from the UniProt KnowledgeBase, the RCSB PDB and the PDBe SIFTS resource, and visualization is performed using JSmol. User-defined sequence annotations can be added directly to the URL, thus enabling visualization and easy data sharing. ProSAT+ is available at http://prosat.h-its.org.

  20. 3D MR angiographic visualization and artery-vein separation

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey

    1999-05-01

    The common approach to artery-vein separation applies a presaturation pulse to obtain different image intensity representations for arteries and veins in MRA data. However, when arteries and veins do not run in opposite directions, as in the brain, lungs, and heart, this approach fails. This paper presents an image processing approach devised for artery-vein separation. The anatomic separation utilizes fuzzy connected object delineation. The first step of this separation method is the segmentation of the entire vessel structure from the background via absolute connectedness using scale-based affinity. The second step is to separate artery from vein via relative connectedness. After 'seed' points are specified inside the artery and vein in the vessel-only image, the operation is performed in an iterative fashion. The larger aspects of the artery and vein are separated in the initial iteration. Further regions are added in subsequent iterations, so that the smaller aspects of the artery and vein are included in later iterations. Shell rendering is used for 3D display. Combining the strengths of fuzzy connected object definition, object separation, and shell rendering, high-quality volume rendering of vascular information in MRA data has been achieved. MS-325 contrast-enhanced MRA data were used to illustrate this approach. Several examples of 3D displays of arteries and veins are included to show the considerable promise of this new approach.
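    The iterative, seed-based growing idea can be sketched with a toy breadth-first region grower (a strong simplification: the paper uses scale-based fuzzy affinity and relative connectedness, not a fixed intensity tolerance):

    ```python
    from collections import deque
    import numpy as np

    def grow_region(img, seed, tol):
        """Breadth-first seeded growing: include 4-connected neighbours whose
        intensity is within `tol` of the seed intensity."""
        h, w = img.shape
        seed_val = int(img[seed])
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                        and abs(int(img[rr, cc]) - seed_val) <= tol):
                    mask[rr, cc] = True
                    queue.append((rr, cc))
        return mask

    # Toy "vessel-only" image: an artery-like blob (200) and a vein-like blob (100).
    img = np.zeros((8, 8), dtype=np.uint8)
    img[1:4, 1:4] = 200   # artery
    img[5:7, 5:7] = 100   # vein
    artery = grow_region(img, (2, 2), tol=30)
    print(artery.sum())   # 9: only the 3x3 artery blob is captured
    ```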

  1. The Visual Priming of Motion-Defined 3D Objects.

    PubMed

    Jiang, Xiong; Jiang, Yang; Parasuraman, Raja

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a "cloudy" SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a "cloudy" SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus--but not a static image or a semantic stimulus--that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed.

  2. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  3. Visual comparability of 3D regular sampling and reconstruction.

    PubMed

    Meng, Tai; Entezari, Alireza; Smith, Benjamin; Möller, Torsten; Weiskopf, Daniel; Kirkpatrick, Arthur E

    2011-10-01

    The Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices have been analytically shown to be more efficient sampling lattices than the traditional Cartesian Cubic (CC) lattice, but there has been no estimate of their visual comparability. Two perceptual studies (each with N = 12 participants) compared the visual quality of images rendered from BCC and FCC lattices to images rendered from the CC lattice. Images were generated from two signals: the commonly used Marschner-Lobb synthetic function and a computed tomography scan of a fish tail. Observers found that BCC and FCC could produce images of comparable visual quality to CC, using 30-35 percent fewer samples. For the images used in our studies, the L2 error metric shows high correlation with the judgement of human observers. Using the L2 metric as a proxy, the results of the experiments appear to extend across a wide range of images and parameter choices. © 2011 IEEE

  4. 3-D Grab!

    NASA Astrophysics Data System (ADS)

    Connors, M. G.; Schofield, I. S.

    2012-12-01

    Modern imaging technologies greatly extend the potential to present visual information. With recently developed software tools, the perception of the third dimension can not only dramatically enhance presentation, but also allow spatial data to be better encoded. 3-D images can be taken of many subjects with only one camera, carefully moved to generate a stereo pair. Color anaglyph viewing can now be very effective on computer screens, and active-filter technologies can enhance visual effects at ever-decreasing cost. We will present various novel results of 3-D imaging, including those from auroral observations of the new twinned Athabasca University Geophysical Observatories. (Figure: single-camera stereo image for viewing with red/cyan glasses.)
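    The red/cyan anaglyph technique mentioned is straightforward to implement: take the red channel from the left image and the green/blue channels from the right (a minimal numpy sketch):

    ```python
    import numpy as np

    def red_cyan_anaglyph(left, right):
        """Combine a stereo pair into a red/cyan anaglyph: the red channel
        comes from the left image, green and blue from the right image."""
        out = right.copy()
        out[..., 0] = left[..., 0]
        return out

    # Tiny synthetic stereo pair (H x W x RGB).
    left = np.full((2, 2, 3), [200, 10, 10], dtype=np.uint8)
    right = np.full((2, 2, 3), [10, 150, 150], dtype=np.uint8)
    ana = red_cyan_anaglyph(left, right)
    print(ana[0, 0])  # -> [200 150 150]
    ```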

  5. Micro3D: computer program for three-dimensional reconstruction, visualization, and analysis of neuronal populations and brain regions.

    PubMed

    Bjaalie, Jan G; Leergaard, Trygve B; Pettersen, Christian

    2006-04-01

    This article presents a computer program, Micro3D, designed for 3-D reconstruction, visualization, and analysis of coordinate data (points and lines) recorded from serial sections. The software has primarily been used for studying the shapes and dimensions of brain regions (contour line data) and the distributions of cellular elements such as neuronal cell bodies or axonal terminal fields labeled with tract-tracing techniques (point data). The tissue elements recorded could equally well be labeled using other techniques, the only requirement being that the data collected are saved as x,y,z coordinates. Data are typically imported from image-combining computerized microscopy systems or image analysis systems, such as Neurolucida (MicroBrightField, Colchester, VT) or analySIS (Soft Imaging System GmbH, Münster, Germany). System requirements are a PC running Linux. Reconstructions in Micro3D may be rotated and zoomed in real time, and submitted to perspective viewing and stereo imaging. Surfaces are re-synthesized on the basis of stacks of contour lines. Clipping is used for defining section-independent subdivisions of the reconstruction. Flattening of curved sheets of point layers (e.g., neurons in a layer) facilitates inspection of complicated distribution patterns. Micro3D computes color-coded density maps. Opportunities for translating data from different reconstructions into common coordinate systems are also provided. This article demonstrates the use of Micro3D for visualization of complex neuronal distribution patterns in the somatosensory and auditory systems. The software is available for download under the conditions posted at the NeSys home pages (http://www.nesys.uio.no/) and at The Rodent Brain Workbench (http://www.rbwb.org/).

  6. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort in S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
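    The global stage of such a depth remapping can be sketched as a linear map of the signal's depth range onto a target comfort zone (a simplified illustration; the described solution additionally adjusts the zero-disparity plane and performs local optimization):

    ```python
    import numpy as np

    def remap_depth(depth, comfort_min, comfort_max):
        """Linearly remap a depth map's full range [min, max] onto a target
        comfort zone [comfort_min, comfort_max]."""
        d_min, d_max = float(depth.min()), float(depth.max())
        t = (depth - d_min) / (d_max - d_min)   # normalize to [0, 1]
        return comfort_min + t * (comfort_max - comfort_min)

    depth = np.array([[0.0, 2.0],
                      [4.0, 8.0]])
    print(remap_depth(depth, 1.0, 3.0))  # -> [[1., 1.5], [2., 3.]]
    ```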

  7. 2D-to-3D conversion by using visual attention analysis

    NASA Astrophysics Data System (ADS)

    Kim, Jiwon; Baik, Aron; Jung, Yong Ju; Park, Dusik

    2010-02-01

    This paper proposes a novel 2D-to-3D conversion system based on visual attention analysis. The system is able to generate stereoscopic video from monocular video in a robust manner with no human intervention. According to our experiments, visual attention information can be used to provide a rich 3D experience even when depth cues from the monocular view are not sufficient. Using the algorithm introduced in this paper, 3D display users can watch 2D media in 3D. In addition, the algorithm can be embedded into 3D displays in order to deliver a better viewing experience with a more immersive feeling. To our knowledge, this research is the first to use visual attention information to produce a 3D effect.

  8. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively while it runs. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.

  9. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.

  10. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
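The regional covariance descriptor named above can be sketched directly: each pixel in a region carries a feature vector (for instance intensity, a 2D gradient feature, and depth), and the region is summarised by the sample covariance matrix of those vectors, whose off-diagonal entries capture how the 2D and 3D features co-vary. The two-feature example below is illustrative, not the paper's exact feature set.

```python
def region_covariance(features):
    """features: list of d-dimensional feature vectors from one image region.
    Returns the d x d sample covariance matrix (nonlinear feature integration)."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[c / (n - 1) for c in row] for row in cov]

# Three pixels with (intensity, depth) features: depth rises with intensity,
# so the off-diagonal entry records the 2D/3D inter-correlation.
feats = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
C = region_covariance(feats)  # [[1.0, 1.0], [1.0, 1.0]]
```

Because the matrix models inter-feature correlation rather than concatenating features, it is a nonlinear integration of the 2D and 3D cues, which is the property the abstract emphasises.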

  11. Three-dimensional (3D) shadowgraph technique visualizes thermal convection

    NASA Astrophysics Data System (ADS)

    Huang, Jinzi; Zhang, Jun; Physics; Maths Research Institutes, NYU Shanghai Team; Applied Maths Lab, NYU Team

    2016-11-01

    The shadowgraph technique has been widely used in thermal convection and in other types of convection and advection processes in fluids. The technique reveals minute density differences in a fluid that is otherwise transparent to the eye and to light-sensitive devices. However, it normally integrates the fluid information along the depth of view, collapsing the 3D density field onto a 2D plane. In this work, we introduce a stereoscopic shadowgraph technique that preserves depth information by using two cross-field shadowgraphs. The two shadowgraphs are coded in different, complementary colors, so that each is seen by only one eye of the viewer. The two shadowgraphs can also be temporally modulated to achieve the same stereoscopic view of the convecting fluid. We further discuss ways of using this technique to extract useful information for research in fluids.
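The complementary-color coding described above is the classic anaglyph scheme; a minimal sketch (an assumption about the encoding, not the authors' optics) merges the two grayscale shadowgraphs into one red-cyan image so each eye, behind the matching filter, sees only its own viewpoint:

```python
def anaglyph(left, right):
    """left/right: 2D grayscale arrays (0-255) from the two shadowgraph
    viewpoints. Returns RGB triples: red channel from the left image,
    green+blue (cyan) from the right image."""
    return [
        [(l, r, r) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

L = [[10, 20]]
R = [[30, 40]]
print(anaglyph(L, R))  # [[(10, 30, 30), (20, 40, 40)]]
```

The temporal-modulation variant mentioned in the abstract would instead alternate the two images in time, synchronized with shutter glasses, rather than separating them by color.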

  12. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    …the zSpace semi-immersive virtual reality display system, using ray tracing for rendering at an interactive rate. The zSpace display system supports… head-tracked stereoscopic display and stylus-based 3-D interaction. Further, the zSpace virtual reality system requires very little calibration or… Keywords: visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing.

  13. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    SciTech Connect

    Horwedel, J.E.; Bowman, S.M.

    2000-06-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system.

  14. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor-space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct various indoor activities efficiently. For example, an effective and efficient emergency rescue response can be mounted in a fire disaster by using 3D visual information of a destroyed building. Therefore, an accurate indoor-space 3D visual reconstruction system that can be operated in any given environment without GPS has been developed, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera, and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, for guiding blind or partially sighted persons, and so forth.

  15. New software for visualizing 3D geological data in coal mines

    NASA Astrophysics Data System (ADS)

    Lee, Sungjae; Choi, Yosoon

    2015-04-01

    This study developed new software to visualize 3D geological data in coal mines. The Visualization Toolkit (VTK) library and Visual Basic.NET 2010 were used to implement the software. The software consists of several modules providing the following functionalities: (1) importing and editing borehole data; (2) modelling coal seams in 3D; (3) modelling coal properties using the 3D ordinary-kriging method; (4) calculating the economic values of 3D blocks; (5) pit-boundary optimization for identifying economical coal reserves, based on the Lerchs-Grossmann algorithm; and (6) visualizing 3D geological, geometrical, and economic data. Application of the software to a small-scale open-pit coal mine in Indonesia showed that it can provide useful information to support the planning and design of open-pit coal mines.
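The ordinary-kriging step in module (3) can be sketched in one dimension: weights are obtained by solving the kriging system built from a variogram model under the unbiasedness constraint (weights sum to 1). The linear variogram gamma(h) = h below is assumed purely for illustration, not taken from the paper.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(samples, target, gamma=lambda h: h):
    """samples: [(position, value)] along a 1-D transect; estimate at target.
    Kriging system: [gamma(xi,xj) 1; 1 0] [w; mu] = [gamma(xi,x0); 1]."""
    n = len(samples)
    A = [[gamma(abs(samples[i][0] - samples[j][0])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(abs(s[0] - target)) for s in samples] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * s[1] for wi, s in zip(w, samples))

# Midpoint between two equally spaced boreholes gets the average grade.
est = ordinary_kriging([(0.0, 2.0), (10.0, 4.0)], 5.0)
print(est)  # 3.0
```

In the software this estimate would be computed per 3D block, with a variogram fitted to the borehole data rather than assumed.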

  16. New techniques in 3D scalar and vector field visualization

    SciTech Connect

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.
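Back-to-front compositing, which all four techniques share, is the standard "over" operator applied from the farthest layer to the nearest. A minimal sketch with scalar (grayscale) colors:

```python
def composite_back_to_front(layers):
    """layers: [(color, alpha)] ordered back to front.
    Each semi-transparent layer is blended over the accumulated color."""
    acc = 0.0
    for color, alpha in layers:
        acc = alpha * color + (1.0 - alpha) * acc
    return acc

# A half-transparent white cloud (color 1.0) in front of a black background:
out = composite_back_to_front([(0.0, 1.0), (1.0, 0.5)])
print(out)  # 0.5
```

Whether the layers are polyhedral cell faces, splat textures, or contour-surface polygons, the per-pixel blend is this same recurrence.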

  17. Interactive Visualization of 3D Medical Data. Revision

    DTIC Science & Technology

    1989-04-01

    …difficult and error-prone. It has long been recognized that computer-generated imagery might be an effective means for presenting three-dimensional… or ten years. … in IEEE Computer, August 1989. … RENDERING TECHNIQUES Three… interactive setting. Initial visualizations made without the benefit of object definition would be used to guide scene analysis and segmentation algorithms.

  18. Haptic perception disambiguates visual perception of 3D shape.

    PubMed

    Wijntjes, Maarten W A; Volcic, Robert; Pont, Sylvia C; Koenderink, Jan J; Kappers, Astrid M L

    2009-03-01

    We studied the influence of haptics on visual perception of three-dimensional shape. Observers were shown pictures of an oblate spheroid in two different orientations. A gauge-figure task was used to measure their perception of the global shape. In the first two sessions only vision was used. The results showed that observers made large errors and interpreted the oblate spheroid as a sphere. They also misinterpreted the rotated oblate spheroid for a prolate spheroid. In two subsequent sessions observers were allowed to touch the stimulus while performing the task. The visual input remained unchanged: the observers were looking at the picture and could not see their hands. The results revealed that observers perceived a shape that was different from the vision-only sessions and closer to the veridical shape. Whereas, in general, vision is subject to ambiguities that arise from interpreting the retinal projection, our study shows that haptic input helps to disambiguate and reinterpret the visual input more veridically.

  19. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control of an automated source-finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity, and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities, helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization and quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, to enable collaborative work, the source code must be open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  20. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures. Most high-quality protein visualization tools use the OpenGL graphics library as their 3D graphics system. The performance of 3D graphics hardware has improved rapidly in recent years, and recent high-performance hardware supports the Microsoft Direct3D graphics library more fully than OpenGL and has become very popular in personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system built on the Microsoft Direct3D graphics library. It provides various visualization styles, such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. Because ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry-instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.

  1. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. The program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: it draws a 3-D representation of a molecule from Cartesian coordinates using a stick, ball-and-stick, or space-filled model; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about an arbitrary line in space; zooms in on a small area of the molecule to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.
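Rotation about an arbitrary line in space, one of the capabilities listed, is commonly implemented with Rodrigues' rotation formula; the sketch below (not MOLDS source code) handles an axis through the origin.

```python
import math

def rotate(point, axis, angle):
    """Rotate a 3-D point about a unit axis through the origin by angle (rad),
    using Rodrigues' formula: p' = p cos(a) + (u x p) sin(a) + u (u.p)(1 - cos a)."""
    ux, uy, uz = axis
    px, py, pz = point
    c, s = math.cos(angle), math.sin(angle)
    dot = ux * px + uy * py + uz * pz
    cross = (uy * pz - uz * py, uz * px - ux * pz, ux * py - uy * px)
    return tuple(
        p * c + cr * s + u * dot * (1.0 - c)
        for p, cr, u in zip(point, cross, axis)
    )

# Rotating (1,0,0) by 90 degrees about the Z axis gives (0,1,0).
p = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

For an axis through an arbitrary point, one would translate that point to the origin, rotate, and translate back.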

  3. Advanced Visualization and Analysis of Climate Data using DV3D and UV-CDAT

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.

    2012-12-01

    This paper describes DV3D, a VisTrails package of high-level modules for the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) interactive visual exploration system, which enables exploratory analysis of diverse and rich data sets stored in the Earth System Grid Federation (ESGF). DV3D provides user-friendly workflow interfaces for advanced visualization and analysis of climate data at a level appropriate for scientists. The application builds on VTK, an open-source, object-oriented library for visualization and analysis. DV3D provides the high-level interfaces, tools, and application integrations required to make the analysis and visualization power of VTK readily accessible to users without exposing burdensome details such as actors, cameras, renderers, and transfer functions. It can run as a desktop application or be distributed over a set of nodes for hyperwall or distributed visualization applications. DV3D is structured as a set of modules that can be linked to create workflows in VisTrails. Figure 1 displays a typical DV3D workflow as it would appear in the VisTrails workflow builder interface of UV-CDAT and, on the right, the visualization spreadsheet output of the workflow. Each DV3D module encapsulates a complex VTK pipeline with numerous supporting objects, and each visualization module implements a unique interactive 3D display. The integrated VisTrails visualization spreadsheet offers multiple synchronized visualization displays for desktop or hyperwall. The currently available displays include volume renderers, volume slicers, 3D isosurfaces, 3D Hovmöller diagrams, and various vector plots. The DV3D GUI offers a rich selection of interactive query, browse, navigate, and configure options for all displays, and all configuration operations are saved as VisTrails provenance. DV3D's seamless integration with UV-CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of climate data analysis operations.

  4. Jigsaw-Puzzle-Like 3D Glyphs for Visualization of Grammatical Constraints

    NASA Astrophysics Data System (ADS)

    Osawa, Noritaka

    Three-dimensional visualization using jigsaw-puzzle-like glyphs, or shapes, is proposed as a means of representing grammatical constraints in programming. The proposed visualization uses 3D glyphs such as convex, concave, and wireframe shapes. A semantic constraint, such as a type constraint in an assignment, is represented by an inclusive match between 3D glyphs. An application of the proposed visualization method to a subset of the Java programming language is demonstrated. An experimental evaluation showed that the 3D glyphs are easier to learn and enable users to more quickly understand their relationships than 2D glyphs and 1D symbol sequences.

  5. 3D shape modeling by integration visual and tactile cues

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2015-10-01

    With progress in CAD (Computer-Aided Design) systems, many mechanical components can be designed efficiently with high precision. But such systems are unfit for some organic shapes, for example, a toy. In this paper, an easy way of dealing with such shapes is presented, combining visual perception with tangible interaction. The method is divided into three phases: two tangible-interaction phases and one visual reconstruction. In the first tangible phase, a clay model is used to represent the raw shape, and the designer can change the shape intuitively with his hands. The raw shape is then scanned into a digital volume model through a low-cost vision system. In the last tangible phase, a desktop haptic device from SensAble is used to refine the scanned volume model and convert it into a surface model. A physical clay model and a virtual clay model are both used in this method, to deal with the main shape and the details respectively, and the vision system bridges the two tangible phases. The vision reconstruction system consists only of a camera, acquiring the raw shape through a shape-from-silhouettes method. The whole system is installed on a single desktop, making it convenient for designers. The details of the vision system and a design example are presented in the paper.

  6. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we will attempt to show some methods in which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  7. 3-D AE visualization of bone-cement fatigue locations.

    PubMed

    Qi, G; Pujol, J; Fan, Z

    2000-11-01

    This study addresses the visualization of crack locations in bone-cement material using a three-dimensional acoustic emission (AE) source location technique. Computer software based on an earthquake location technique was developed to determine AE source locations and was used to investigate material cracks formed at the tip of a notch in bone cement. The computed locations show that the cracks form linear features with dimensions between 0.1 and 0.2 mm, although larger linear features (almost 3.5 mm) are also present. There is a difference of about 2.5 mm between the average of the event locations and the location of the tip of the notch, which may be due to the finite size of the sensors (5 mm in diameter).
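The earthquake-style location idea can be sketched as follows: sensors at known positions record arrival times, and the source is the point that best explains the travel times t_i = t0 + |x - s_i| / v for an unknown origin time t0. The grid search and all numbers below are illustrative, not from the study.

```python
import math

def locate(sensors, arrivals, v, grid):
    """Grid search over candidate (x, y) source positions, minimising the
    least-squares misfit of arrival times after fitting the origin time t0."""
    best, best_err = None, float("inf")
    for x, y in grid:
        # travel time to each sensor for this candidate source
        t = [math.hypot(x - sx, y - sy) / v for sx, sy in sensors]
        # best-fit origin time is the mean residual
        t0 = sum(a - ti for a, ti in zip(arrivals, t)) / len(t)
        err = sum((a - (t0 + ti)) ** 2 for a, ti in zip(arrivals, t))
        if err < best_err:
            best, best_err = (x, y), err
    return best

sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]   # sensor positions (mm)
true_src, v = (3, 4), 5.0                        # source and wave speed
arrivals = [math.hypot(true_src[0] - sx, true_src[1] - sy) / v
            for sx, sy in sensors]
grid = [(x, y) for x in range(11) for y in range(11)]
print(locate(sensors, arrivals, v, grid))  # (3, 4)
```

Real AE software solves the same inverse problem iteratively (and in 3D), but the misfit function being minimised is this one.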

  8. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  9. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  10. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  11. Use of Google SketchUp to implement 3D spatio-temporal visualization

    NASA Astrophysics Data System (ADS)

    Li, Linhai; Qu, Lina; Ying, Shen; Liang, Dongdong; Hu, Zhenlong

    2009-10-01

    Geovisualization is an important means of understanding geographic features and phenomena. Urban space, especially its buildings, keeps changing with social development. However, traditional 2D visualization can only represent a plane geometric description and cannot support 3D dynamic visualization. Only with 3D dynamic visualization can the spatial morphology of buildings be exhibited over time, including their creation, expansion, removal, and so on. These changes are impossible to study in traditional 2D and 3D static visualization systems. As a result, it becomes urgent to find an effective solution for 3D spatio-temporal visualization of buildings. Inspired by 2D spatio-temporal visualization methods, such as snapshots and the event-based spatio-temporal data model (ESTDM), we propose a new data model called the Spatio-Temporal Page Model (STPM) and implement 3D spatio-temporal visualization in Google SketchUp based on STPM. This paper studies 3D visualization of real estate, focusing on its spatio-temporal characteristics. First, 3D models are built for every temporal scenario in Google SketchUp, and every geo-object is identified by a unique and permanent ObjectID, the linkage between geo-objects at different time spots. Each temporal scenario is then represented as a page. With the resulting page series, it is possible to display spatio-temporal changes and create an animation. Based on this solution, we have built a prototype system on part of a real-estate data set. Users of the prototype are able to understand clearly how the real estate has changed. Consequently, we believe our method for 3D spatio-temporal visualization has many merits.
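The page/ObjectID idea can be sketched as a small data structure: every temporal scenario is a "page" mapping a permanent ObjectID to that object's state, so one building can be traced across pages. The states and API below are illustrative assumptions, not the authors' implementation.

```python
# Spatio-Temporal Page Model (STPM) sketch: one page per temporal scenario.
pages = [
    # (time, {ObjectID: state})
    ("2005", {1: "built", 2: "built"}),
    ("2008", {1: "expanded", 2: "built", 3: "built"}),
    ("2012", {1: "expanded", 3: "built"}),  # building 2 has been removed
]

def history(object_id):
    """Trace one building's state across all pages via its permanent ObjectID."""
    return [(t, objs.get(object_id, "absent")) for t, objs in pages]

print(history(2))
# [('2005', 'built'), ('2008', 'built'), ('2012', 'absent')]
```

Animating the model amounts to stepping through the page series and showing, hiding, or swapping the SketchUp geometry keyed by each ObjectID.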

  12. The effect of sound on visual fidelity perception in stereoscopic 3-D.

    PubMed

    Rojas, David; Kapralos, Bill; Hogue, Andrew; Collins, Karen; Nacke, Lennart; Cristancho, Sayra; Conati, Cristina; Dubrowski, Adam

    2013-12-01

    Visual and auditory cues are important facilitators of user engagement in virtual environments and video games. Prior research supports the notion that our perception of visual fidelity (quality) is influenced by auditory stimuli. Understanding exactly how our perception of visual fidelity changes in the presence of multimodal stimuli can potentially impact the design of virtual environments, thus creating more engaging virtual worlds and scenarios. Stereoscopic 3-D display technology provides the users with additional visual information (depth into and out of the screen plane). There have been relatively few studies that have investigated the impact that auditory stimuli have on our perception of visual fidelity in the presence of stereoscopic 3-D. Building on previous work, we examine the effect of auditory stimuli on our perception of visual fidelity within a stereoscopic 3-D environment.

  13. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements, with ground-truth validation, for the mutual-information-based registration algorithm. Our final volume renderings clearly show tissue-intermixing differences between wildtype and Rb- specimens that are not obvious prior to registration.
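The similarity score that mutual-information registration maximises can be sketched from the joint histogram of two (here tiny, illustrative) quantised-intensity images; the optimiser then searches for the transform that makes this score largest.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """a, b: flat lists of quantised intensities of equal length.
    MI = sum p(x,y) log( p(x,y) / (p(x) p(y)) ) over the joint histogram."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * math.log(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi

# Perfectly aligned identical images: MI equals the image entropy
# (log 2 for a 50/50 binary image); misaligned images score lower.
img = [0, 0, 1, 1]
mi = mutual_information(img, img)
print(round(mi, 4))  # 0.6931
```

Because MI depends only on the statistical relationship between intensities, it tolerates the staining gradients across sections that defeat simpler correlation-based metrics, which is why it is the metric of choice here.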

  14. Modelling honeybee visual guidance in a 3-D environment.

    PubMed

    Portelli, G; Serres, J; Ruffier, F; Franceschini, N

    2010-01-01

    In view of the behavioral findings published on bees during the last two decades, it was proposed to decipher the principles underlying bees' autopilot system, focusing in particular on these insects' use of the optic flow (OF). Based on computer-simulated experiments, we developed a vision-based autopilot that enables a "simulated bee" to travel along a tunnel, controlling both its speed and its clearance from the right wall, left wall, ground, and roof. The flying agent thus equipped enjoys three translational degrees of freedom on the surge (x), sway (y), and heave (z) axes, which are uncoupled. This visuo-motor control system, which is called ALIS (AutopiLot using an Insect based vision System), is a dual OF regulator consisting of two interdependent feedback loops, each of which has its own OF set-point. The experiments presented here showed that the simulated bee was able to navigate safely along a straight or tapered tunnel and to react appropriately to any untoward OF perturbations, such as those resulting from the occasional lack of texture on one wall or the tapering of the tunnel. The minimalistic visual system used here (involving only eight pixels) suffices to jointly control both the clearance from the four walls and the forward speed, without having to measure any speeds or distances. The OF sensors and the simple visuo-motor control system we have developed account well for the results of ethological studies performed on honeybees flying freely along straight and tapered corridors.

  15. Sector mapping method for 3D detached retina visualization.

    PubMed

    Zhai, Yi-Ran; Zhao, Yong; Zhong, Jie; Li, Ke; Lu, Cui-Xin; Zhang, Bing

    2016-10-01

    A new sphere-mapping algorithm called sector mapping is introduced to map sector images to the sphere of an eyeball. The proposed sector-mapping algorithm is evaluated and compared with the plane-mapping algorithm adopted in previous work. A simulation that maps an image of concentric circles to the sphere of the eyeball and an analysis of the difference in distance between neighboring points in a plane and sector were used to compare the two mapping algorithms. A three-dimensional model of a whole retina with clear retinal detachment was generated using the Visualization Toolkit software. A comparison of the mapping results shows that the central part of the retina near the optic disc is stretched and its edges are compressed when the plane-mapping algorithm is used. A better mapping result is obtained by the sector-mapping algorithm than by the plane-mapping algorithm in both the simulation results and real clinical retinal detachment three-dimensional reconstruction.

  16. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  17. Floating autostereoscopic 3D display with multidimensional images for telesurgical visualization.

    PubMed

    Zhao, Dong; Ma, Longfei; Ma, Cong; Tang, Jie; Liao, Hongen

    2016-02-01

    We propose a combined floating autostereoscopic three-dimensional (3D) display approach for telesurgical visualization, which could reproduce a live surgical scene in a realistic and intuitive manner. A polyhedron-shaped 3D display device is developed for spatially floating autostereoscopic 3D images. Integral videography (IV) technique is adopted to generate real-time 3D images. Combined two-dimensional (2D) and 3D displays are presented floatingly around the center of the display device through reflection of semitransparent mirrors. Intra-operative surgery information is fused and updated in the 3D display, so that telesurgical visualization can be enhanced remotely. The experimental results showed that our approach can achieve a combined floating autostereoscopic display that presents 2D and 3D fusion images. The glasses-free IV 3D display has full parallax and can be observed by multiple persons from surrounding areas at the same time. Furthermore, real-time surgical scenes can be presented and updated in a realistic and intuitive visualization platform. It is shown that the proposed method is feasible for facilitating telesurgical visualization. The proposed floating autostereoscopic display device presents surgical information in an efficient form, so as to enhance cooperation and efficiency during surgery. Combined presentation of imaging information is promising for medical applications.

  18. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
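
    The carving step common to both methods amounts to keeping only those bulk atoms that lie close to a parametric helix. A minimal Python sketch of that idea follows; it is a stand-in for the paper's AWK and C++ codes, and the dimensions, the tube-radius criterion, and the brute-force distance check are illustrative assumptions rather than the published implementation.

```python
import math

def carve_nanospring(atoms, R=20.0, pitch=10.0, turns=3.0, wire_r=4.0):
    """Keep only atoms lying within `wire_r` of the parametric helix
        x = R cos t,  y = R sin t,  z = pitch * t / (2*pi),
    for 0 <= t <= 2*pi*turns. `atoms` is a list of (x, y, z) tuples
    taken from a bulk model. The curve is sampled densely and each atom
    is tested against every sample -- a simple brute-force stand-in for
    the pre-screening used in the paper's C++ code."""
    n_samples = int(200 * turns)
    curve = []
    for i in range(n_samples + 1):
        t = 2.0 * math.pi * turns * i / n_samples
        curve.append((R * math.cos(t),
                      R * math.sin(t),
                      pitch * t / (2.0 * math.pi)))

    kept = []
    for (x, y, z) in atoms:
        # Squared distance to the nearest sampled point on the helix axis.
        d2min = min((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                    for (cx, cy, cz) in curve)
        if d2min <= wire_r ** 2:
            kept.append((x, y, z))
    return kept
```

    Varying `R`, `pitch`, and `wire_r` yields nanosprings of different geometry; replacing the tube criterion with a ribbon cross-section would give the nanoribbon case.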

  20. Remote transformation and local 3D reconstruction and visualization of biomedical data sets in Java3D

    NASA Astrophysics Data System (ADS)

    Pinnamaneni, Pujita; Saladi, Sagar; Meyer, Joerg

    2002-03-01

    Advanced medical imaging technologies have enabled biologists and other researchers in biomedicine, biochemistry and bio-informatics to gain better insight in complex, large-scale data sets. These datasets, which occupy large amounts of space, can no longer be stored on local hard drives. San Diego Supercomputer Center (SDSC) maintains a large data repository, called High Performance Storage System (HPSS), where large-scale biomedical data sets can be stored. These data sets must be transmitted over an open or closed network (Internet or Intranet) within a reasonable amount of time to make them accessible in an interactive fashion to researchers all over the world. Our approach deals with extracting, compressing and transmitting these data sets using Haar wavelets, over a low- to medium-bandwidth network. These compressed data sets are then transformed and reconstructed into a 3-D volume on the client side using texture mapping in Java3D. These data sets are handled using the Scalable Visualization Toolkits provided by the NPACI (National Partnership for Advanced Computational Infrastructure). Sub-volumes of the data sets are extracted to provide a detailed view of a particular region of interest (ROI). The application is currently being ported to C++ to obtain higher rendering speed and better performance, at the cost of platform independence.
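
    The compression step rests on the Haar wavelet, whose forward transform is just pairwise averages (low-pass) and pairwise differences (high-pass); dropping small difference coefficients is what buys the bandwidth savings. A minimal one-level 1-D sketch is given below as an illustrative reconstruction; the paper's actual Java implementation and its extension to 3-D volumes (applying the transform along each axis in turn) are not shown in the abstract.

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) followed by pairwise differences (detail).
    `signal` must have even length."""
    avgs = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avgs + diffs

def inverse_haar_1d(coeffs):
    """Exact inverse of haar_1d: each (average, difference) pair
    reconstructs the original two samples."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([a + d, a - d])
    return out
```

    For example, `haar_1d([9, 7, 3, 5])` yields `[8.0, 4.0, 1.0, -1.0]`; zeroing detail coefficients below a threshold before transmission gives lossy compression, while keeping them all makes the round trip exact.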

  1. Quantitative fractography by digital image processing: NIH Image macro tools for stereo pair analysis and 3-D reconstruction.

    PubMed

    Hein, L R

    2001-10-01

    A set of NIH Image macro programs was developed to make qualitative and quantitative analyses from digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations on time processing, scanning techniques and programming concepts are also discussed.
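
    For context, SEM stereo-pair elevation measurement of the kind these macros automate conventionally derives relative height from the parallax between two images taken at different stage tilts. The sketch below shows the standard symmetric-tilt formula as a hedged illustration; the macros' exact formula and conventions are not given in the abstract.

```python
import math

def height_from_parallax(parallax_um, tilt_deg, magnification=1.0):
    """Relative height from the parallax `p` (in micrometres, already
    corrected for magnification M) between a stereo pair acquired by
    tilting the SEM stage by `tilt_deg` between the two exposures:

        z = p / (2 * M * sin(theta / 2))

    This is the symmetric-tilt form; sign conventions and whether the
    tilt is split symmetrically vary between setups."""
    theta = math.radians(tilt_deg)
    return parallax_um / (2.0 * magnification * math.sin(theta / 2.0))
```

    With a 60-degree total tilt, sin(theta/2) = 0.5, so a measured parallax of 10 um corresponds to a 10 um height step at unit magnification.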

  2. Three-dimensional human computer interaction based on 3D widgets for medical data visualization

    NASA Astrophysics Data System (ADS)

    Xue, Jian; Tian, Jie; Zhao, Mingchang

    2005-04-01

    Three-dimensional human computer interaction plays an important role in 3-dimensional visualization. It is important for clinicians to accurately use and easily handle the result of medical data visualization in order to assist diagnosis and surgery simulation. A 3D human computer interaction software platform based on 3D widgets has been designed in traditional object-oriented fashion with some common design patterns and implemented by using ANSI C++, including all function modules and some practical widgets. A group of application examples are exhibited as well. The ultimate objective is to provide a flexible, reliable and extensible 3-D interaction platform for medical image processing and analyzing.

  3. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    SciTech Connect

    Kerr, J. ); Jones, G.L. )

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  5. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  6. Does visual fatigue from 3D displays affect autonomic regulation and heart rhythm?

    PubMed

    Park, S; Won, M J; Mun, S; Lee, E C; Whang, M

    2014-02-15

    Most investigations into the negative effects of viewing stereoscopic 3D content on human health have addressed 3D visual fatigue and visually induced motion sickness (VIMS). Very few, however, have looked into changes in autonomic balance and heart rhythm, which are homeostatic factors that ought to be taken into consideration when assessing the overall impact of 3D video viewing on human health. In this study, 30 participants were randomly assigned to two groups: one group watching a 2D video, (2D-group) and the other watching a 3D video (3D-group). The subjects in the 3D-group showed significantly increased heart rates (HR), indicating arousal, and an increased VLF/HF (Very Low Frequency/High Frequency) ratio (a measure of autonomic balance), compared to those in the 2D-group, indicating that autonomic balance was not stable in the 3D-group. Additionally, a more disordered heart rhythm pattern and increasing heart rate (as determined by the R-peak to R-peak (RR) interval) was observed among subjects in the 3D-group compared to subjects in the 2D-group, further indicating that 3D viewing induces lasting activation of the sympathetic nervous system and interrupts autonomic balance.

  7. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    NASA Astrophysics Data System (ADS)

    Babu, Sabarish; Liao, Pao-Chuan; Shin, Min C.; Tsap, Leonid V.

    2006-12-01

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  8. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    SciTech Connect

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  9. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    This paper addresses dynamic visual image modeling for 3D synthetic scenes, using dynamic multichannel binocular visual images transmitted over a mobile self-organizing network. Technologies for modeling 3D synthetic scenes have been widely used across many industries. The main purpose of this paper is to use networks of dynamic visual monitors and sensors to observe an unattended area, exploiting the advantages of mobile networks in rural areas to further improve existing mobile information services and to provide personalized information services. The goal of the display is a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity sensors or GPS installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, the 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D images based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can gather large amounts of video from locations unsuitable for human observation or impossible to reach, and accurately synthesize the 3D scene. This will play a great role in promoting the technique's application in agriculture.

  10. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations with heart rate, which indicated little difference between groups. In contrast, galvanic skin response and skin temperature showed weak positive correlations, with an average difference between viewing 2D and 3D. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  11. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  12. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively, to acquire anatomical or pathological images, and to visualize them for further investigation.

  14. The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

    NASA Astrophysics Data System (ADS)

    Weigelt, K.; Wiemeyer, J.

    2014-03-01

    This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. Under consideration of depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. The dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. In addition for soccer, the field of view (small vs. large) was examined. Moreover, the direction of motion (horizontal vs. depth) was considered for BMX-Miniramp. Subjective assessments, behavioural tests and qualitative interviews were implemented. The results confirm a strong effect of 3D on both depth perception and spatial presence experience as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how to use various 3D sports ideally as content for TV broadcasts.

  15. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  16. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
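
    The disparity-compensated prediction at the heart of such a scheme can be illustrated with a toy lifting-style coder: predict the right view from a shifted copy of the left view and keep only the residual, so the pair is stored as one image plus (ideally small) differences. The single global integer disparity and the omission of the luminance-correction and optimized-prediction steps are deliberate simplifications for illustration, not the paper's method.

```python
def encode_stereo_row(left, right, disparity):
    """Jointly code one scan line of a stereo pair, lifting-style:
    predict the right view by shifting the left view by a global integer
    `disparity` (clamped at the image border), then keep the left view
    plus the prediction residual. Lossless: the residual is exact."""
    predicted = [left[max(0, i - disparity)] for i in range(len(right))]
    residual = [r - p for r, p in zip(right, predicted)]
    return left, residual

def decode_stereo_row(left, residual, disparity):
    """Invert encode_stereo_row: rebuild the prediction from the left
    view and add back the stored residual."""
    predicted = [left[max(0, i - disparity)] for i in range(len(left))]
    return [p + r for p, r in zip(predicted, residual)]
```

    When the right view really is a shifted left view, the residual is all zeros and compresses essentially for free; real coders replace the global shift with per-block disparity estimation and, as in this paper, correct for luminance differences between the two cameras before predicting.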

  17. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

  18. Pattern identification or 3D visualization? How best to learn topographic map comprehension

    NASA Astrophysics Data System (ADS)

    Atit, Kinnari

    Science, Technology, Engineering, and Mathematics (STEM) experts employ many representations that novices find hard to use because they require a critical STEM skill, interpreting two-dimensional (2D) diagrams that represent three-dimensional (3D) information. The current research focuses on learning to interpret topographic maps. Understanding topographic maps requires knowledge of how to interpret the conventions of contour lines, and skill in visualizing that information in 3D (e.g. shape of the terrain). Novices find both tasks difficult. The present study compared two interventions designed to facilitate understanding for topographic maps to minimal text-only instruction. The 3D Visualization group received instruction using 3D gestures and models to help visualize three topographic forms. The Pattern Identification group received instruction using pointing and tracing gestures to help identify the contour patterns associated with the three topographic forms. The Text-based Instruction group received only written instruction explaining topographic maps. All participants then completed a measure of topographic map use. The Pattern Identification group performed better on the map use measure than participants in the Text-based Instruction group, but no significant difference was found between the 3D Visualization group and the other two groups. These results suggest that learning to identify meaningful contour patterns is an effective strategy for learning how to comprehend topographic maps. Future research should address if learning strategies for how to interpret the information represented on a diagram (e.g. identify patterns in the contour lines), before trying to visualize the information in 3D (e.g. visualize the 3D structure of the terrain), also facilitates students' comprehension of other similar types of diagrams.

  19. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to providing online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e., the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  20. Comparing and visualizing titanium implant integration in rat bone using 2D and 3D techniques.

    PubMed

    Arvidsson, Anna; Sarve, Hamid; Johansson, Carina B

    2015-01-01

    The aim was to compare the osseointegration of grit-blasted implants with and without a hydrogen fluoride treatment in rat tibia and femur, and to visualize bone formation using state-of-the-art 3D visualization techniques. Grit-blasted implants were inserted in femur and tibia of 10 Sprague-Dawley rats (4 implants/rat). Four weeks after insertion, bone implant samples were retrieved. Selected samples were imaged in 3D using Synchrotron Radiation-based μCT (SRμCT). The 3D data was quantified and visualized using two novel visualization techniques, thread fly-through and 2D unfolding. All samples were processed to cut and ground sections, and 2D histomorphometrical comparisons of bone implant contact (BIC), bone area (BA), and mirror image area (MI) were performed. BA values were statistically significantly higher for test implants than controls (p < 0.05), but BIC and MI data did not differ significantly. Thus, the results partly indicate improved bone formation at blasted and hydrogen fluoride treated implants, compared to blasted implants. The 3D analysis was a valuable complement to 2D analysis, facilitating improved visualization. However, further studies are required to evaluate aspects of 3D quantitative techniques in relation to the light microscopy that is traditionally used in osseointegration studies.

  1. What Is the Nature of EUV Waves? First STEREO 3D Observations and Comparison with Theoretical Models

    NASA Astrophysics Data System (ADS)

    Patsourakos, S.; Vourlidas, A.; Wang, Y. M.; Stenborg, G.; Thernisien, A.

    2009-10-01

    One of the major discoveries of the Extreme ultraviolet Imaging Telescope (EIT) on SOHO was the intensity enhancements propagating over a large fraction of the solar surface. The physical origin(s) of the so-called EIT waves is still strongly debated with either wave (primarily fast-mode MHD waves) or nonwave (pseudo-wave) interpretations. The difficulty in understanding the nature of EUV waves lies in the limitations of the EIT observations that have been used almost exclusively for their study. They suffer from low cadence and single temperature and viewpoint coverage. These limitations are largely overcome by the SECCHI/EUVI observations onboard the STEREO mission. The EUVI telescopes provide high-cadence, simultaneous multitemperature coverage and two well-separated viewpoints. We present here the first detailed analysis of an EUV wave observed by the EUVI disk imagers on 7 December 2007 when the STEREO spacecraft separation was ≈ 45°. Both a small flare and a coronal mass ejection (CME) were associated with the wave. We also offer the first comprehensive comparison of the various wave interpretations against the observations. Our major findings are as follows: (1) High-cadence (2.5-minute) 171 Å images showed a strong association between expanding loops and the wave onset and significant differences in the wave appearance between the two STEREO viewpoints during its early stages; these differences largely disappeared later; (2) the wave appears at the active region periphery when an abrupt disappearance of the expanding loops occurs within an interval of 2.5 minutes; (3) almost simultaneous images at different temperatures showed that the wave was most visible in the 1 - 2 MK range and almost invisible in chromospheric/transition region temperatures; (4) triangulations of the wave indicate it was rather low lying (≈ 90 Mm above the surface); (5) forward-fitting of the corresponding CME as seen by the COR1 coronagraphs showed that the projection of the

  2. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  3. Effects of alternate pictorial pathway displays and stereo 3-D presentation on simulated transport landing approach performance

    NASA Astrophysics Data System (ADS)

    Busquets, Anthony M.; Parrish, Russell V.; Williams, Steven P.

    1991-08-01

    "Pathway-in-the-sky" flight display formats appear to offer exceptional path-control precision for future transport operational environments requiring complex-path approaches. With the conversion from the present instrument landing system (ILS) to the microwave landing system (MLS) within the National Airspace System, complex-path approaches could be used for commercial transport operations to address airport capacity issues. Therefore, the application of "pathway-in-the-sky" formats to commercial transport operations is being evaluated at various flight display research laboratories. The introduction of true depth cues via stereopsis techniques offers a means of further enhancing these displays. The paper describes research conducted to determine the effectiveness of two candidate pathway formats for landing approach and to investigate the effect of their presentation in stereo versus nonstereo display environments. A real-time piloted simulation experiment comparing performance across these factors in a transport landing-approach task is discussed.

  4. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  5. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
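Although the abstract does not specify the authors' wavelet, a single-parameter, size-tuned filter of this kind can be approximated with a 3D difference-of-Gaussians band-pass. The sketch below is that simplified analogue (not the published filter), with scipy assumed available; structures near the chosen characteristic size survive while finer noise and coarser background are suppressed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def size_tuned_filter(volume, size):
    """Band-pass a 3D volume around one characteristic linear size.

    A simplified difference-of-Gaussians analogue of a single-scale
    wavelet filter: features of roughly `size` voxels are kept,
    smaller-scale noise and larger-scale background are attenuated.
    """
    fine = gaussian_filter(volume.astype(float), sigma=size / 2.0)
    coarse = gaussian_filter(volume.astype(float), sigma=float(size))
    return fine - coarse

# toy example: one blob of the target size buried in noise
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.5, (32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
vol += 2.0 * np.exp(-((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2)
                    / (2 * 4.0 ** 2))

filtered = size_tuned_filter(vol, size=4)
peak = np.unravel_index(np.argmax(filtered), filtered.shape)  # near (16, 16, 16)
```

The filter's output peaks at the blob matching the characteristic size, which is the denoising behavior the abstract describes.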

  6. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We developed a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that allows surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but they have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed once more for the same parameters. The results support the view that 3-D animations produce visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and these differential effects were more evident under high demand for near-vision work. The current results indicate that a combined set of indexes may be adopted in the design of 3-D displays and equipment.

  8. [Visualization of the lower cranial nerves by 3D-FIESTA].

    PubMed

    Okumura, Yusuke; Suzuki, Masayuki; Takemura, Akihiro; Tsujii, Hideo; Kawahara, Kazuhiro; Matsuura, Yukihiro; Takada, Tadanori

    2005-02-20

    MR cisternography has been introduced for use in neuroradiology. This method is capable of visualizing tiny structures such as blood vessels and cranial nerves in the cerebrospinal fluid (CSF) space because of its superior contrast resolution. The cranial nerves and small vessels are shown as structures of low intensity surrounded by marked hyperintensity of the CSF. In the present study, we evaluated visualization of the lower cranial nerves (glossopharyngeal, vagus, and accessory) by the three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) sequence and multiplanar reformation (MPR) technique. The subjects were 8 men and 3 women, ranging in age from 21 to 76 years (average, 54 years). We examined the visualization of a total of 66 nerves in 11 subjects by 3D-FIESTA. The results were classified into four categories ranging from good visualization to non-visualization. In all cases, all glossopharyngeal and vagus nerves were identified to some extent, while accessory nerves were visualized either partially or entirely in only 16 cases. The total visualization rate was about 91%. In conclusion, 3D-FIESTA may be a useful method for visualization of the lower cranial nerves.

  9. The role of visualization and 3-D printing in biological data mining.

    PubMed

    Weiss, Talia L; Zieselman, Amanda; Hill, Douglas P; Diamond, Solomon G; Shen, Li; Saykin, Andrew J; Moore, Jason H

    2015-01-01

    Biological data mining is a powerful tool that can provide a wealth of information about patterns of genetic and genomic biomarkers of health and disease. A potential disadvantage of data mining is the volume and complexity of the results, which can often be overwhelming. It is our working hypothesis that visualization methods can greatly enhance our ability to make sense of data mining results. More specifically, we propose that 3-D printing has an important role to play as a visualization technology in biological data mining. We provide here a brief review of 3-D printing along with a case study to illustrate how it might be used in a research setting. We present as a case study a genetic interaction network associated with grey matter density, an endophenotype for late onset Alzheimer's disease, as a physical model constructed with a 3-D printer. The synergy or interaction effects of multiple genetic variants were represented through a color gradient of the physical connections between nodes. The digital gene-gene interaction network was then 3-D printed to generate a physical network model. The physical 3-D gene-gene interaction network provided an easily manipulated, intuitive, and creative way to visualize the synergistic relationships between the genetic variants and grey matter density in patients with late onset Alzheimer's disease. We discuss the advantages and disadvantages of this novel method of biological data mining visualization.

  10. Study of objective evaluation indicators of 3D visual fatigue based on RDS related tasks

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Liu, Yue; Zou, Bochao; Wang, Yongtian; Cheng, Dewen

    2015-03-01

    Three-dimensional (3D) displays have witnessed rapid progress in recent years because of the highly realistic sensation and sense of presence they offer to users. However, comfort issues with 3D displays are often reported and thus restrict their wide application. In order to study the objective evaluation indicators associated with 3D visual fatigue, an experiment was designed in which subjects were required to accomplish a task realized with a random dot stereogram (RDS). The task was designed to induce 3D visual fatigue in the subjects while excluding the influence of monocular depth cues. The visual acuity, critical flicker frequency (CFF), reaction time, and correct rate of the subjects during the experiment were recorded and analyzed. The correlation of the experimental data with the subjective evaluation scores was studied to find which indicator is most closely related to 3D visual fatigue. Analysis of the experimental data shows that the trends of the correct rate are in line with the results of the subjective evaluation.
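The correlation analysis described here amounts to computing, for each candidate indicator, its correlation with the subjective evaluation scores. A minimal sketch with made-up per-subject numbers (not the study's data), using the Pearson coefficient:

```python
import numpy as np

# Illustrative (made-up) per-subject values: an objective indicator
# (e.g. correct rate on the RDS task) and subjective fatigue scores.
correct_rate = np.array([0.95, 0.90, 0.86, 0.80, 0.74, 0.70])
fatigue_score = np.array([1.0, 1.5, 2.2, 2.8, 3.5, 4.1])

# Pearson correlation: how closely the indicator tracks reported fatigue.
# A strong negative r means accuracy falls as subjective fatigue rises.
r = np.corrcoef(correct_rate, fatigue_score)[0, 1]
```

An indicator whose |r| is large and consistent across subjects is a candidate objective proxy for 3D visual fatigue.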

  11. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomographic (CT) images a great aid to research in biomedical engineering. To keep pace with Internet technology, where 3D data are typically stored and processed on powerful servers accessed via TCP/IP, the isosurface results should be applicable to medical visualization in general. Furthermore, this project will form part of the PACS system our laboratory is working on. The system therefore uses the 3D file format VRML 2.0, which is accessed through a Web interface for manipulating 3D models. We implemented the generation and modification of triangular isosurface meshes using the marching cubes algorithm, and used OpenGL and MFC techniques to render the isosurfaces and manipulate the voxel data. The software provides adequate visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations do not affect the applicability of the platform to the tasks needed in elementary laboratory experiments or data preprocessing.
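The pipeline this record describes (isosurface meshes from marching cubes, exported as VRML 2.0 for web viewing) can be illustrated with a minimal, hypothetical exporter that writes a triangle mesh as a VRML IndexedFaceSet. The single-triangle mesh below is a hardcoded stand-in for marching-cubes output, not the system's actual code:

```python
def mesh_to_vrml(vertices, faces):
    """Serialize a triangle mesh as a minimal VRML 2.0 IndexedFaceSet,
    the format used above for web-based 3D model exchange.

    vertices: iterable of (x, y, z) tuples
    faces:    iterable of (i, j, k) vertex-index triples
    """
    points = ",\n      ".join("%g %g %g" % tuple(v) for v in vertices)
    index = ",\n      ".join("%d, %d, %d, -1" % tuple(f) for f in faces)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate { point [\n      " + points + "\n    ] }\n"
        "    coordIndex [\n      " + index + "\n    ]\n"
        "  }\n"
        "}\n"
    )

# a single triangle as a stand-in for a marching-cubes isosurface mesh
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
vrml = mesh_to_vrml(verts, tris)
```

Any VRML 2.0 browser can then load the resulting file, which is what makes the format attractive for server-side generation and thin-client viewing.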

  12. Empower PACS with content-based queries and 3D image visualization

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Hoo, Kent S., Jr.; Huang, H. K.

    1996-05-01

    The current generation of picture archiving and communication systems (PACS) lacks the capability to perform content-based searches on image data and to visualize and render 3D image data cost-effectively. The purpose of this research project is to investigate a framework that combines the storage and communication components of PACS with the power of content-based image indexing and 3D visualization. This presentation describes the integrated architecture and tools of our experimental system, with examples taken from applications in neurological surgical planning and assessment of pediatric bone age.

  13. Versatile annotation and publication quality visualization of protein complexes using POLYVIEW-3D.

    PubMed

    Porollo, Aleksey; Meller, Jaroslaw

    2007-08-29

    Macromolecular visualization as well as automated structural and functional annotation tools play an increasingly important role in the post-genomic era, contributing significantly towards the understanding of molecular systems and processes. For example, three-dimensional (3D) models help in exploring protein active sites and functional hot spots that can be targeted in drug design. Automated annotation and visualization pipelines can also reveal other functionally important attributes of macromolecules. These goals depend on the availability of advanced tools that better integrate the existing databases, annotation servers, and other resources with state-of-the-art rendering programs. We present a new tool for protein structure analysis, with a focus on annotation and visualization of protein complexes, which is an extension of our previously developed POLYVIEW web server. By integrating web technology with state-of-the-art software for macromolecular visualization, such as the PyMol program, POLYVIEW-3D enables combining versatile structural and functional annotations with a simple web-based interface for creating publication-quality structure renderings, as well as animated images for PowerPoint, web sites, and other electronic resources. The service is platform independent and no plug-ins are required. Several examples of how POLYVIEW-3D can be used for structural and functional analysis in the context of protein-protein interactions are presented to illustrate the available annotation options. The POLYVIEW-3D server features PyMol image rendering, which provides detailed and high-quality presentation of macromolecular structures with an easy-to-use web-based interface. POLYVIEW-3D also provides a wide array of options for automated structural and functional analysis of proteins and their complexes. Thus, the POLYVIEW-3D server may become an important resource for researchers and educators in the fields of protein science and structural bioinformatics.

  14. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention, even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been pursued rigorously. Wind turbine blade modeling studies mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, modeling studies of wind turbine blades accompanied by visualization experiments are needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed with twist and chord distributions following Schmitz's formula, with forward or backward sweep added to the rotating blades. The added sweep is expected to enhance or diminish outward flow disturbance or the propagation of stall development on the spanwise blade surfaces, yielding a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force on the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing a Prony braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying the tuft-visualization technique to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the 3-dimensional blade system.
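Assuming the standard Schmitz relations for optimal inflow angle and chord (phi = (2/3)·atan(R/(λ·r)) and c = 16πr/(B·C_L)·sin²(phi/3)), the twist and chord distributions the abstract refers to can be sketched as follows. All numeric parameters here are illustrative, not taken from the study:

```python
import math

def schmitz_blade(R, tsr, n_blades, cl_design, alpha_design_deg, stations):
    """Twist and chord along the span from the standard Schmitz relations.

    phi   = (2/3) * atan(R / (tsr * r))              optimal inflow angle
    chord = 16*pi*r / (n_blades * cl_design) * sin(phi/3)**2
    twist = phi (deg) - design angle of attack (deg)
    Returns a list of (r, chord, twist_deg) tuples.
    """
    out = []
    for r in stations:
        phi = (2.0 / 3.0) * math.atan(R / (tsr * r))
        chord = 16.0 * math.pi * r / (n_blades * cl_design) \
            * math.sin(phi / 3.0) ** 2
        twist = math.degrees(phi) - alpha_design_deg
        out.append((r, chord, twist))
    return out

# example: 1 m rotor, tip speed ratio 6, 3 blades (illustrative values)
dist = schmitz_blade(R=1.0, tsr=6.0, n_blades=3, cl_design=1.0,
                     alpha_design_deg=5.0,
                     stations=[0.2, 0.4, 0.6, 0.8, 1.0])
```

As expected from the formula, the twist decreases monotonically from root to tip, since the inflow angle shrinks with local radius.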

  15. 3D functional ultrasound imaging of the cerebral visual system in rodents.

    PubMed

    Gesnik, Marc; Blaize, Kevin; Deffieux, Thomas; Gennisson, Jean-Luc; Sahel, José-Alain; Fink, Mathias; Picaud, Serge; Tanter, Mickaël

    2017-02-03

    3D functional imaging of whole-brain activity during a visual task is challenging in rodents due to the complex three-dimensional shape of the involved brain regions and the fine spatial and temporal resolution required to reveal the visual tract. By coupling functional ultrasound (fUS) imaging with a translational motorized stage and an episodic visual stimulation device, we accurately mapped and recovered in 3D the activity of the visual cortices, the Superior Colliculus (SC), and the Lateral Geniculate Nuclei (LGN). Cerebral Blood Volume (CBV) responses during visual stimuli were highly correlated with the visual stimulus time profile in the visual cortices (r = 0.6), SC (r = 0.7), and LGN (r = 0.7). These responses depended on flickering frequency and contrast, and optimal stimulus parameters for the largest CBV increases were obtained. In particular, increasing the flickering frequency above 7 Hz revealed a decrease in the visual cortex response while the SC response was preserved. Finally, cross-correlation between CBV signals exhibited significant delays (d = 0.35 ± 0.1 s) between the blood volume responses in the SC and the visual cortices. These results emphasize the interest of fUS imaging as a whole-brain neuroimaging modality for vision studies in rodent models.
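The reported SC-to-cortex delay can in principle be estimated from the lag of the peak of the cross-correlation between the two CBV time series. The toy numpy sketch below (synthetic Gaussian responses and an assumed 20 Hz sampling rate, not the study's data or code) recovers a known 0.35 s shift:

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Estimate the delay (in seconds) of signal b relative to a from
    the peak of their full cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # lag in samples
    return lag * dt

# toy CBV-like responses: identical waveforms, second delayed by 0.35 s
dt = 0.05                                   # 20 Hz sampling, an assumed rate
t = np.arange(0, 20, dt)
sc = np.exp(-(t - 5.0) ** 2 / 2.0)          # "SC" response
cortex = np.exp(-(t - 5.35) ** 2 / 2.0)     # cortical response, delayed
delay = xcorr_delay(sc, cortex, dt)         # recovers approximately 0.35
```

The lag resolution of this estimator is one sample (dt), which is why a sufficiently high imaging frame rate matters for resolving sub-second hemodynamic delays.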

  16. Visualization of large scale geologically related data in virtual 3D scenes with OpenGL

    NASA Astrophysics Data System (ADS)

    Seng, Dewen; Liang, Xi; Wang, Hongxia; Yue, Guoying

    2007-11-01

    This paper demonstrates a method for three-dimensional (3D) reconstruction and visualization of large-scale multidimensional surficial, geological, and mine planning data with the programmable visualization environment OpenGL. A simulation system developed by the authors is presented for importing, filtering, and visualizing multidimensional geologically related data. The approach for the visual simulation of a complicated mining engineering environment implemented in the system is described in detail. Aspects such as presentation of multidimensional data with spatial dependence, navigation in the surficial and geological frames of reference and in time, and interaction techniques are presented. The system supports real 3D landscape representations. Furthermore, the system provides many visualization methods for rendering multidimensional data within virtual 3D scenes and combines them with several navigation techniques. Real data from an iron mine in Wuhan, China, demonstrate the effectiveness and efficiency of the system. A case study with the results and benefits achieved by using the system's real 3D representations and navigation is given.

  17. Converging Horizons: Collaborative 2/3D Visualization Tools for Astronomy

    NASA Astrophysics Data System (ADS)

    Plante, Raymond L.; Rajlich, Paul J.; Pietrowitz, Steve; Xie, Wei; Qamar, Asif

    We present a variety of Java--based visualization tools for collaborative astronomical research which integrate a number of diverse technologies currently under development at NCSA. The NCSA Horizon Image Data Browser is a Java package for building Java applets that interact with 2D visualizations of multi-dimensional image data. The latest release of the package, Horizon 2.0, features a number of new capabilities including transparent support of collaboration and flexible support for multidimensional arrays. Astro3Vis is a pair of tools for creating and viewing 3D visualizations of astronomical images in a Web environment. The VRML Server is used to create 3D visualizations of FITS image cubes from the NCSA Astronomy Digital Image Library (ADIL) in VRML format. While any VRML 2.0 browser can be used to view the visualizations, interactivity is enhanced when the VRML Server is used in conjunction with Astro3D, a Java VRML viewer with specialized features supporting editing and collaboration. Together, the two tools that make up Astro3Vis provide an environment for creating 3D figures that can be published in electronic journals.

  18. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    NASA Astrophysics Data System (ADS)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  19. An Approach to 3d Digital Modeling of Surfaces with Poor Texture by Range Imaging Techniques. `SHAPE from Stereo' VS. `SHAPE from Silhouette' in Digitizing Jorge Oteiza's Sculptures

    NASA Astrophysics Data System (ADS)

    García Fernández, J.; Álvaro Tordesillas, A.; Barba, S.

    2015-02-01

    Despite the eminent development of digital range imaging techniques, difficulties persist in the virtualization of objects with poor radiometric information; in other words, objects consisting of homogeneous colours (totally white, black, etc.), repetitive patterns, translucence, or materials with specular reflection. This is the case for much of Jorge Oteiza's work, particularly the sculpture collection of the Museo Fundación Jorge Oteiza (Navarra, Spain). The present study intends to analyse and assess the performance of two imaging-based digital 3D-modeling methods, Shape from Silhouette and Shape from Stereo, on singular cultural heritage cases determined by the radiometric characteristics mentioned above. The text also proposes the definition of a documentation workflow and presents the results of its application to the collection of sculptures created by Oteiza.

  20. MRI depiction and 3D visualization of three anterior cruciate ligament bundles.

    PubMed

    Otsubo, H; Akatsuka, Y; Takashima, H; Suzuki, T; Suzuki, D; Kamiya, T; Ikeda, Y; Matsumura, T; Yamashita, T; Shino, K

    2017-03-01

    The anterior cruciate ligament (ACL) is divided into three fiber bundles (AM-M: anteromedial-medial, AM-L: anteromedial-lateral, PL: posterolateral). We attempted to depict the three bundles of the human ACL on MRI images and to obtain a 3-dimensional visualization of them. Twenty-four knees of healthy volunteers (14 males, 10 females) were scanned by 3T-MRI using the fat suppression 3D coherent oscillatory state acquisition for the manipulation of imaging contrast (FS 3D-COSMIC). The scanned images were reconstructed after the isotropic voxel data, which allows the images to be reconstructed in any plane, was acquired. We statistically examined the identification rate of the three ACL bundles in 2D planes. Segmentation and 3D visualization of the fiber bundles using volume rendering were performed. The triple-bundle ACL was best depicted in the oblique axial plane. While the AM-M and AM-L bundles were clearly depicted in all cases, the PL bundle was not clearly visualized in two knees (8%). Therefore, the three ACL bundles were depicted in 22 knees (92%). The results of 3D visualization of the fiber arrangement agreed well with macroscopic findings of previous anatomical studies. 3T-MRI and the isotropic voxel data from FS 3D-COSMIC made it possible to demonstrate the identifiable depiction of three ACL bundles in nearly all cases. 3D visualization of the bundles could be a useful tool to understand the ACL fiber arrangement. Clin. Anat. 30:276-283, 2017. © 2016 The Authors. Clinical Anatomy published by Wiley Periodicals, Inc. on behalf of American Association of Clinical Anatomists.

  1. Stereo-based visual localization without triangulation for unmanned robotics platform

    NASA Astrophysics Data System (ADS)

    Volkov, Alexey; Ershov, Egor; Gladilin, Sergey; Nikolaev, Dmitry

    2017-02-01

    In this paper we propose a novel method for localization based on matching two stereo images. It minimizes the sum of squared distances between each 3D point and four corresponding 3D rays. The method shows good results for practical localization purposes. Moreover, it is robust to the presence of feature point correspondences with zero disparity, which is usually a problem for classical methods. The algorithm is tested against the classical method and has linear complexity with respect to the number of feature point correspondences.
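The minimization the abstract describes admits a standard closed-form least-squares solution: the squared distance from a point p to a ray (o_i, d_i) is ||(I - d_i d_iᵀ)(p - o_i)||², which is quadratic in p, so the minimizer over all rays solves a 3×3 linear system. The sketch below is that textbook solution (not the authors' implementation):

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares 3D point minimizing the sum of squared distances
    to a set of rays given by origin o_i and direction d_i.

    Accumulates A = sum_i (I - d_i d_i^T) and b = sum_i (I - d_i d_i^T) o_i,
    then solves A p = b. Cost is linear in the number of rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# four rays through a known point, echoing the four-ray formulation above
p_true = np.array([1.0, 2.0, 3.0])
origins = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
dirs = p_true - origins
p_est = nearest_point_to_rays(origins, dirs)   # recovers p_true
```

Because each ray contributes one rank-2 projector regardless of its disparity, a zero-disparity correspondence (parallel rays from the two views) still constrains the solution instead of breaking a triangulation step.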

  2. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web. To improve efficiency, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring them to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, ours does not require users to install extra software or download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever the Internet is available.

  3. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  5. Trans3D: a free tool for dynamical visualization of EEG activity transmission in the brain.

    PubMed

    Blinowski, Grzegorz; Kamiński, Maciej; Wawer, Dariusz

    2014-08-01

    The problem of functional connectivity in the brain is in the focus of attention nowadays, since it is crucial for understanding information processing in the brain. A large repertoire of measures of connectivity have been devised, some of them being capable of estimating time-varying directed connectivity. Hence, there is a need for a dedicated software tool for visualizing the propagation of electrical activity in the brain. To this aim, the Trans3D application was developed. It is an open access tool based on widely available libraries and supporting both Windows XP/Vista/7(™), Linux and Mac environments. Trans3D can create animations of activity propagation between electrodes/sensors, which can be placed by the user on the scalp/cortex of a 3D model of the head. Various interactive graphic functions for manipulating and visualizing components of the 3D model and input data are available. An application of the Trans3D tool has helped to elucidate the dynamics of the phenomena of information processing in motor and cognitive tasks, which otherwise would have been very difficult to observe. Trans3D is available at: http://www.eeg.pl/.

  6. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
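    The stereoscopic display modes supported by the application ultimately reduce to composing a per-eye image pair into a single frame. A minimal sketch of one such mode, the classic red-cyan anaglyph also mentioned in record 1 of this page, is shown below in numpy (independent of WebGL; names are illustrative):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph frame: the red channel comes from the
    left-eye render, the green and blue channels from the right-eye render.
    Inputs are HxWx3 float arrays in [0, 1]; the inputs are not modified."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```

    Viewed through red-cyan glasses, each eye then sees (approximately) only its own render, which is what produces the depth impression.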

  7. a Repository of Information Visualization Techniques to Support the Design of 3d Virtual City Models

    NASA Astrophysics Data System (ADS)

    Métral, C.; Ghoula, N.; Silva, V.; Falquet, G.

    2013-09-01

    Virtual 3D city models are used for different applications such as urban planning, navigation, pedestrian behaviour, historical information, and disaster management. These applications require rich information models that associate urban objects not only with their geometric properties but also with other types of information. When designing such models the choice of visualization techniques is far from trivial because the city objects must be displayed together with additional information, such as historical facts, planning projects, pollutant concentration, noise level, etc. Finding relevant techniques depends on a set of criteria such as the type of information, but also on the tasks that will be performed and the associated context. Furthermore, a technique that is relevant when used in isolation may generate visual incompatibilities when used in conjunction with another one. We have defined a model for the representation of information visualization techniques in 3D city models. We have implemented this model in the form of an ontology and a knowledge base of techniques currently used in 3D city models or 3D GIS. The goal of such an approach is to provide a knowledge repository to support the design of 3D virtual city models in which non-geometric information must be presented. In this paper we describe the model and the ontology of information visualization techniques that we designed. We also show how the associated knowledge base can be used for the selection of visualization techniques depending on different criteria including task and context, and for the detection of visual incompatibilities between techniques when used in the same scene.

  8. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, to support decision making and set a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, with 10cm, 20cm and 40cm GSD, together with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities to deal with exotic conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700m above sea level while Abha city is at 2300m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
    In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distance), together with Aerial Triangulation, is examined for 3D visualization in different regions of the Kingdom, to check which scale is more suitable for obtaining better results while remaining cost manageable, with GSDs of 7.5cm, 10cm, 20cm and 40cm.

  9. Sci—Sat AM: Stereo — 01: 3D Pre-treatment Dose Verification for Stereotactic Body Radiation Therapy Patients

    SciTech Connect

    Asuni, G; Beek, T van; Van Utyven, E; McCowan, P; McCurdy, B.M.C.

    2014-08-15

    Radical treatment techniques such as stereotactic body radiation therapy (SBRT) are becoming popular; they involve the delivery of large doses in fewer fractions. Because of this feature of SBRT, a high-resolution, pre-treatment dose verification method that makes use of a 3D patient representation is appropriate. Such a technique provides additional information about the dose delivered to the target volume(s) and organs-at-risk (OARs) in the patient volume compared to 2D verification methods. In this work, we investigate an electronic portal imaging device (EPID) based pre-treatment QA method which provides an accurate reconstruction of the 3D dose distribution in the patient model. Customized patient plans are delivered 'in air' and the portal images are collected using the EPID in cine mode. The images are then analysed to determine an estimate of the incident energy fluence, which is passed to a collapsed-cone convolution dose algorithm that reconstructs a 3D patient dose estimate on the CT imaging dataset. To date, the method has been applied to 5 SBRT patient plans. Reconstructed doses were compared to those calculated by the TPS. Reconstructed mean doses were mostly within 3% of those in the TPS, and DVHs of target volumes and OARs compared well. The Chi pass rates using 3%/3mm criteria in the high-dose region are greater than 97% in all cases. These initial results demonstrate the clinical feasibility and utility of a robust, efficient, effective and convenient pre-treatment QA method using an EPID. Research sponsored in part by Varian Medical Systems.
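    The 3%/3mm pass rate quoted above belongs to the gamma/Chi family of dose-comparison metrics. The following is a simplified 1-D gamma-style sketch, not the authors' Chi implementation; names and the tolerance convention (dose tolerance as a fraction of the maximum reference dose) are illustrative:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D gamma analysis. For each reference point, gamma is the
    minimum over evaluated points of
        sqrt((dose_diff / dose_tol_abs)**2 + (distance / dist_tol)**2),
    and a point passes if gamma <= 1. Positions are in mm; dist_tol is the
    distance-to-agreement criterion in mm. Returns the passing fraction."""
    dose_tol_abs = dose_tol * ref_dose.max()
    dd = (eval_dose[None, :] - ref_dose[:, None]) / dose_tol_abs
    dx = (positions[None, :] - positions[:, None]) / dist_tol
    gamma = np.sqrt(dd**2 + dx**2).min(axis=1)
    return (gamma <= 1.0).mean()
```

    Identical reference and evaluated distributions give gamma = 0 everywhere and hence a 100% pass rate; small spatial shifts or dose differences degrade the rate gradually, which is what makes the metric useful for QA.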

  10. Demonstration of Three Gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling, and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D-model library is built up using digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  11. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
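    The visibility-impeding "halos" described above indicate depth discontinuities between contiguous elements in a projection. The paper builds halos around the advected texture in the 3D volume itself; the sketch below only illustrates the underlying idea of detecting depth discontinuities, here in image space from a depth map (names and the 4-neighbour criterion are illustrative):

```python
import numpy as np

def halo_mask(depth, threshold):
    """Mark pixels adjacent to a depth discontinuity: a pixel is flagged if
    the depth difference to any 4-neighbour exceeds the threshold.
    depth: HxW array. Note that np.roll wraps at the borders, so edge
    pixels compare against the opposite edge (acceptable for a sketch)."""
    mask = np.zeros(depth.shape, dtype=bool)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        diff = np.abs(depth - np.roll(depth, shift, axis=axis))
        mask |= diff > threshold
    return mask
```

    Darkening or thinning the flagged pixels makes nearer streamlines visually "cut" the farther ones, which is what clarifies their relative depth.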

  12. SlicerAstro: A 3-D interactive visual analytics tool for HI data

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Fillion-Robin, J. C.; Yu, L.

    2017-04-01

    SKA precursors are capable of detecting hundreds of galaxies in HI in a single 12 h pointing. In deeper surveys one will probe more easily faint HI structures, typically located in the vicinity of galaxies, such as tails, filaments, and extraplanar gas. The importance of interactive visualization in data exploration has been demonstrated by the wide use of tools (e.g. Karma, Casaviewer, VISIONS) that help users to receive immediate feedback when manipulating the data. We have developed SlicerAstro, a 3-D interactive viewer with new analysis capabilities, based on traditional 2-D input/output hardware. These capabilities enhance the data inspection, allowing faster analysis of complex sources than with traditional tools. SlicerAstro is an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing. We demonstrate the capabilities of the current stable binary release of SlicerAstro, which offers the following features: (i) handling of FITS files and astronomical coordinate systems; (ii) coupled 2-D/3-D visualization; (iii) interactive filtering; (iv) interactive 3-D masking; (v) interactive 3-D modeling. In addition, SlicerAstro has been designed with a strong, stable and modular C++ core, and its classes are also accessible via Python scripting, allowing great flexibility for user-customized visualization and analysis tasks.

  13. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  14. Role of Interaction in Enhancing the Epistemic Utility of 3D Mathematical Visualizations

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2010-01-01

    Many epistemic activities, such as spatial reasoning, sense-making, problem solving, and learning, are information-based. In the context of epistemic activities involving mathematical information, learners often use interactive 3D mathematical visualizations (MVs). However, performing such activities is not always easy. Although it is generally…

  15. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  17. Network-based visualization of 3D landscapes and city models.

    PubMed

    Royan, Jérôme; Gioia, Patrick; Cavagna, Romain; Bouville, Christian

    2007-01-01

    To improve the visualization of large 3D landscapes and city models in a network environment, the authors use two different types of hierarchical level-of-detail models for terrain and groups of buildings. They also leverage the models to implement progressive streaming in both client-server and peer-to-peer network architectures.

  18. 2D but not 3D: pictorial-depth deficits in a case of visual agnosia.

    PubMed

    Turnbull, Oliver H; Driver, Jon; McCarthy, Rosaleen A

    2004-01-01

    Patients with visual agnosia exhibit acquired impairments in visual object recognition, which may or may not involve deficits in low-level perceptual abilities. Here we report a case (patient DM) who after head injury presented with object-recognition deficits. He still appears able to extract 2D information from the visual world in a relatively intact manner; but his ability to extract pictorial information about 3D object structure is greatly compromised. His copying of line drawings is relatively good, and he is accurate and shows apparently normal mental rotation when matching or judging objects tilted in the picture-plane. But he performs poorly on a variety of tasks requiring 3D representations to be derived from 2D stimuli, including: performing mental rotation in depth, rather than in the picture-plane; judging the relative depth of two regions depicted in line-drawings of objects; and deciding whether a line-drawing represents an object that is 'impossible' in 3D. Interestingly, DM failed to show several visual illusions experienced by normal observers (Müller-Lyer and Ponzo), which some authors have attributed to pictorial depth cues. Taken together, these findings indicate a deficit in achieving 3D interpretations of objects from 2D pictorial cues, which may contribute to object-recognition problems in agnosia.

  20. Effects of 3-D Visualization of Groundwater Modeling for Water Resource Decision Making

    NASA Astrophysics Data System (ADS)

    Block, J. L.; Arrowsmith, R.

    2006-12-01

    The rise of 3-D visualization hardware and software technology provides important opportunities to advance scientific and policy research. Although the petroleum industry has used immersive 3-D technology since the early 1990s for the visualization of geologic data among experts, there has been little use of this technology for decision making. The Decision Theater at ASU is a new facility using immersive visualization technology designed to combine scientific research at the university with policy decision making in the community. I document a case study in the use of 3-D immersive technology for water resource management in Arizona. Since the turn of the 20th century, natural hydrologic processes in the greater Phoenix region (Salt River Valley) have been shut down via the construction of dams, canals, wells, water treatment plants, and recharge facilities. Water from rivers that once naturally recharged the groundwater aquifer has thus been diverted, while continuing groundwater outflow from wells has drawn the aquifer down hundreds of feet. MODFLOW is used to simulate groundwater response to the different water management decisions which impact the artificial and natural inflow and outflow. The East Valley Water Forum, a partnership of water providers east of Phoenix, used the 3-D capabilities of the Decision Theater to build visualizations of the East Salt River Valley groundwater system based on MODFLOW outputs to aid the design of a regional groundwater management plan. The resulting visualizations are now being integrated into policy decisions about long-term water management. I address challenges in visualizing scientific information for policy making and highlight the roles of policy actors, specifically geologists, computer scientists, and political decision makers, involved in designing the visualizations. The results show that policy actors respond differently to the 3-D visualization techniques based on their experience, background, and objectives.

  1. Measurement of in vitro and in vivo stent geometry and deformation by means of 3D imaging and stereo-photogrammetry.

    PubMed

    Zwierzak, Iwona; Cosentino, Daria; Narracott, Andrew J; Bonhoeffer, Philipp; Diaz, Vanessa; Fenner, John W; Schievano, Silvia

    2014-12-01

    To quantify variability of in vitro and in vivo measurement of 3D device geometry using 3D and biplanar imaging. Comparison of stent reconstruction is reported for in vitro coronary stent deployment (using micro-CT and optical stereo-photogrammetry) and in vivo pulmonary valve stent deformation (using 4DCT and biplanar fluoroscopy). Coronary stent strut length and inter-strut angle were compared in the fully deployed configuration. Local (inter-strut angle) and global (dog-boning ratio) measures of stent deformation were reported during stent deployment. Pulmonary valve stent geometry was assessed throughout the cardiac cycle by reconstruction of stent geometry and measurement of stent diameter. Good agreement was obtained between methods for assessment of coronary stent geometry with maximum disagreement of +/- 0.03 mm (length) and +/- 3 degrees (angle). The stent underwent large, non-uniform, local deformations during balloon inflation, which did not always correlate with changes in stent diameter. Three-dimensional reconstruction of the pulmonary valve stent was feasible for all frames of the fluoroscopy and for 4DCT images, with good correlation between the diameters calculated from the two methods. The largest compression of the stent during the cardiac cycle was 6.98% measured from fluoroscopy and 7.92% from 4DCT, both in the most distal ring. Quantitative assessment of stent geometry reconstructed from biplanar imaging methods in vitro and in vivo has shown good agreement with geometry reconstructed from 3D techniques. As a result of their short image acquisition time, biplanar methods may have significant advantages in the measurement of dynamic 3D stent deformation.
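    The ring diameters and percent compression reported above are derived from reconstructed 3D marker points on each stent ring. A minimal sketch of both quantities, assuming a roughly circular ring (function names are illustrative, not from the paper):

```python
import numpy as np

def ring_diameter(points):
    """Estimate the mean diameter of a reconstructed stent ring as twice the
    mean distance of its 3D marker points from their centroid (a simple
    estimate that assumes the ring is roughly circular)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return 2.0 * np.linalg.norm(pts - centroid, axis=1).mean()

def compression_percent(d_max, d_min):
    """Percent compression of a ring over the cardiac cycle, from its
    largest and smallest diameters."""
    return 100.0 * (d_max - d_min) / d_max
```

    Applied frame by frame over the cardiac cycle, the minimum and maximum of `ring_diameter` yield the compression figure quoted per ring.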

  2. 3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution

    NASA Astrophysics Data System (ADS)

    Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David

    2013-01-01

    Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. In their first application, we have used these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.

  3. The key techniques of 3D visualization of oceanic temperature field

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Tian, Zhen; Cheng, Fang

    2007-06-01

    Visualization is an important means of understanding and explaining natural phenomena. The visualization of the ocean can help us understand and utilize the undersea world. The ocean is a truly three-dimensional space, so its visualization includes not only the simulation of interface terrain (such as the sea surface, sea bottom, etc.) but also hydrographic features (such as salinity, temperature, pressure, current directions, etc.). In this paper, taking the temperature field in the sea as the example, we discuss the visualization of space-filling field data from a viewpoint located within the field. We analyze the acquisition and interpolation of 3-D oceanic data in section 2, propose an Octree model in section 3, introduce visualization in scientific computing and implement temperature field visualization based on volume rendering in section 4. Lastly, some conclusions are given in section 5.
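    Sampling the temperature field anywhere inside the volume, as both the interpolation step and the volume-rendering step require, typically relies on trilinear interpolation within grid cells. A minimal sketch, assuming a regular grid with unit spacing (the function name is illustrative):

```python
import numpy as np

def trilinear(field, x, y, z):
    """Trilinearly interpolate a regular 3-D grid field[i, j, k] at the
    fractional index position (x, y, z): blend the 8 surrounding grid
    values with weights given by the fractional offsets."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1.0 - fx) *
                     (fy if dy else 1.0 - fy) *
                     (fz if dz else 1.0 - fz))
                value += w * field[x0 + dx, y0 + dy, z0 + dz]
    return value
```

    On a field that is linear in the grid indices, trilinear interpolation is exact, which gives a convenient sanity check.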

  4. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
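    The distinction drawn above between instantaneous streamlines (particles traced through one frozen time step) and streaklines (particles released over time and advected through all time steps) can be sketched as follows. The flow field, forward-Euler integration, and all names are illustrative only; the NASA system integrates real unsteady CFD solutions with more careful integrators:

```python
import numpy as np

def velocity(p, t):
    """Hypothetical unsteady 2-D flow: a uniform flow whose direction
    rotates with time (for demonstration only)."""
    return np.array([np.cos(t), np.sin(t)])

def streamline(seed, t_fixed, dt=0.01, steps=200):
    """Instantaneous streamline: integrate with time frozen at t_fixed."""
    p, pts = np.array(seed, dtype=float), []
    for _ in range(steps):
        pts.append(p.copy())
        p = p + dt * velocity(p, t_fixed)
    return np.array(pts)

def streakline(seed, t_end, dt=0.01):
    """Streakline: at every time step a new particle is released from the
    seed, and all particles so far are advected through the unsteady field."""
    n = int(t_end / dt)
    particles = []
    for k in range(n):
        particles.append(np.array(seed, dtype=float))  # release a particle
        t = k * dt
        particles = [p + dt * velocity(p, t) for p in particles]
    return np.array(particles)  # oldest particle first
```

    For a steady field the two curves coincide; for the rotating field above they diverge, which is exactly the extra information the abstract says streaklines reveal.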

  5. Depth cues in human visual perception and their realization in 3D displays

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert

    2010-04-01

    Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been technologically demonstrated and refined, among them stereoscopic, multi-view, integral imaging, volumetric, and holographic types. Most of the current approaches utilize the conventional stereoscopic principle. But they all suffer from the inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception for both image quality and visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.

  6. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  7. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd (Data Analysis and Visualization and the Department of Computer Science, University of California, Davis, CA, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Genomics Division, and Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; all authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory)

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
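
The user-guided clustering of per-cell expression vectors described above can be illustrated with a bare-bones k-means. This is a generic sketch with made-up data and function names, not the framework's actual interface.

```python
# Sketch: k-means clustering of gene expression vectors, in the spirit of
# the clustering-based classification described above. Illustrative only.
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster tuples of expression values into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)

    def nearest(p):
        # index of the closest center by squared Euclidean distance
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        # recompute each center as the mean of its members
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, [nearest(p) for p in points]
```

Evaluating different values of k, as the abstract discusses, would amount to rerunning this with varying `k` and comparing cluster quality.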

  8. Visualizing Science Dissections in 3D: Contextualizing Student Responses to Multidimensional Learning Materials in Science Dissections

    NASA Astrophysics Data System (ADS)

    Walker, Robin Annette

    A series of dissection tasks was developed in this mixed-methods study of student self-explanations of their learning using actual and virtual multidimensional science dissections and visuo-spatial instruction. Thirty-five seventh-grade students from a science classroom (N = 35; 20 female, 15 male; age 13 years) were assigned to three dissection environments instructing them to: (a) construct static paper designs of frogs, (b) perform active dissections with formaldehyde specimens, and (c) engage with interactive 3D frog visualizations and virtual simulations. This mixed-methods analysis of student engagement with anchored dissection materials found learning gains on labeling exercises and lab assessments among most students. Data revealed that students who correctly utilized multimedia text and diagrams, individually and collaboratively, manipulated 3D tools more effectively and were better able to self-explain and complete their dissection work. Student questionnaire responses corroborated that they preferred learning how to dissect a frog using 3D multimedia instruction. The data were used to discuss the impact of 3D technologies, programs, and activities on student learning, spatial reasoning, and their interest in science. Implications were drawn regarding how to best integrate 3D visualizations into science curricula as innovative learning options for students, as instructional alternatives for teachers, and as mandated dissection choices for those who object to physical dissections in schools.

  9. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE PAGES

    Roussos, Constantinos C.; Swingler, Jonathan

    2017-01-11

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Given this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) in the two conductors which make up the contact system. Visualizing the contact asperities requires the discretization of the 3D microstructures of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which leads to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data on a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation.

  10. A 3D contact analysis approach for the visualization of the electrical contact asperities

    PubMed Central

    Swingler, Jonathan

    2017-01-01

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Given this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) in the two conductors which make up the contact system. Visualizing the contact asperities requires the discretization of the 3D microstructures of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which leads to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data on a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation. PMID:28105383

  11. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization.

    PubMed

    Sato, Y; Nakamoto, M; Tamaki, Y; Sasama, T; Sakita, I; Nakajima, Y; Monden, M; Tamura, S

    1998-10-01

    This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. By combining an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.
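
The geometric core of such an overlay, projecting a point of the reconstructed 3D tumor model into the video camera's image using the measured pose, can be sketched with a pinhole camera model. The pose, intrinsic parameters, and function name below are made-up illustrations, not the paper's calibration or method.

```python
# Sketch: projecting a 3D world point (e.g. a tumor-model vertex tracked
# by the optical position sensor) into video image coordinates with a
# pinhole camera model. All numbers are illustrative.

def project(point_world, R, t, fx, fy, cx, cy):
    """World point -> camera frame (p_cam = R @ p_world + t) -> pixels."""
    x, y, z = (
        sum(R[i][j] * point_world[j] for j in range(3)) + t[i]
        for i in range(3)
    )
    # perspective divide, then apply focal lengths and principal point
    return (fx * x / z + cx, fy * y / z + cy)
```

With per-frame camera pose (R, t) from the 3D position sensor, every model vertex can be drawn at its projected pixel location on the live video, which is the superimposition the abstract describes.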

  12. GPU-accelerated 3D mipmap for real-time visualization of ultrasound volume data.

    PubMed

    Kwon, Koojoo; Lee, Eun-Seok; Shin, Byeong-Seok

    2013-10-01

    Ultrasound volume rendering is an efficient method for visualizing the shape of fetuses in obstetrics and gynecology. However, in order to obtain high-quality ultrasound volume rendering, noise removal and coordinates conversion are essential prerequisites. Ultrasound data needs to undergo a noise filtering process; otherwise, artifacts and speckle noise cause quality degradation in the final images. Several two-dimensional (2D) noise filtering methods have been used to reduce this noise. However, these 2D filtering methods ignore relevant information in-between adjacent 2D-scanned images. Although three-dimensional (3D) noise filtering methods are used, they require more processing time than 2D-based methods. In addition, the sampling position in the ultrasonic volume rendering process has to be transformed between conical ultrasound coordinates and Cartesian coordinates. We propose a 3D-mipmap-based noise reduction method that uses graphics hardware, as a typical 3D mipmap requires less time to be generated and less storage capacity. In our method, we compare the density values of the corresponding points on consecutive mipmap levels and find the noise area using the difference in the density values. We also provide a noise detector for adaptively selecting the mipmap level using the difference of two mipmap levels. Our method can visualize 3D ultrasound data in real time with 3D noise filtering.
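
The level-difference idea behind the noise detector can be sketched on the CPU: build one 3D mipmap level by 2x average-downsampling and flag voxels where the fine and coarse levels disagree by more than a threshold. Array shapes, the threshold, and function names are illustrative assumptions; the paper's GPU implementation and coordinate conversion are not reproduced.

```python
# Sketch: 3D-mipmap-based speckle detection by comparing density values
# on consecutive mipmap levels. Illustrative, CPU-only version.
import numpy as np

def downsample(vol):
    """One 3D mipmap level: average over 2x2x2 voxel blocks."""
    d, h, w = (s // 2 for s in vol.shape)
    return vol[:2 * d, :2 * h, :2 * w].reshape(d, 2, h, 2, w, 2).mean(axis=(1, 3, 5))

def noise_mask(vol, threshold):
    """Flag voxels whose value differs strongly from the coarser level."""
    coarse = downsample(vol)
    # upsample the coarse level back to full resolution by repetition
    up = coarse.repeat(2, 0).repeat(2, 1).repeat(2, 2)
    return np.abs(vol[:up.shape[0], :up.shape[1], :up.shape[2]] - up) > threshold
```

An isolated speckle voxel barely shifts the 2x2x2 block average, so its fine-vs-coarse difference is large and it gets flagged, while smooth regions produce no difference.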

  13. A high-level 3D visualization API for Java and ImageJ

    PubMed Central

    2010-01-01

    Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de. PMID:20492697

  14. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  15. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool.

  16. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  17. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.

  18. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data

  19. Unified framework for generation of 3D web visualization for mechatronic systems

    NASA Astrophysics Data System (ADS)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with development of a unified framework for generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. Open-architecture multi-platform solution based on latest web standards is achieved by utilizing a web browser as a final 3D renderer. The results are applicable both for simulations and development of real-time human machine interfaces. Case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  20. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli, and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show that there is a significant difference (p < 0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. The 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses, and modify the proper disparity of 3D images automatically in the future.
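
The SNR measure used to compare disparity conditions can be sketched as the spectral power at the stimulus frequency divided by the mean power of neighboring frequency bins. The synthetic signal, sampling rate, and neighbor count below are illustrative assumptions, not the study's recording parameters.

```python
# Sketch: SNR of an SSVEP response at the stimulus frequency, computed
# from the power spectrum. Synthetic data; illustrative only.
import numpy as np

def ssvep_snr(signal, fs, stim_hz, n_neighbors=4):
    """Power at the stimulus-frequency bin over mean power of neighbors."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - stim_hz)))   # bin of the stimulus frequency
    lo, hi = max(k - n_neighbors, 1), k + n_neighbors + 1
    neighbors = np.concatenate([power[lo:k], power[k + 1:hi]])
    return power[k] / neighbors.mean()

# usage: a 10 Hz SSVEP buried in noise should yield a high SNR at 10 Hz
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
```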

  1. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  2. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
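
Two of the classic screen-based renderings such a pipeline produces can be sketched directly on a voxel array: a maximum intensity projection (MIP) and simple front-to-back alpha compositing. The tiny synthetic volume and uniform opacity below are illustrative assumptions, not a full renderer.

```python
# Sketch: two computer-display-based volume renderings of a CT-like
# voxel array. Illustrative only.
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: brightest voxel along a view axis."""
    return volume.max(axis=axis)

def composite(volume, alpha=0.1):
    """Front-to-back alpha compositing along axis 0 with uniform opacity."""
    image = np.zeros(volume.shape[1:])
    transmittance = np.ones(volume.shape[1:])
    for slab in volume:  # iterate over depth slices, front to back
        image += transmittance * alpha * slab
        transmittance *= (1.0 - alpha)
    return image
```

MIP highlights bright structures regardless of depth, while compositing weights each slice by the light remaining after the slices in front of it; both reduce the 3D data to the 2D screen the presentation assumes.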

  3. RUNTHRU6.0. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    Janucik, F.X.; Ross, D.M.; Sischo, K.F.

    1997-01-01

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero-area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n^2) algorithms required to provide the above features have been recast and are O(n log(n)), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.
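
The error checks transl8g performs, identifying duplicate and zero-area triangles, can be sketched as follows. The function names and tolerance are illustrative, not the RUNTHRU programs' actual interfaces.

```python
# Sketch: flagging zero-area and duplicate triangles in a triangle mesh,
# in the spirit of transl8g's error checking. Illustrative only.

def triangle_area(a, b, c):
    """Half the magnitude of the cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * sum(x * x for x in cross) ** 0.5

def find_bad_triangles(triangles, eps=1e-12):
    """Return indices of zero-area triangles and of duplicates."""
    seen, zero, dupes = set(), [], []
    for idx, tri in enumerate(triangles):
        key = tuple(sorted(tri))  # duplicates match regardless of vertex order
        if key in seen:
            dupes.append(idx)
        seen.add(key)
        if triangle_area(*tri) < eps:
            zero.append(idx)
    return zero, dupes
```

Sorting each triangle's vertices before hashing is what makes the duplicate check independent of vertex ordering, which matters because transl8g also normalizes vertex ordering separately.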

  4. Smartphone as a Remote Touchpad to Facilitate Visualization of 3D Cerebral Angiograms during Aneurysm Surgery.

    PubMed

    Eftekhar, Behzad

    2017-03-01

    Background: During aneurysm surgery, neurosurgeons may need to look at the cerebral angiograms again to better orient themselves to the aneurysm and also the surrounding vascular anatomy. Simplification of the intraoperative imaging review and reduction of the time interval between the view under the microscope and the angiogram review can theoretically improve orientation. Objective: To describe the use of a smartphone as a remote touchpad to simplify intraoperative visualization of three-dimensional (3D) cerebral angiograms and reduce the time interval between the view under the microscope and the angiogram review. Methods: Anonymized 3D angiograms of the patients in Virtual Reality Modelling Language (VRML) format are securely uploaded to sketchfab.com, accessible through smartphone Web browsers. Simple software has been developed and made available to facilitate the workflow. The smartphone is connected wirelessly to an external monitor using a Chromecast device and is used intraoperatively as a remote touchpad to view/rotate/zoom the 3D aneurysm angiograms on the external monitor. Results: Implementation of the method is practical and helpful for the surgeon in certain cases. It also helps the operating staff, registrars, and students to orient themselves to the surgical anatomy. I present 10 of the uploaded angiograms published online. Conclusion: The concept and method of using the smartphone as a remote touchpad to improve intraoperative visualization of 3D cerebral angiograms is described. The implementation is practical, using hardware and software easily available in most neurosurgical centers worldwide. The method and concept have potential for further development.

  5. Visualization of hepatic arteries with 3D ultrasound during intra-arterial therapies

    NASA Astrophysics Data System (ADS)

    Gérard, Maxime; Tang, An; Badoual, Anaïs.; Michaud, François; Bigot, Alexandre; Soulez, Gilles; Kadoury, Samuel

    2016-03-01

    Liver cancer represents the second most common cause of cancer-related mortality worldwide. The prognosis is poor, with an overall mortality of 95%. Moreover, most hepatic tumors are unresectable due to their advanced stage at discovery or poor underlying liver function. Tumor embolization by intra-arterial approaches is the current standard of care for advanced cases of hepatocellular carcinoma. These therapies rely on the fact that the blood supply of primary hepatic tumors is predominantly arterial. Feedback on blood flow velocities in the hepatic arteries is crucial to ensure maximal treatment efficacy on the targeted masses. Based on these velocities, the intra-arterial injection rate is modulated for optimal infusion of the chemotherapeutic drugs into the tumorous tissue. While Doppler ultrasound is a well-documented technique for the assessment of blood flow, 3D visualization of vascular anatomy with ultrasound remains challenging. In this paper we present an image-guidance pipeline that enables the localization of the hepatic arterial branches within a 3D ultrasound image of the liver. A diagnostic magnetic resonance angiography (MRA) is first processed to automatically segment the hepatic arteries. A non-rigid registration method is then applied to the portal phase of the MRA volume with a 3D ultrasound to enable the visualization of the 3D mesh of the hepatic arteries in the Doppler images. To evaluate the performance of the proposed workflow, we present initial results from porcine models and patient images.

  6. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualization of the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Clearly, working only with 2D measurements seriously limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with data obtained from CT scout scans. In the first method the scout data are converted to sinogram data, and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.
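    The final step of the second method, turning detected boundary points into a per-vertebra 2D convex hull, can be illustrated with the standard monotone-chain algorithm (an illustrative sketch; the paper's linear-constraint formulation is not reproduced here):

```python
# Andrew's monotone-chain convex hull: given 2D boundary points
# detected for one vertebra, return the hull vertices in
# counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping the duplicated endpoints
    return lower[:-1] + upper[:-1]
```

    Interior points (e.g. noise inside the vertebral boundary) are discarded automatically, leaving only the extreme outline.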

  7. System for the Analysis and Visualization of Large 3D Anatomical Trees

    PubMed Central

    Yu, Kun-Chang; Ritman, Erik L.; Higgins, William E.

    2007-01-01

    Modern micro-CT and multi-detector helical CT scanners can produce high-resolution 3D digital images of various anatomical trees. The large size and complexity of these trees make it essentially impossible to define them interactively. Automatic approaches have been proposed for a few specific problems, but none of these approaches guarantee extracting geometrically accurate multi-generational tree structures. This paper proposes an interactive system for defining and visualizing large anatomical trees and for subsequent quantitative data mining. The system consists of a large number of tools for automatic image analysis, semi-automatic and interactive tree editing, and an assortment of visualization tools. Results are presented for a variety of 3D high-resolution images. PMID:17669390

  8. Suitability of online 3D visualization technique in oil palm plantation management

    NASA Astrophysics Data System (ADS)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

    The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Many studies have therefore focused on how to help this industry increase its productivity. To increase productivity, the management of oil palm plantations needs to be improved and strengthened. One solution for helping oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. A potential application is to help with fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations from the viewpoint of oil palm managers, through interviews. The interview results help identify the issues that should be highlighted when implementing an online 3D visualization technique for oil palm plantation management.

  9. 3D visualization environment for analysis of telehealth indicators in public health.

    PubMed

    Filho, Amadeu S Campos; Novaes, Magdala A; Gomes, Alex S

    2013-01-01

    With the growth of telehealth applications, public health managers need tools that facilitate visualization of the indicators produced by telehealth services, as well as simple systems for better planning of interventions. Furthermore, many health professionals consider existing health systems difficult for finding the right information [1] because of the complexity of their graphical user interfaces (GUIs) and the high cognitive load needed to handle them. To overcome this problem, we propose a 3D environment for the analysis of telehealth indicators in public health by managers of public health sites. The intended users are public health managers of family health sites that participate in the Network of Telehealth Centers of Pernambuco (RedeNUTES) [2], part of the Brazilian telehealth program. This paper presents a 3D environment for the analysis of telehealth indicators by public health managers.

  10. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    LCD shutter, costs about $2,000. This project aims to study display technology and 3D data visualization techniques...consumption, combined with competition, will continue to drive down prices and improve the performance of 3D technology. This project aims to...work, the technologies are ranked as follows: LCD shutter glasses are the best, followed by stereograms with

  11. Image processing and 3D visualization in the interpretation of patterned injury of the skin

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1995-09-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.

  12. Geospatial Data Processing for 3d City Model Generation, Management and Visualization

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S.

    2017-05-01

    Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above - http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  13. Integration of Robotics and 3D Visualization to Modernize the Expeditionary Warfare Demonstrator (EWD)

    DTIC Science & Technology

    2009-09-01

    continually provided explanation and guidance on the tactical use of 3D holograms to help me complete this work. Thank you to Dennis Lenahan for his...continual support. Dennis spent many hours compiling historical data on the facility and his inputs are seen throughout this thesis. Dennis is truly...benefits for tactical visualization. Finally, methods to integrate holography into the EWD are recommended. B. A BRIEF HISTORY OF HOLOGRAPHY Dennis

  14. Optimization of site characterization and remediation methods using 3-D geoscience modeling and visualization techniques

    SciTech Connect

    Hedegaard, R.F.; Ho, J.; Eisert, J.

    1996-12-31

    Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated-zone spill sites at two Superfund (CERCLA) sites (military installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS™, Wavefront's Data Visualizer™, and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health-risk thresholds. The developed methodology allows rapid data inspection for decisions, such that the characterization process and remedial action design are optimized. By using the 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based either on 'linked' parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited and excavation costs are estimated using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained, and the enhanced visualization capability, which is a desirable medium for communicating with both technical staff and nontechnical audiences.
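    The indicator coding and variography steps described above can be sketched as follows (illustrative code, not the authors' software; an isotropic experimental semivariogram stands in for full 3-D directional variography):

```python
import numpy as np

def indicator_transform(values, threshold):
    """Code raw analyte values against a health-risk threshold:
    1 where contamination exceeds it, 0 otherwise."""
    return (np.asarray(values, dtype=float) > threshold).astype(int)

def empirical_variogram(coords, values, lags, tol):
    """Isotropic experimental semivariogram gamma(h) over sample pairs
    whose separation distance falls within tol of each lag h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

    The indicator-transformed data feed indicator kriging of exceedance probability, while the semivariogram estimates supply the spatial-continuity model that kriging requires.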

  15. Real-Time Modeling and 3D Visualization of Source Dynamics and Connectivity Using Wearable EEG

    PubMed Central

    Mullen, Tim; Kothe, Christian; Chi, Yu Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This report summarizes our recent efforts to deliver real-time data extraction, preprocessing, artifact rejection, source reconstruction, multivariate dynamical system analysis (including spectral Granger causality) and 3D visualization as well as classification within the open-source SIFT and BCILAB toolboxes. We report the application of such a pipeline to simulated data and real EEG data obtained from a novel wearable high-density (64-channel) dry EEG system. PMID:24110155

  16. Hierarchical storage and visualization of real-time 3D data

    NASA Astrophysics Data System (ADS)

    Parry, Mitchell; Hannigan, Brendan; Ribarsky, William; Shaw, Christopher D.; Faust, Nickolas L.

    2001-08-01

    In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large-scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. Results are presented for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields.
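    A global geospatial hierarchy of the kind described is commonly indexed as a quadtree over longitude/latitude; a minimal sketch of such tile addressing (hypothetical, and far simpler than the actual VGIS structure):

```python
# Map a geographic coordinate to its tile at a given quadtree level,
# for a global grid spanning lon [-180, 180] and lat [-90, 90].
# Deeper levels correspond to finer levels of detail.
def tile_for(lon, lat, level):
    n = 2 ** level  # tiles per axis at this level
    x = int((lon + 180.0) / 360.0 * n)
    y = int((lat + 90.0) / 180.0 * n)
    # clamp the upper boundary (lon = 180, lat = 90) into the last tile
    return min(x, n - 1), min(y, n - 1)
```

    A renderer would fetch coarse-level tiles for distant regions and descend to finer levels only where the viewer is close, which is how the level-of-detail budget stays bounded.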

  17. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text, contributes to the learning process of 13- and 14- years-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  19. Design of a parallel VLSI engine for real-time visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Bentum, Mark J.; Smit, Jaap

    1994-05-01

    Three-dimensional medical scanners are widely available in today's hospitals to acquire a dataset of the human body without the need for surgery. The usefulness of this diagnostic information is limited by the lack of techniques to visualize the datasets. With the increasing computer power of today's workstations it is possible to make a transparent view of the 3D dataset. An interactive mode is necessary, however, to fully explore the 3D dataset. If both a high resolution and a high interactive speed are required, the necessary computational power is enormous. Therefore it is necessary to map the algorithms for volume visualization in a rather specific way onto (dedicated) chips to overcome the performance gap. This paper discusses a high-performance special-purpose low-power system, the Real-Time Volume Rendering Engine (RT-VRE), capable of rendering a 3D dataset of 256³ voxels onto a display of 750² pixels with an interaction rate of 25 images per second. The RT-VRE allows biomedical engineers to interactively visualize and investigate their data.
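    The stated target implies the raw voxel throughput the engine must sustain; a quick back-of-the-envelope check:

```python
# Rendering load implied by the paper's target: a 256^3-voxel dataset
# rendered at 25 interactive frames per second.
voxels_per_frame = 256 ** 3                  # 16,777,216 voxels
voxels_per_second = voxels_per_frame * 25    # ~4.2e8 voxels/s
```

    On the order of 4×10⁸ voxel accesses per second, which is why a dedicated parallel VLSI design rather than a general-purpose workstation was pursued.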

  20. The Effect of 3D Visual Simulator on Children’s Visual Acuity - A Pilot Study Comparing Two Different Modalities

    PubMed Central

    Ide, Takeshi; Ishikawa, Mariko; Tsubota, Kazuo; Miyao, Masaru

    2013-01-01

    Purpose: To evaluate the efficacy of two non-surgical interventions for vision improvement in children. Methods: A prospective, randomized pilot study comparing the fogging method and the use of a head-mounted 3D display. Subjects were children between 5 and 15 years old, with normal best-corrected visual acuity (BCVA) and up to −3 D of myopia. Subjects played a video game as near-point work, and then received one of the two treatments. Measurements of uncorrected far visual acuity (UCVA), refraction with an autorefractometer, and subjective accommodative amplitude were taken three times: at baseline, after the near work, and after the treatment. Results: Both methods, applied after near work, improved UCVA. The head-mounted 3D display group showed significant improvement in UCVA and achieved better UCVA than at baseline. The fogging group showed improvement in subjective accommodative amplitude. While the 3D display group showed no change in refraction, the fogging group's refraction showed a significant myopic shift after near work and treatment. Discussion: Despite our lack of clear knowledge of the mechanisms, both methods improved UCVA after the treatments. The improvement in UCVA was not correlated with measured refraction values. Conclusion: UCVA after near work can be improved by repeated near and distant accommodation through fogging and 3D image viewing, although to different degrees. Further investigation of the mechanisms of improvement and their clinical significance is warranted. PMID:24222810

  1. UCVM: An Open Source Software Package for Querying and Visualizing 3D Velocity Models

    NASA Astrophysics Data System (ADS)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2015-12-01

    Three-dimensional (3D) seismic velocity models provide foundational data for ground motion simulations that calculate the propagation of earthquake waves through the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) package for both Linux and OS X. This unique framework provides a cohesive way of querying and visualizing 3D models. UCVM v14.3.0 supports many Southern California velocity models, including CVM-S4, CVM-H 11.9.1, and CVM-S4.26. The last model was derived from 26 full-3D tomographic iterations on CVM-S4. Recently, UCVM has been used to deliver a prototype of a new 3D model of central California (CCA), also based on full-3D tomographic inversions. UCVM was used to provide initial plots of this model and will be used to deliver CCA to users when the model is publicly released. Visualizing models is also possible with UCVM. Integrated within the platform are plotting utilities that can generate 2D cross-sections, horizontal slices, and basin depth maps. UCVM can also export models in NetCDF format for easy import into IDV and ParaView. UCVM has also been prototyped to export models that are compatible with IRIS' new Earth Model Collaboration (EMC) visualization utility. This capability allows user-specified horizontal slices and cross-sections to be plotted in the same 3D Earth space. UCVM was designed to help a wide variety of researchers. It is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. It is also used to provide the initial input to SCEC's CyberShake platform. For those interested in specific data points, the software framework makes it easy to extract P and S wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth.
Also included in the last release was the ability to add small
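    The point-query interface described above can be illustrated with a toy gridded model (hypothetical code; this is not the real UCVM API, and the grid layout, class names, and nearest-neighbor lookup are all illustrative assumptions):

```python
import numpy as np

class GriddedModel:
    """Toy stand-in for a velocity-model query interface: returns
    Vp and Vs at the grid node nearest a (lon, lat, depth) point."""

    def __init__(self, lons, lats, depths, vp, vs):
        self.lons, self.lats, self.depths = lons, lats, depths
        self.vp, self.vs = vp, vs  # arrays shaped (nlon, nlat, ndepth)

    def query(self, lon, lat, depth):
        i = int(np.abs(self.lons - lon).argmin())
        j = int(np.abs(self.lats - lat).argmin())
        k = int(np.abs(self.depths - depth).argmin())
        return self.vp[i, j, k], self.vs[i, j, k]
```

    A real framework adds model registration, coordinate projections, and interpolation on top of this basic pattern, but the contract is the same: one location in, material properties out.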

  2. Study of the global corona evolution from the minimum to maximum of solar cycle 24 using 3D coronal electron density reconstructions with STEREO/COR1

    NASA Astrophysics Data System (ADS)

    Wang, Tongjiang; Reginald, Nelson Leslie; Davila, Joseph; St. Cyr, Orville Chris; Thompson, William T.

    2017-08-01

    This study aims at understanding the global evolution of coronal activity during Solar Cycle 24 on both long-term and short-term time scales. Using a spherically symmetric polynomial approximation (SSPA) method described and validated in Wang and Davila (2014), the 3D coronal electron density in the height range of 1.5 to 3.7 Rsun is reconstructed from STEREO/COR1-A and -B polarized-brightness (pB) data. The reconstructions span a period from the Cycle 23/24 minimum to the Cycle 24 maximum, covering Carrington rotations (CRs) 2054-2153, for a total of 100 rotations. These 3D electron density distributions are validated by comparison with similar density models derived using other methods such as tomography and an MHD model, as well as with data from SOHO/LASCO-C2. Uncertainties in the density reconstruction and the estimated total coronal mass are analyzed. The cycle minimum-to-maximum modulation factors (MFs) of the coronal average electron density (or the total coronal mass) in different latitudinal ranges are quantified. Wavelet analysis of the cycle-long detrended density data reveals the existence of quasi-periodic short-term (7-8 month) variations during the rising and maximum activity phases. For the total mass of streamers, the MFs depend on the changes in both their total area and their average density, but the short-term oscillations are mainly caused by streamer density fluctuations. A clear asymmetry is observed in the temporal evolution of the northern and southern hemispheres, with the former leading the latter by 7-9 months, with a mild dependence on the latitude range.
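    A spherically symmetric radial density fit in the spirit of the SSPA approach can be sketched with linear least squares over inverse-power basis terms (an illustrative sketch: the basis powers and coefficients below are assumptions, not the paper's):

```python
import numpy as np

def fit_radial_density(r, ne, powers=(2, 4, 6)):
    """Least-squares fit of N_e(r) = sum_k a_k * r**(-k) over the
    chosen inverse powers; r in solar radii, ne in electrons/cm^3."""
    A = np.column_stack([np.asarray(r, dtype=float) ** (-p) for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(ne, dtype=float), rcond=None)
    return dict(zip(powers, coeffs))

def eval_density(r, coeffs):
    """Evaluate the fitted radial density profile."""
    r = np.asarray(r, dtype=float)
    return sum(a * r ** (-p) for p, a in coeffs.items())
```

    Applied per line of sight (or per latitude bin), such fits compress a noisy radial profile into a few coefficients that can be tracked rotation by rotation, which is what makes cycle-long modulation and wavelet analyses tractable.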

  3. Deblocking of mobile stereo video

    NASA Astrophysics Data System (ADS)

    Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen

    2012-02-01

    Most candidate methods for compression of mobile stereo video apply block-transform-based compression based on the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance/rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking of mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of the stereo video and suggest a hybrid four-dimensional transform to process the collected, synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
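    Alpha-rooting itself is a standard transform-domain sharpener: coefficient magnitudes are raised to a power alpha < 1 relative to the DC term, which boosts the weaker (high-frequency) components. A minimal 2-D DFT sketch (illustrative only; the paper embeds the operation inside its hybrid collaborative-filtering transform, not a per-frame DFT):

```python
import numpy as np

def alpha_rooting(block, alpha=0.9):
    """Sharpen a 2-D block by scaling each DFT coefficient with
    (|X(k)| / |X(0)|) ** (alpha - 1); alpha < 1 boosts AC terms."""
    X = np.fft.fft2(np.asarray(block, dtype=float))
    mag = np.abs(X)
    dc = mag[0, 0] if mag[0, 0] > 0 else 1.0
    gain = np.ones_like(mag)
    nz = mag > 0
    gain[nz] = (mag[nz] / dc) ** (alpha - 1.0)  # > 1 for weaker coefficients
    return np.real(np.fft.ifft2(X * gain))
```

    Phase is left untouched, so edge locations are preserved while their contrast is raised; a flat block passes through unchanged.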

  4. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e., they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described, generating a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline.
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  5. Interactive Visualization of 3-D Mantle Convection Extended Through AJAX Applications

    NASA Astrophysics Data System (ADS)

    McLane, J. C.; Czech, W.; Yuen, D.; Greensky, J.; Knox, M. R.

    2008-12-01

    We have designed a new software system for real-time interactive visualization of results taken directly from large-scale simulations of 3-D mantle convection and other large-scale simulations. This approach allows for intense visualization sessions for a couple of hours as opposed to storing massive amounts of data in a storage system. Our data sets consist of 3-D data for volume rendering with over 10 million unknowns at each timestep. Large scale visualization on a display wall holding around 13 million pixels has already been accomplished with extension to hand-held devices, such as the OQO and Nokia N800 and recently the iPHONE. We are developing web-based software in Java to extend the use of this system across long distances. The software is aimed at creating an interactive and functional application capable of running on multiple browsers by taking advantage of two AJAX-enabled web frameworks: Echo2 and Google Web Toolkit. The software runs in two modes allowing for a user to control an interactive session or observe a session controlled by another user. Modular build of the system allows for components to be swapped out for new components so that other forms of visualization could be accommodated such as Molecular Dynamics in mineral physics or 2-D data sets from lithospheric regional models.

  6. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  8. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are better adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of using these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. Overall, the proposed solution yields higher-quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.
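
A sketch of how several saliency channels might be fused and then used to protect vertices during simplification. This is not the authors' QSlim-based code; the channel names, equal default weighting, and the keep_fraction parameter are illustrative assumptions.

```python
import numpy as np

def combined_saliency(channels, weights=None):
    """Fuse per-vertex saliency channels into one score.

    channels: dict name -> 1-D array of raw per-vertex values
    (e.g. curvature, symmetry, contrast, entropy). Each channel is
    min-max normalised before the weighted sum, so no channel
    dominates merely because of its units.
    """
    names = sorted(channels)
    if weights is None:
        weights = {n: 1.0 for n in names}
    total = np.zeros(len(next(iter(channels.values()))), dtype=float)
    for n in names:
        c = np.asarray(channels[n], dtype=float)
        span = c.max() - c.min()
        norm = (c - c.min()) / span if span > 0 else np.zeros_like(c)
        total += weights[n] * norm
    return total

def protected_vertices(saliency, keep_fraction=0.2):
    """Indices of the most salient vertices, to be exempted from
    (or weighted against) edge-collapse simplification."""
    k = max(1, int(len(saliency) * keep_fraction))
    return np.argsort(saliency)[::-1][:k]
```

In a real pipeline the protected set would feed into the simplifier's cost function rather than being an absolute exclusion list.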

  9. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome.

    PubMed

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can not only provide information about the localization and distribution of the volume loss, but also help to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and by the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line of sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run in both standard desktop environments and immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions.

  10. Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.

    2016-06-01

    Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of them. The main objective of this paper is to identify potential differences in user performance between static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selecting the highest point and determining visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. Movement and actions in the virtual environment were also recorded in the interactive variant. The results show that participants dealt with the tasks faster when using static visualization; the average error rate, however, was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  11. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually relate to the strong compromise between 2D and 3D visibility or to the lack of depth perception. In this paper, we investigate several concepts enabling improvement of the image fusion visualization currently found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire with 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, integrating an RGB or RB color-depth encoding into the image fusion improves both perception and intuitiveness. Copyright © 2014 Elsevier Ltd. All rights reserved.
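
One simple way to realize a color-depth encoding like the RB scheme described above is a linear red-to-blue ramp over the clamped, normalised depth (near structures red, far structures blue). The mapping below is an assumed example of the general idea, not the paper's exact encoding.

```python
def depth_to_rgb(depth, d_min, d_max):
    """Encode a depth value as an RGB triple: near -> red, far -> blue.

    The linear red-blue ramp lets a viewer judge whether one structure
    (e.g. a vessel) lies in front of or behind another in the fused view.
    """
    t = (depth - d_min) / (d_max - d_min)
    t = min(1.0, max(0.0, t))  # clamp depths outside the encoded range
    return (int(round(255 * (1 - t))), 0, int(round(255 * t)))
```

An RGB variant would simply route the normalised depth through a three-channel colormap instead of the two-channel ramp.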

  12. Open source 3D visualization and interaction dedicated to hydrological models

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2014-05-01

    Climate change and surface urbanization strongly modify the hydrological cycle in urban areas, increasing the consequences of extreme events such as floods or droughts. These issues led to the development of the Multi-Hydro model at the Ecole des Ponts ParisTech (A. Giangola-Murzyn et al., 2012). This fully distributed model computes the hydrological response of urban and peri-urban areas. Unfortunately, such models are seldom user friendly: generating the inputs before launching a new simulation is usually a tricky task, and understanding and interpreting the outputs remain specialist tasks not accessible to the wider public. The MH-AssimTool was developed to overcome these issues. To enable an easier and improved understanding of the model outputs, we decided to convert the raw output data (grid files in ASCII format) to a 3D display. Some commercial models provide 3D visualization, but because of the cost of their licenses, such tools may not be accessible to the most concerned stakeholders. We are therefore developing a new tool based on C++ for the computation, Qt for the graphical user interface, QGIS for the geographical side, and OpenGL for the 3D display. All these languages and libraries are open source and multi-platform. We will discuss some preprocessing issues in the data conversion from 2.5D to 3D. The GIS data are 2.5D (i.e., a 2D polygon plus one height), and transforming them for 3D display involves several algorithms. For example, to visualize one building in 3D, each point needs coordinates and an elevation according to the topography, and new points have to be created to represent the walls. Finally, the interactions between the model and stakeholders through this new interface, and how this helps convert a research tool into an efficient operational decision tool, will be discussed. 
This ongoing research on the improvement of the visualization methods is supported by the
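
The 2.5D-to-3D conversion described above (a 2D polygon plus one height becoming a building with walls) can be sketched as a plain polygon extrusion: every footprint edge yields one wall quad, and the footprint lifted by the height yields the roof. The function below is an illustrative sketch; the real tool additionally samples the terrain so each point gets its own ground elevation.

```python
def extrude_footprint(footprint, ground_z, height):
    """Turn a 2.5D building (2-D polygon + one height) into 3-D geometry.

    footprint: list of (x, y) vertices in order. Each edge becomes one
    rectangular wall, returned as its 4 corner points; the roof is the
    footprint lifted to ground_z + height.
    """
    top_z = ground_z + height
    walls = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x0, y0, ground_z), (x1, y1, ground_z),
                      (x1, y1, top_z), (x0, y0, top_z)])
    roof = [(x, y, top_z) for x, y in footprint]
    return walls, roof
```

With a terrain model, `ground_z` would vary per vertex instead of being a single constant.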

  13. Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology

    PubMed Central

    Fai, Stephen; Bennett, Steffany A.L.

    2010-01-01

    The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to recreate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic editing and 3D modelling software rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere. This

  14. Multispectral photon counting integral imaging system for color visualization of photon limited 3D scenes

    NASA Astrophysics Data System (ADS)

    Moon, Inkyu

    2014-06-01

    This paper provides an overview of a color photon-counting integral imaging system using Bayer elemental images for 3D visualization of photon-limited scenes. The color image sensor, with a Bayer color filter array (a red, a green, or a blue filter in a repeating pattern), captures an elemental image set of a photon-limited three-dimensional (3D) scene. It is assumed that the observed photon count in each channel (red, green, or blue) follows Poisson statistics. The 3D scene is reconstructed in Bayer format by applying a computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator to the photon-limited Bayer elemental images. Finally, several standard demosaicing algorithms are applied to convert the 3D reconstruction from Bayer format into an RGB-per-pixel format. Experimental results demonstrate that the gradient-corrected linear interpolation technique achieves better performance, with acceptable PSNR and lower computational complexity.
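
The photon-counting model above can be illustrated in miniature: counts follow Poisson statistics with a rate proportional to the normalised irradiance, and for independent Poisson observations the parametric maximum-likelihood estimate of the rate is simply the sample mean over the stack of back-propagated elemental images. A sketch under those assumptions (the function names and the n_p parameter are ours, and the geometrical back-propagation itself is omitted):

```python
import numpy as np

def photon_counting_channel(irradiance, n_p, rng):
    """Simulate one photon-limited channel: the count at each pixel is
    Poisson with mean proportional to the normalised irradiance, where
    n_p is the expected total photon count over the whole image."""
    lam = n_p * irradiance / irradiance.sum()
    return rng.poisson(lam)

def mle_reconstruction(counts_stack):
    """ML estimate of the per-pixel Poisson rate from a stack of
    (back-propagated) elemental images: for i.i.d. Poisson data the
    MLE of the rate is the sample mean over the stack."""
    return np.mean(counts_stack, axis=0)
```

Demosaicing would then run on the estimated Bayer-format rates to recover an RGB-per-pixel image.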

  15. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  16. Disentangling the intragroup HI in Compact Groups of galaxies by means of X3D visualization

    NASA Astrophysics Data System (ADS)

    Verdes-Montenegro, Lourdes; Vogt, Frederic; Aubery, Claire; Duret, Laetitie; Garrido, Julián; Sánchez, Susana; Yun, Min S.; Borthakur, Sanchayeeta; Hess, Kelley; Cluver, Michelle; Del Olmo, Ascensión; Perea, Jaime

    2017-03-01

    As an extreme kind of environment, Hickson Compact Groups (HCGs) have been shown to be very complex systems. HI-VLA observations revealed an intricate network of HI tails and bridges, tracing pre-processing through extreme tidal interactions. We found HCGs to show a large HI deficiency, supporting an evolutionary sequence in which gas-rich groups transform via tidal interactions and ISM (interstellar medium) stripping into gas-poor systems. We also detected a diffuse HI component in the groups, increasing with evolutionary phase, although with uncertain distribution. The complex network of detected HI as observed with the VLA hence seems as puzzling as the missing component. In this talk we revisit the existing VLA information on the HI distribution and kinematics of HCGs by means of X3D visualization. X3D constitutes a powerful tool for extracting the most from HI data cubes and a means of simplifying and easing access to data visualization and publication via three-dimensional (3-D) diagrams.

  17. XML-based 3D model visualization and simulation framework for dynamic models

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Fishwick, Paul A.

    2002-07-01

    Relatively recent advances in computer technology enable us to create three-dimensional (3D) dynamic models and simulate them within a 3D web environment. The use of such models is especially valuable when teaching simulation and the concepts behind dynamic models, since the models are made more accessible to the students. Students tend to enjoy a construction process in which they are able to employ their own cultural and aesthetic forms. The challenge is to create a language that allows for a grammar for modeling, while simultaneously permitting arbitrary presentation styles. For further flexibility, we need an effective way to represent and simulate dynamic models that can be shared by modelers over the Internet. We present an Extensible Markup Language (XML)-based framework that guides a modeler in creating personalized 3D models, visualizing their dynamic behaviors, and simulating the created models. A model author uses XML files to represent the geometry and topology of a dynamic model. The Model Fusion Engine, written in Extensible Stylesheet Language Transformations (XSLT), expedites the modeling process by automating the creation of dynamic models from the user-defined XML files. Modelers can also link simulation programs with a created model to analyze its characteristics. The advantages of this system lie in teaching the modeling and simulation of dynamic models, and in visualizing dynamic model behaviors.
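
A minimal, hypothetical example of the kind of XML model description such a framework works with: geometry blocks plus a topology of connections, parsed here with Python's standard library rather than XSLT. The element and attribute names are invented for illustration and are not taken from the paper's schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical model file in the spirit of the XML framework above:
# geometry (shapes per block) is kept separate from topology (edges),
# so presentation style can vary while the model grammar stays fixed.
MODEL_XML = """
<model name="queue">
  <geometry>
    <block id="source" shape="sphere"/>
    <block id="server" shape="cube"/>
  </geometry>
  <topology>
    <connect from="source" to="server"/>
  </topology>
</model>
"""

def load_model(xml_text):
    """Parse block shapes and connection edges from the model XML."""
    root = ET.fromstring(xml_text)
    blocks = {b.get("id"): b.get("shape") for b in root.iter("block")}
    edges = [(c.get("from"), c.get("to")) for c in root.iter("connect")]
    return blocks, edges
```

Separating geometry from topology is what lets two modelers render the same dynamic model with entirely different aesthetics.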

  18. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed and used, based on JavaScript, PHP, X3DOM, and the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is the one most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  19. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing.

    PubMed

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
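
The core of laser multiple-line triangulation is intersecting a camera ray with a known laser light plane (n · x = d); the intersection is the 3-D surface point that produced the imaged stripe pixel. A minimal sketch of that geometric step, assuming calibrated plane parameters and a ray through the optical centre (the calibration itself is not shown):

```python
import numpy as np

def intersect_ray_plane(ray_dir, plane_normal, plane_d, origin=None):
    """Triangulation core: intersect a camera ray with a laser plane.

    The plane is n . x = d; the ray is origin + t * ray_dir. Solving
    n . (origin + t * ray_dir) = d for t gives the 3-D surface point.
    """
    if origin is None:
        origin = np.zeros(3)
    denom = plane_normal @ ray_dir  # nonzero when ray is not parallel
    t = (plane_d - plane_normal @ origin) / denom
    return origin + t * ray_dir
```

Repeating this for every stripe pixel of all 33 light planes yields the body-shape point cloud whose frame-to-frame displacement drives the visual feedback.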

  20. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT scans or MRI. This software provides real-time 3D surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology: it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step toward the future development of augmented reality and surgical simulation systems.
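
For voxel data, the volume and distance evaluations mentioned above reduce to simple bookkeeping: volume is voxel count times voxel size, and distances must respect the (often anisotropic) voxel spacing. A generic sketch of those two measurements, not the software's actual API:

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a delineated organ from its binary voxel mask:
    voxel count times voxel volume (mm^3), returned in millilitres."""
    dx, dy, dz = spacing_mm
    return mask.sum() * dx * dy * dz / 1000.0

def landmark_distance_mm(p0, p1, spacing_mm):
    """Euclidean distance between two voxel-index landmarks, scaling
    each axis by its voxel spacing before taking the norm."""
    d = (np.asarray(p1) - np.asarray(p0)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(d))
```

Skipping the per-axis scaling is a classic source of error when slice thickness differs from in-plane resolution.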

  1. Magnetic assembly of 3D cell clusters: visualizing the formation of an engineered tissue.

    PubMed

    Ghosh, S; Kumar, S R P; Puri, I K; Elankumaran, S

    2016-02-01

    Contactless magnetic assembly of cells into 3D clusters has been proposed as a novel means for 3D tissue culture that eliminates the need for artificial scaffolds. However, thus far its efficacy has only been studied by comparing expression levels of generic proteins. Here, it has been evaluated by visualizing the evolution of cell clusters assembled by magnetic forces, to examine their resemblance to in vivo tissues. Cells were labeled with magnetic nanoparticles, then assembled into 3D clusters using magnetic force. Scanning electron microscopy was used to image intercellular interactions and morphological features of the clusters. When cells were held together by magnetic forces for a single day, they formed intercellular contacts through extracellular fibers. These kept the clusters intact once the magnetic forces were removed, thus serving the primary function of scaffolds. The cells self-organized into constructs consistent with the corresponding tissues in vivo. Epithelial cells formed sheets while fibroblasts formed spheroids and exhibited position-dependent morphological heterogeneity. Cells on the periphery of a cluster were flattened while those within were spheroidal, a well-known characteristic of connective tissues in vivo. Cells assembled by magnetic forces presented visual features representative of their in vivo states but largely absent in monolayers. This established the efficacy of contactless assembly as a means to fabricate in vitro tissue models. © 2016 John Wiley & Sons Ltd.

  2. Real time 3D visualization of intraoperative organ deformations using structured dictionary.

    PubMed

    Wang, Dan; Tewfik, Ahmed H

    2012-04-01

    Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
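
The structured-dictionary idea above, that descriptors of a given organ's distorted surfaces lie in low-dimensional subspaces learnable from training surfaces, can be shown in miniature with an SVD: learn a k-dimensional basis from training coefficient vectors, then regularise an observed vector by projecting it onto that subspace. A toy sketch of the subspace step only, not the paper's full dictionary learning or registration:

```python
import numpy as np

def learn_subspace(training_coeffs, k):
    """Learn a k-dimensional subspace from training descriptors
    (rows = training surfaces, columns = e.g. spherical-harmonic
    coefficients) via SVD of the mean-centred data."""
    mean = training_coeffs.mean(axis=0)
    _, _, vt = np.linalg.svd(training_coeffs - mean, full_matrices=False)
    return mean, vt[:k]  # top-k right singular vectors span the subspace

def project(coeffs, mean, basis):
    """Regularise an observed coefficient vector by projecting it onto
    the learned subspace: components outside the subspace (e.g. noise,
    unseen deformation modes) are discarded."""
    return mean + (coeffs - mean) @ basis.T @ basis
```

In the paper's setting, the observed vector would come from registering a partial optical patch to the preoperative scan; the projection constrains the reconstructed full surface to plausible deformations.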

  3. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills, from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used as examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  4. The Universe in 3D! Visualizing Astronomy at Low Cost for Education and Public Outreach

    NASA Astrophysics Data System (ADS)

    Mullan, Brendan L.

    2010-01-01

    We have developed a novel series of education and public outreach programs using stunning 3D visualizations of planets, stars, galaxies, and the universe on the grandest scales. By combining inexpensive dual-projection hardware built to Geowall Consortium specifications, publicly available astronomical images and software, and experience as professional astronomers, we have generated a suite of entertaining and informative 3D shows that are highly acclaimed by all ages. Our programs range from detailed sojourns on the surface of Mars as seen by Spirit and Opportunity, to a multiwavelength ride through the Milky Way Galaxy and its varied astrophysical inhabitants, to a customized exploration of the sordid galactic cannibalism and elegant large-scale structure of the extragalactic Universe.

  5. 3D visualization as a communicative aid in pharmaceutical advice-giving over distance.

    PubMed

    Ostlund, Martin; Dahlbäck, Nils; Petersson, Göran Ingemar

    2011-07-18

    Medication misuse results in considerable problems for both patient and society. It is a complex problem with many contributing factors, including timely access to product information. To investigate the value of 3-dimensional (3D) visualization paired with video conferencing as a tool for pharmaceutical advice over distance in terms of accessibility and ease of use for the advice seeker. We created a Web-based communication service called AssistancePlus that allows an advisor to demonstrate the physical handling of a complex pharmaceutical product to an advice seeker with the aid of 3D visualization and audio/video conferencing. AssistancePlus was tested in 2 separate user studies performed in a usability lab, under realistic settings and emulating a real usage situation. In the first study, 10 pharmacy students were assisted by 2 advisors from the Swedish National Co-operation of Pharmacies' call centre on the use of an asthma inhaler. The student-advisor interview sessions were filmed on video to qualitatively explore their experience of giving and receiving advice with the aid of 3D visualization. In the second study, 3 advisors from the same call centre instructed 23 participants recruited from the general public on the use of 2 products: (1) an insulin injection pen, and (2) a growth hormone injection syringe. First, participants received advice on one product in an audio-recorded telephone call and for the other product in a video-recorded AssistancePlus session (product order balanced). In conjunction with the AssistancePlus session, participants answered a questionnaire regarding accessibility, perceived expressiveness, and general usefulness of 3D visualization for advice-giving over distance compared with the telephone and were given a short interview focusing on their experience of the 3D features. In both studies, participants found the AssistancePlus service helpful in providing clear and exact instructions. In the second study, directly comparing

  6. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortion has often not been discussed. However, visualization of the distortion level is highly desirable for objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue in shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method for mapping panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. The panoramic image is then mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
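
    The Hammer-Aitoff mapping used above has a simple closed form. As a minimal illustration (the function name and the normalization comment are ours, not from the authors' implementation), the forward projection from bladder-sphere angles to the flat panoramic plane can be sketched as:

```python
import math

def hammer_aitoff(lon, lat):
    """Forward Hammer-Aitoff projection (angles in radians -> plane coords).

    The projection is equal-area, which is why texture density in the
    panoramic image is preserved when mapped onto the spherical model.
    """
    denom = math.sqrt(1.0 + math.cos(lat) * math.cos(lon / 2.0))
    x = 2.0 * math.sqrt(2.0) * math.cos(lat) * math.sin(lon / 2.0) / denom
    y = math.sqrt(2.0) * math.sin(lat) / denom
    return x, y

# The image centre maps to the origin; texture coordinates for the 3-D
# model are then obtained by normalizing (x, y) into [0, 1].
print(hammer_aitoff(0.0, 0.0))  # -> (0.0, 0.0)
```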

  7. A streaming-based solution for remote visualization of 3D graphics on mobile devices.

    PubMed

    Lamberti, Fabrizio; Sanna, Andrea

    2007-01-01

    Mobile devices such as Personal Digital Assistants (PDAs), Tablet PCs, and cellular phones have greatly enhanced users' ability to connect to remote resources. Although a large set of applications is now available to bridge the gap between desktop and mobile devices, visualization of complex 3D models remains a hard task to accomplish without specialized hardware. This paper proposes a system in which a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, handles remote visualization sessions of complex 3D models based on MPEG video streaming. The proposed framework allows mobile devices such as smart phones, PDAs, and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on hardware resources at the server side and multimedia capabilities at the client side. The server concurrently manages multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency, bit rate and quality of the generated stream, screen resolution, and displayed frame rate.
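
    The per-client tailoring described above amounts to choosing encoding parameters from each client's screen size and measured bandwidth. A hypothetical sketch (the resolution cap, headroom factor, and function name are our assumptions, not the paper's actual policy):

```python
def choose_stream_params(screen_w, screen_h, bandwidth_kbps):
    """Pick an encoding resolution and bit rate for one client.

    Illustrative heuristic only: cap the stream at the client's screen
    (and an assumed 640x480 server-side maximum) and reserve ~80% of
    measured bandwidth for video payload. Neither number is from the paper.
    """
    width = min(screen_w, 640)                 # never exceed the display
    height = min(screen_h, 480)
    bitrate_kbps = int(bandwidth_kbps * 0.8)   # headroom for transport
    return width, height, bitrate_kbps

# A PDA on a slow link and a Tablet PC on Wi-Fi get different streams.
print(choose_stream_params(240, 320, 100))    # -> (240, 320, 80)
print(choose_stream_params(1024, 768, 6000))  # -> (640, 480, 4800)
```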

  8. PointCloudXplore: a visualization tool for 3D gene expression data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V. E.; Fowlkes, Charles C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets describing many genes' expression. Each of the views in PointCloudXplore shows a different property of the gene expression data. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show, in additional views, the expression data for a group of cells first highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
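
    At their core, brushing and linking reduce to a shared boolean mask over cells: the brush is defined in one view's coordinate space, and the same mask is applied in every linked view. A toy sketch with invented gene names, sizes, and thresholds (not BDTNP data):

```python
import numpy as np

# Toy stand-in for per-cell data: 3-D positions plus expression levels
# of two genes.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(500, 3))  # x, y, z per cell
expr_a = rng.uniform(0.0, 1.0, size=500)          # "gene A" level
expr_b = rng.uniform(0.0, 1.0, size=500)          # "gene B" level

# A brush is a boolean mask defined in one view's space (here, gene
# expression space) ...
brush = (expr_a > 0.7) & (expr_b < 0.3)

# ... and linking applies that same mask in every other view, e.g. to
# highlight the brushed cells in the physical (spatial) view.
highlighted = positions[brush]
print(int(brush.sum()), "cells brushed and highlighted in linked views")
```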

  9. A simple, fast, and repeatable survey method for underwater visual 3D benthic mapping and monitoring.

    PubMed

    Pizarro, Oscar; Friedman, Ariell; Bryson, Mitch; Williams, Stefan B; Madin, Joshua

    2017-03-01

    Visual 3D reconstruction techniques provide rich ecological and habitat structural information from underwater imagery. However, an unaided swimmer or diver struggles to navigate precisely over larger extents with the consistent image overlap needed for visual reconstruction. While underwater robots have demonstrated systematic coverage of areas much larger than the footprint of a single image, access to suitable robotic systems is limited and requires specialized operators. Furthermore, robots are poor at navigating hydrodynamic habitats such as shallow coral reefs. We present a simple approach that constrains the motion of a swimmer using a line unwinding from a fixed central drum. The resulting motion is the involute of a circle, a spiral-like path with constant spacing between revolutions. We tested this survey method across a broad range of habitats and hydrodynamic conditions encircling Lizard Island in the Great Barrier Reef, Australia. The approach generates fast, structured, repeatable, large-extent surveys (~110 m² in 15 min) that can be performed by two people, and it is superior to the commonly used "mow the lawn" method. The amount of image overlap is a design parameter, allowing for surveys that can then be reliably used in an automated processing pipeline to generate 3D reconstructions, orthographically projected mosaics, and structural complexity indices. The individual images or full mosaics can also be labeled for benthic diversity and cover estimates. The survey method we present can serve as a standard approach to repeatedly collecting underwater imagery for high-resolution 2D mosaics and 3D reconstructions covering spatial extents much larger than a single image footprint, without requiring sophisticated robotic systems or lengthy deployment of visual guides. As such, it opens up cost-effective novel observations to inform studies relating habitat structure to ecological processes and biodiversity at scales and spatial resolutions not readily
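
    The involute of a circle has the parametric form x = r(cos θ + θ sin θ), y = r(sin θ − θ cos θ), and the spacing between successive revolutions approaches the drum circumference 2πr, which is what fixes the image overlap. A small sketch (the drum radius is an illustrative value, not from the paper):

```python
import math

def involute_point(r, theta):
    """Point on the involute of a circle of radius r at unwinding angle theta.

    This is the path of a swimmer holding a taut line that unwinds from
    a fixed central drum of radius r.
    """
    x = r * (math.cos(theta) + theta * math.sin(theta))
    y = r * (math.sin(theta) - theta * math.cos(theta))
    return x, y

# The radial distance grows by roughly one drum circumference per turn,
# giving the near-constant spacing between revolutions.
r = 0.05  # drum radius in metres (illustrative)
d1 = math.hypot(*involute_point(r, 2.0 * math.pi))
d2 = math.hypot(*involute_point(r, 4.0 * math.pi))
print(d2 - d1)  # close to 2 * pi * r ~= 0.314
```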

  10. TractRender: a new generalized 3D medical image visualization and output platform

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Tsao, Sinchai; Gajawelli, Niharika; Law, Meng; Lepore, Natasha

    2015-01-01

    Diffusion MRI provides not only voxelized diffusion characteristics but also the potential to delineate neuronal fiber paths through tractography. There is a dearth of flexible open-source tractography software for visualizing these complicated 3D structures. Moreover, rendering these structures with different shading, lighting, and representations results in a vastly different graphical feel. In addition, the ability to output these objects in various formats increases the utility of the platform. We have created TractRender, which leverages OpenGL features through Matlab, allowing for maximum ease of use while still maintaining the flexibility of custom scene rendering.

  11. Stereo Vision: The Haves and Have-Nots.

    PubMed

    Hess, Robert F; To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R

    2015-06-01

    Animals with front-facing eyes benefit from a substantial overlap in the visual fields of the two eyes, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth, or 3-D stereo, information. This makes it possible to break the camouflage of objects in front of similarly textured backgrounds and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case: 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but the cause is likely to be neural and reversible.
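
    The depth cue described above follows the textbook stereo-triangulation relation Z = fB/d (a standard result, not a formula from this paper): depth is inversely proportional to horizontal disparity. A minimal sketch:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Textbook pinhole-stereo triangulation: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- separation of the two eyes/cameras in metres
    disparity_px -- horizontal disparity of the same point in both views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Larger disparity means a nearer object -- the cue extracted from the
# overlapping visual fields of two laterally placed eyes.
print(depth_from_disparity(800, 0.065, 20.0))  # ~2.6 m
```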

  12. Visualization of high-density 3D graphs using nonlinear visual space transformations

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Garg, Pankaj; Machiraju, Vijay

    2002-03-01

    Real-world data distributions are seldom uniform. Clutter and sparsity commonly occur in visualization. Often, clutter results in overplotting, in which certain data items are not visible because other data items occlude them. Sparsity results in inefficient use of the available display space. Common mechanisms to overcome this include reducing the amount of information displayed or using multiple representations with varying amounts of detail. This paper describes our experiments on Non-Linear Visual Space Transformations (NLVST). NLVST encompasses several innovative techniques: (1) employing a histogram to calculate the density of the data distribution; (2) mapping the raw data values to a non-linear scale to stretch high-density areas; (3) tightening sparse areas to save display space; (4) employing different color ranges on the non-linear scale according to local density. We have applied NLVST to several web applications: market basket analysis, transaction observation, and IT search behavior analysis.
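
    Techniques (1)-(3) resemble mapping values through their empirical cumulative distribution, as in histogram equalization: dense value ranges are stretched and sparse ranges tightened. A sketch of that idea (our simplification, not NLVST's exact transform):

```python
import numpy as np

def nonlinear_stretch(values, bins=64):
    """Map raw values through their empirical CDF (histogram-based).

    Dense value ranges (tall histogram bins) are spread over more display
    space; sparse ranges are tightened -- the same idea NLVST uses to
    reduce overplotting and wasted screen area.
    """
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # normalize to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])    # bin centres as knots
    return np.interp(values, centers, cdf)      # piecewise-linear mapping

# A tight cluster plus two outliers: after stretching, the cluster no
# longer collapses into a sliver of the display axis.
data = np.concatenate([np.random.normal(0.0, 0.1, 1000), [5.0, 10.0]])
stretched = nonlinear_stretch(data)
print(stretched.min(), stretched.max())
```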

  13. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that (1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, (2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and (3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
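
    The abstract reports normalized RMS tracking error without spelling out its normalization; one common convention divides the RMS error by the target's range of motion. An illustrative sketch under that assumption:

```python
import math

def normalized_rms_error(tracked, target):
    """RMS tracking error divided by the target's range of motion.

    The normalization is one common convention, assumed here; the
    abstract does not give the exact formula used in the study.
    """
    n = len(target)
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(tracked, target)) / n)
    return rms / (max(target) - min(target))

# Perfect tracking scores 0; errors are a fraction of target amplitude,
# making scores comparable across movements of different sizes.
print(normalized_rms_error([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # -> 0.0
```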

  14. Fast 3D visualization of endogenous brain signals with high-sensitivity laser scanning photothermal microscopy

    PubMed Central

    Miyazaki, Jun; Iida, Tadatsune; Tanaka, Shinji; Hayashi-Takagi, Akiko; Kasai, Haruo; Okabe, Shigeo; Kobayashi, Takayoshi

    2016-01-01

    A fast, high-sensitivity photothermal microscope was developed by implementing a spatially segmented balanced detection scheme in a laser scanning microscope. We confirmed a 4.9-fold improvement in signal-to-noise ratio with spatially segmented balanced detection compared with conventional detection. The system demonstrated simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 μs. The fluorescence image visualized neurons expressing yellow fluorescent proteins, while the photothermal signal detected endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of features such as blood cells and fine structures likely due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. PMID:27231615

  16. Scientific rotoscoping: a morphology-based method of 3-D motion analysis and visualization.

    PubMed

    Gatesy, Stephen M; Baier, David B; Jenkins, Farish A; Dial, Kenneth P

    2010-06-01

    Three-dimensional skeletal movement is often impossible to accurately quantify from external markers. X-ray imaging more directly visualizes moving bones, but extracting 3-D kinematic data is notoriously difficult from a single perspective. Stereophotogrammetry is extremely powerful if bi-planar fluoroscopy is available, yet implantation of three radio-opaque markers in each segment of interest may be impractical. Herein we introduce scientific rotoscoping (SR), a new method of motion analysis that uses articulated bone models to simultaneously animate and quantify moving skeletons without markers. The three-step process is described using examples from our work on pigeon flight and alligator walking. First, the experimental scene is reconstructed in 3-D using commercial animation software so that frames of undistorted fluoroscopic and standard video can be viewed in their correct spatial context through calibrated virtual cameras. Second, polygonal models of relevant bones are created from CT or laser scans and rearticulated into a hierarchical marionette controlled by virtual joints. Third, the marionette is registered to video images by adjusting each of its degrees of freedom over a sequence of frames. SR outputs high-resolution 3-D kinematic data for multiple, unmarked bones and anatomically accurate animations that can be rendered from any perspective. Rather than generating moving stick figures abstracted from the coordinates of independent surface points, SR is a morphology-based method of motion analysis deeply rooted in osteological and arthrological data.
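
    The "hierarchical marionette" in the second step is, at its core, a kinematic chain: each bone's pose composes its parent's transform with a local joint rotation. A minimal 2-D sketch of that idea (ours, not the authors' software):

```python
import math

def chain_endpoint(lengths, angles):
    """Endpoint of a planar bone chain with joint angles relative to parents.

    Rotating one joint moves every child bone with it -- the behaviour
    that makes the rearticulated skeleton act like a marionette.
    """
    x = y = 0.0
    total = 0.0
    for length, angle in zip(lengths, angles):
        total += angle                 # accumulate parent rotations
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Two unit-length bones, both joints straight: endpoint at (2, 0).
print(chain_endpoint([1.0, 1.0], [0.0, 0.0]))  # -> (2.0, 0.0)
# Bend each joint 90 degrees: the chain folds back over itself.
print(chain_endpoint([1.0, 1.0], [math.pi / 2, math.pi / 2]))
```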

  17. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Liao, Hongen; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate the IP/IV elemental images. The images can be viewed from any viewpoint within a referential viewing area, and the elemental images are reconstructed from rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images, with an image depth of several meters in front of and behind the display, that appear three-dimensional even when viewed from a distance.

  18. Web-based interactive 3D visualization as a tool for improved anatomy learning.

    PubMed

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computed tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created with the segmentation tool, the image series were exported in QuickTime Virtual Reality (QTVR) format and integrated within the web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced, encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA program were positive when it was compared with anatomy textbooks, but not when it was compared with dissection. Additionally, knowledge tests suggest a potentially beneficial effect on learning.

  19. Quantitative visualization of synchronized insulin secretion from 3D-cultured cells.

    PubMed

    Suzuki, Takahiro; Kanamori, Takao; Inouye, Satoshi

    2017-05-13

    Quantitative visualization of synchronized insulin secretion was performed in an isolated rat pancreatic islet and in a spheroid of a rat pancreatic beta cell line using video-rate bioluminescence imaging. Video-rate images of insulin secretion from 3D-cultured cells were obtained by expressing a fusion protein of insulin and Gaussia luciferase (Insulin-GLase). A subclonal rat INS-1E cell line stably expressing Insulin-GLase, named iGL, was established, and a cluster of iGL cells showed oscillatory insulin secretion that was completely synchronized in response to high glucose. Furthermore, we demonstrated the effect of an antidiabetic drug, glibenclamide, on synchronized insulin secretion from 2D- and 3D-cultured iGL cells. The amount of secreted Insulin-GLase from iGL cells was also determined with a luminometer. Thus, our bioluminescence imaging method could be used generally for investigating protein secretion from living 3D-cultured cells. In addition, the iGL cell line would be valuable for evaluating antidiabetic drugs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. In vivo 3D visualization of peripheral circulatory system using linear optoacoustic array

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Brecht, Hans-Peter; Fronheiser, Matthew P.; Nadvoretsky, Vyacheslav; Su, Richard; Conjusteau, Andre; Oraevsky, Alexander A.

    2010-02-01

    In this work, we modified the light illumination of a laser optoacoustic (OA) imaging system to improve 3D visualization of human forearm vasculature. Computer modeling demonstrated that the new illumination design, which features laser beams converging on the surface of the skin in the imaging plane of the probe, provides superior OA images compared with illumination by parallel laser beams. We also developed a procedure for vein/artery differentiation based on OA imaging at 690 nm and 1080 nm laser wavelengths. The procedure includes statistical analysis of the intensities of OA images of neighboring blood vessels. Analysis of the OA images generated by computer simulation of a human forearm illuminated at 690 nm and 1080 nm resulted in successful differentiation of veins and
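
    Two-wavelength differentiation works because deoxygenated (venous) blood absorbs relatively more at 690 nm than oxygenated (arterial) blood, while the ordering reverses near 1080 nm. A hedged sketch with an invented ratio threshold (the paper's actual decision rule comes from statistical analysis of neighboring vessels):

```python
def classify_vessel(intensity_690, intensity_1080, threshold=1.0):
    """Label a vessel from optoacoustic intensities at 690 nm and 1080 nm.

    Deoxygenated (venous) blood absorbs relatively more at 690 nm than
    oxygenated (arterial) blood, so the 690/1080 ratio separates the two.
    The threshold of 1.0 is invented for illustration only.
    """
    ratio = intensity_690 / intensity_1080
    return "vein" if ratio > threshold else "artery"

print(classify_vessel(1.4, 1.0))  # -> vein
print(classify_vessel(0.7, 1.0))  # -> artery
```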