Science.gov

Sample records for 3d visualization environment

  1. Visualizing realistic 3D urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Aaron; Chen, Tuolin; Brunig, Michael; Schmidt, Hauke

    2003-05-01

    Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been available for a long time, but they provide only a schematic line-drawing bird's-eye view and can be confusing to interpret due to the lack of depth information. Early 3D systems were developed for very expensive graphics workstations, which seriously limited their availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.

  2. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of interconnected computers; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (useful for checking relationships among a large number of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation allows direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), the authors' prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  3. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  4. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized, and how this can allow us to gain greater insight into the process of interaction and learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  5. Modelling honeybee visual guidance in a 3-D environment.

    PubMed

    Portelli, G; Serres, J; Ruffier, F; Franceschini, N

    2010-01-01

    In view of the behavioral findings published on bees during the last two decades, it was proposed to decipher the principles underlying bees' autopilot system, focusing in particular on these insects' use of the optic flow (OF). Based on computer-simulated experiments, we developed a vision-based autopilot that enables a "simulated bee" to travel along a tunnel, controlling both its speed and its clearance from the right wall, left wall, ground, and roof. The flying agent thus equipped enjoys three translational degrees of freedom on the surge (x), sway (y), and heave (z) axes, which are uncoupled. This visuo-motor control system, which is called ALIS (AutopiLot using an Insect based vision System), is a dual OF regulator consisting of two interdependent feedback loops, each of which has its own OF set-point. The experiments presented here showed that the simulated bee was able to navigate safely along a straight or tapered tunnel and to react appropriately to any untoward OF perturbations, such as those resulting from the occasional lack of texture on one wall or the tapering of the tunnel. The minimalistic visual system used here (involving only eight pixels) suffices to jointly control both the clearance from the four walls and the forward speed, without having to measure any speeds or distances. The OF sensors and the simple visuo-motor control system we have developed account well for the results of ethological studies performed on honeybees flying freely along straight and tapered corridors.
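
    The dual OF-regulator idea can be sketched in a few lines: two interdependent feedback loops, one holding the sum of the lateral optic flows at a "speed" set-point and one holding the larger lateral flow at a "clearance" set-point. All gains, set-points, and the tunnel geometry below are illustrative assumptions, not values from the paper.

```python
def optic_flow(speed, distance):
    """Translational optic flow (rad/s) generated by a wall at `distance`
    while flying parallel to it at `speed`."""
    return speed / distance

def dual_of_regulator(y, vx, tunnel_width,
                      of_set_speed=2.0, of_set_clearance=1.5,
                      k_speed=0.1, k_side=0.05):
    """One control step of a simplified, sway-only dual OF regulator:
    the forward-speed loop regulates the *sum* of the two lateral OFs,
    the positioning loop regulates the *larger* lateral OF."""
    d_right = tunnel_width / 2 - y   # clearance from right wall
    d_left = tunnel_width / 2 + y    # clearance from left wall
    of_r = optic_flow(vx, d_right)
    of_l = optic_flow(vx, d_left)
    vx += k_speed * (of_set_speed - (of_r + of_l))   # speed loop
    if of_r >= of_l:                                 # clearance loop
        y -= k_side * (of_r - of_set_clearance)
    else:
        y += k_side * (of_l - of_set_clearance)
    return y, vx

# Step the regulator in a 2 m wide tunnel, starting off-center and slow.
y, vx = 0.5, 0.5
for _ in range(200):
    y, vx = dual_of_regulator(y, vx, tunnel_width=2.0)
```

    With these toy set-points the sketch's fixed point is a 0.5 m clearance from the nearer wall at 0.75 m/s; the real ALIS autopilot also handles the ground and roof and works from an eight-pixel eye rather than from known distances.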

  6. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  7. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  8. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  9. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  10. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
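
    As a concrete note on the objective measure: normalized RMS tracking error is commonly computed as the RMS of the tracking error divided by the RMS of the target motion, so 0 means perfect tracking and 1 means doing no better than holding still. The exact normalization used in this study is an assumption here.

```python
import math

def normalized_rms_error(target, response):
    """RMS of (response - target), normalized by the RMS of the target."""
    n = len(target)
    rms_err = math.sqrt(sum((r - t) ** 2 for t, r in zip(target, response)) / n)
    rms_tgt = math.sqrt(sum(t ** 2 for t in target) / n)
    return rms_err / rms_tgt

target = [math.sin(0.1 * i) for i in range(100)]
assert normalized_rms_error(target, target) == 0.0                   # perfect tracking
assert abs(normalized_rms_error(target, [0.0] * 100) - 1.0) < 1e-12  # no response at all
```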

  11. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.

  12. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a collision detection method is introduced for rapidly extracting, within a 3D visualization environment, the buildings that conflict with the track area, building on three-dimensional modeling of underground structures and urban rail lines. According to their characteristics, the buildings are modeled using CSG and B-rep. Based on a study of these modeling characteristics, this paper proposes a hierarchical AABB bounding-volume method as a fast first-pass conflict check to improve detection efficiency, followed by a rapid triangle-triangle intersection algorithm for exact conflict detection, finally determining whether a building collides with the track area. With this algorithm, buildings colliding with the track line's influence area can be extracted quickly, helping designers choose the best route and calculate the cost of land acquisition in the three-dimensional visualization environment.
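
    The two-phase test described above can be sketched as follows: a cheap axis-aligned bounding box (AABB) overlap check prunes building/track pairs first, and only the survivors go on to the exact (and far more expensive) triangle-triangle intersection tests. Function names and the data layout are illustrative, not taken from the paper.

```python
def aabb_of(points):
    """Axis-aligned bounding box of a point set: (min_xyz, max_xyz)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(box_a, box_b):
    """True if two AABBs overlap on all three axes."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def may_collide(building_pts, track_pts):
    """Broad phase: a building can conflict with the track corridor only
    if their bounding boxes overlap; otherwise triangle tests are skipped."""
    return aabb_overlap(aabb_of(building_pts), aabb_of(track_pts))

track = [(1, 1, -5), (8, 2, 5)]
assert may_collide([(0, 0, 0), (2, 3, 10)], track)          # candidate pair
assert not may_collide([(50, 50, 0), (52, 53, 10)], track)  # pruned early
```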

  13. Use of 3D conformal symbology on HMD for a safer flight in degraded visual environment

    NASA Astrophysics Data System (ADS)

    Klein, Ofer; Doehler, Hans-Ullrich; Trousil, Thomas; Peleg-Marzan, Ruthy

    2012-06-01

    Since the entry of coalition forces into Afghanistan and Iraq, a steep rise in the accident rate has occurred as a result of flying and landing in Degraded Visual Environment (DVE) conditions. Such conditions exist in various areas around the world and include bad weather, dust and snow landings (brownout and whiteout), and low illumination on dark nights. A promising solution is a novel 3D conformal symbology displayed on a head-tracked helmet mounted display (HMD). The 3D conformal symbology approach provides space-stabilized three-dimensional symbology presented on the pilot's helmet mounted display and has the potential to deliver a step change in HMD performance. It offers an intuitive way of presenting crucial information to pilots in order to increase situational awareness, lower the pilots' workload, and thus dramatically enhance flight safety. The pilots can fly "heads out" while the necessary flight and mission information is presented in an intuitive manner, conformal with the real world and in real time. Several evaluation trials have been conducted in the UK, US, and Israel using systems developed by Elbit Systems to demonstrate the potential of the technology, the concept, and the specific systems to provide a solution for DVE flight conditions.

  14. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries and educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been conducted satisfactorily in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D; and if so, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  15. 3-D Localization of Virtual Sound Sources: Effects of Visual Environment, Pointing Method, and Training

    PubMed Central

    Majdak, Piotr; Goupell, Matthew J.; Laback, Bernhard

    2010-01-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE) (darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In experiment 2, subjects were provided sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies. PMID:20139459

  16. Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment

    SciTech Connect

    Bowman, Ian; Shalf, John; Ma, Kwan-Liu; Bethel, Wes

    2004-06-30

    The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such a distributed pipeline is becoming available, but few services support even marginally optimal resource selection and partitioning of the data analysis workflow. We explore a methodology for building a model of overall application performance using a composition of the analytic models of individual components that comprise the pipeline. The analytic models are shown to be accurate on a testbed of distributed heterogeneous systems. The prediction methodology will form the foundation of a more robust resource management service for future Grid-based visualization applications.
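
    The composition idea can be illustrated with a toy model: give each pipeline stage an analytic cost (a fixed setup term plus a per-byte term) and each network link a latency-plus-bandwidth term, then predict end-to-end time by summing them. The coefficients below are invented for illustration; the paper's actual component models are more detailed.

```python
def stage_time(data_size, setup_cost, per_byte_cost):
    """Analytic model of one pipeline component: fixed setup + linear term."""
    return setup_cost + per_byte_cost * data_size

def transfer_time(data_size, latency, bandwidth):
    """Model of the network link carrying data between two stages."""
    return latency + data_size / bandwidth

def pipeline_time(data_sizes, stages, links):
    """Compose per-stage models into an end-to-end prediction.
    data_sizes[i] is the input size (bytes) of stage i; links[i] carries
    the output of stage i, so len(links) == len(stages) - 1."""
    total = sum(stage_time(s, *p) for s, p in zip(data_sizes, stages))
    total += sum(transfer_time(s, *l) for s, l in zip(data_sizes[1:], links))
    return total

# Three stages (e.g., read/filter, isosurface, render) and two links:
sizes = [1e9, 1e8, 1e6]                      # bytes entering each stage
stages = [(0.5, 2e-9), (1.0, 5e-8), (0.1, 1e-7)]
links = [(0.01, 1e8), (0.01, 1e9)]           # (latency s, bandwidth B/s)
t = pipeline_time(sizes, stages, links)      # predicted seconds end-to-end
```

    A resource selector along the lines the paper proposes would evaluate such a composed model for each candidate placement of stages on hosts and pick the cheapest.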

  17. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology where information can be created, edited, managed, and analyzed. Like any other models, maps are simplified representations of the real world; hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One objective of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications, and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  18. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  19. Shifting Sands and Turning Tides: Using 3D Visualization Technology to Shape the Environment for Undergraduate Students

    NASA Astrophysics Data System (ADS)

    Jenkins, H. S.; Gant, R.; Hopkins, D.

    2014-12-01

    Teaching natural science in a technologically advancing world requires that our methods reach beyond the traditional computer interface. Innovative 3D visualization techniques and real-time augmented user interfaces enable students to create realistic environments to understand the world around them. Here, we present a series of laboratory activities that utilize an Augmented Reality Sandbox to teach basic concepts of hydrology, geology, and geography to undergraduates at Harvard University and the University of Redlands. The Augmented Reality (AR) Sandbox uses a real sandbox overlain by a digital projection of topography and a color elevation map. A Microsoft Kinect 3D camera feeds altimetry data into a software program that maps this information onto the sand surface using a digital projector. Students can then manipulate the sand and observe as the Sandbox augments their manipulations with projections of contour lines, an elevation color map, and a simulation of water. The idea for the AR Sandbox was conceived at MIT by the Tangible Media Group in 2002, and the simulation software used here was written and developed by Dr. Oliver Kreylos of the University of California - Davis as part of the NSF-funded LakeViz3D project. In 2013 and 2014, we installed AR Sandboxes at Harvard and the University of Redlands, respectively, and developed laboratory exercises to teach flooding hazard, erosion, and watershed development in undergraduate earth and environmental science courses. In 2013, we introduced a series of AR Sandbox laboratories in Introductory Geology, Hydrology, and Natural Disasters courses. We found that laboratories utilizing the AR Sandbox at both universities allowed students to become quickly immersed in the learning process, enabling a more intuitive understanding of the processes that govern the natural world. The physical interface of the AR Sandbox reduces barriers to learning and can be used to rapidly illustrate basic concepts of geology
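
    The core rendering step of the Sandbox described above is easy to sketch: each elevation sample from the depth camera is mapped to a hypsometric color band and flagged when it lies near a contour line, with anything below the chosen water level rendered as water. The band edges and contour interval here are illustrative choices, not those of the actual simulation software.

```python
def classify_elevation(z, water_level=0.0, contour_interval=10.0):
    """Map one elevation sample to (display band, on-contour flag)."""
    if z < water_level:
        return "water", False
    band = "lowland" if z < 50 else "upland" if z < 100 else "peak"
    # Flag samples lying within 1 unit above a multiple of the interval
    on_contour = (z % contour_interval) < 1.0
    return band, on_contour

assert classify_elevation(-3.0) == ("water", False)
assert classify_elevation(20.0) == ("lowland", True)
assert classify_elevation(75.5) == ("upland", False)
assert classify_elevation(120.0) == ("peak", True)
```

    In the full system, a classification like this is applied across the whole depth image and projected back onto the sand, so reshaping the surface updates the colors and contours in real time.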

  20. Techniques for interactive 3-D scientific visualization

    SciTech Connect

    Glinert, E.P. (Dept. of Computer Science); Blattner, M.M. (Hospital and Tumor Inst., Houston, TX, Dept. of Biomathematics; California Univ., Davis, CA, Dept. of Applied Science; Lawrence Livermore National Lab., CA); Becker, B.G. (Dept. of Applied Science, Lawrence Livermore National Lab., CA)

    1990-09-24

    Interest in interactive 3-D graphics has exploded of late, fueled by (a) the allure of using scientific visualization to "go where no one has gone before" and (b) the development of new input devices which overcome some of the limitations imposed in the past by technology, yet which may be ill-suited to the kinds of interaction required by researchers active in scientific visualization. To resolve this tension, we propose a "flat 5-D" environment in which 2-D graphics are augmented by exploiting multiple human sensory modalities using cheap, conventional hardware readily available with personal computers and workstations. We discuss how interactions basic to 3-D scientific visualization, like searching a solution space and comparing two such spaces, are effectively carried out in our environment. Finally, we describe 3DMOVE, an experimental microworld we have implemented to test out some of our ideas. 40 refs., 4 figs.

  1. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  2. 3D Visualization of Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Schaefer, John A.

    2014-01-01

    Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trailing bird or aircraft, and efficiency improves. The major consequence is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, they are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however, it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system, observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.

  3. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  4. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.). The investment in infrastructure and its cost restricted it to large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile and light planetariums, or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and it requires studies to determine the most appropriate applications and to assess their contributions compared to other display modes.

  5. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  6. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  7. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  8. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360 degree spherical panoramas is reviewed. The 3D software package Blender coupled with Python and the Google Spatial Media module are used together to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can apply to different kinds of astronomical data including 3D stellar and galaxy catalogs, images, and planetary maps.

  9. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  10. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.
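
A minimal sketch of the data model such a system implies: annotations anchored at 3D positions, plus a database-style filter over position and tags. The class and field names here are hypothetical, not taken from the Loughlin and Hughes system.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A 'Post-it' note anchored at a point in the 3D data space."""
    position: tuple          # (x, y, z) anchor in data coordinates
    text: str
    tags: set = field(default_factory=set)

def filter_annotations(notes, bounds=None, tag=None):
    """Database-style filter: keep notes inside an axis-aligned box
    and/or carrying a given tag (both criteria are optional)."""
    out = []
    for n in notes:
        if tag is not None and tag not in n.tags:
            continue
        if bounds is not None:
            lo, hi = bounds
            if not all(l <= p <= h for p, l, h in zip(n.position, lo, hi)):
                continue
        out.append(n)
    return out

notes = [
    Annotation((0.2, 0.5, 0.1), "recirculation zone", {"vortex"}),
    Annotation((0.9, 0.9, 0.9), "outflow boundary", {"boundary"}),
]
hits = filter_annotations(notes, bounds=((0, 0, 0), (0.5, 1, 1)), tag="vortex")
```

A Magic Lens view would apply the same filter, but restricted to the screen region under a movable lens.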

  11. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding about the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influence visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  12. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise 3-dimensional structure and the number and type of defects in the polymer is important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance.
    While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) develop the instrumentation necessary for imaging materials

  13. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  14. 3D scientific visualization of reservoir simulation post-processing

    SciTech Connect

    Sousa, M.C.; Miranda-Filho, D.N.

    1994-12-31

    This paper describes 3D visualization software designed at PETROBRAS and TecGraf/PUC-RJ in Brazil for the analysis of reservoir engineering post-processing data. It offers an advanced functional environment on graphical workstations with an intuitive and ergonomic interface. Applications to real reservoir models show the enriching features of the software.

  15. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  16. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  17. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
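
The degenerate points described above, where two eigenvalues coincide, can be located numerically on a discretized symmetric tensor field. A minimal sketch in NumPy; the grid layout and tolerance are illustrative assumptions, not the method of the report.

```python
import numpy as np

def degenerate_points(field, tol=1e-6):
    """Return grid indices where two eigenvalues of a symmetric 3x3
    tensor field (nearly) coincide.

    field: array of shape (nx, ny, nz, 3, 3), symmetric tensors.
    """
    vals = np.linalg.eigvalsh(field)      # eigenvalues, ascending order
    gaps = np.diff(vals, axis=-1)         # gaps between adjacent eigenvalues
    mask = (gaps < tol).any(axis=-1)      # any pair (nearly) equal?
    return np.argwhere(mask)

# Example: isotropic (identity) tensors are fully degenerate; one grid
# point is replaced by a tensor with three distinct eigenvalues.
grid = np.tile(np.eye(3), (2, 2, 2, 1, 1))
grid[1, 1, 1] = np.diag([1.0, 2.0, 3.0])
pts = degenerate_points(grid)
```

Classifying the degenerate points (as the report goes on to do) would additionally require examining the tensor's derivatives around each located point.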

  18. Enhanced visualization of angiograms using 3D models

    NASA Astrophysics Data System (ADS)

    Marovic, Branko S.; Duckwiler, Gary R.; Villablanca, Pablo; Valentino, Daniel J.

    1999-05-01

    The 3D visualization of intracranial vasculature can facilitate the planning of endovascular therapy and the evaluation of interventional results. To create 3D visualizations, volumetric datasets from x-ray computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are commonly rendered using maximum intensity projection (MIP), volume rendering, or surface rendering techniques. However, small aneurysms and mild stenoses are very difficult to detect using these methods. Furthermore, the instruments used during endovascular embolization or surgical treatment produce artifacts that typically make post-intervention CTA inapplicable, and the presence of magnetic material prohibits the use of MRA. Therefore, standard digital angiography is typically used. In order to address these problems, we developed a visualization and modeling system that displays 2D and 3D angiographic images using a simple Web-based interface. Polygonal models of vasculature were generated from CT and MR data using 3D segmentation of bones and vessels and polygonal surface extraction and simplification. A web-based 3D environment was developed for interactive examination of reconstructed surface models, creation of oblique cross-sections and maximum intensity projections, and distance measurements and annotations. This environment uses a multi-tier client/server approach employing VRML and Java. The 3D surface model and angiographic images can be aligned and displayed simultaneously to permit better perception of complex vasculature and to determine optimal viewing positions and angles before starting an angiographic session. Polygonal surface reconstruction allows interactive display of complex spatial structures on inexpensive platforms such as personal computers as well as graphic workstations. The aneurysm assessment procedure demonstrated the utility of web-based technology for clinical visualization. The resulting system facilitated the treatment of serious vascular

  19. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and Nokia QT 4.X API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphic user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (glsl), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the minGW compiler), and Mac OS X, thanks to the QT 4 API.

  20. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  1. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the public information that can be used in GIS clients able to use data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
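
A server of the kind described can produce level-of-detail tiles by subsampling each tile's points on a voxel grid whose cell size shrinks as the requested level rises. A hypothetical sketch of that server-side step, not the Glob3 implementation:

```python
import numpy as np

def tile_for_lod(points, level, tile_origin, tile_size):
    """Subsample the points falling in one cubic tile to a density
    appropriate for the requested level of detail.

    Higher `level` means finer detail: smaller voxels, more points kept.
    """
    lo = np.asarray(tile_origin, dtype=float)
    hi = lo + tile_size
    inside = points[np.all((points >= lo) & (points < hi), axis=1)]
    voxel = tile_size / (2 ** level)              # voxel edge for this LOD
    keys = np.floor((inside - lo) / voxel).astype(int)
    # keep one representative point per occupied voxel
    _, idx = np.unique(keys, axis=0, return_index=True)
    return inside[np.sort(idx)]

rng = np.random.default_rng(0)
cloud = rng.random((10000, 3))                    # unit-cube point cloud
coarse = tile_for_lod(cloud, level=1, tile_origin=(0, 0, 0), tile_size=1.0)
fine = tile_for_lod(cloud, level=4, tile_origin=(0, 0, 0), tile_size=1.0)
```

At level 1 the tile collapses to at most 8 representative points; each additional level multiplies the available voxels by 8, so the client can request tiles only as dense as its view requires.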

  2. Signal and Noise in 3D Environments

    DTIC Science & Technology

    2015-09-30

    algorithms. The 3D environment causes beam splitting, which affects both the perceived bearing and the received level on a towed array. 2 To date... sound level as dozens of ships crisscross an area. The variations in intensity should allow us to infer the range-integrated transmission loss

  3. 3-D Visualizations At (Almost) No Expense

    NASA Astrophysics Data System (ADS)

    Sedlock, R. L.

    2003-12-01

    Like most teaching-oriented public universities, San José State University (part of the California State University system) currently faces severe budgetary constraints. These circumstances prohibit the construction of one or more Geo-Walls on-campus. Nevertheless, the Department of Geology has pursued alternatives that enable our students to benefit from 3-D visualizations such as those used with the Geo-Wall. This experience - a sort of virtual virtuality - depends only on the availability of a computer lab and an optional plotter. Starting in June 2003, we have used the methods described here with two diverse groups of participants: middle- and high-school teachers taking professional development workshops through grants funded by NSF and NASA, and regular university students enrolled in introductory earth science and geology laboratory courses. We use two types of three-dimensional images with our students: visualizations from the on-line Gallery of Virtual Topography (Steve Reynolds), and USGS digital topographic quadrangles that have been transformed into anaglyph files for viewing with 3-D glasses. The procedure for transforming DEMs into these anaglyph files, developed by Paul Morin, is available at http://geosun.sjsu.edu/~sedlock/anaglyph.html. The resulting images can be used with students in one of two ways. First, maps can be printed on a suitable plotter, laminated (optional but preferable), and used repeatedly with different classes. Second, the images can be viewed in school computer labs or by students on their own computers. Chief advantages of the plotter option are (1) full-size maps (single or tiled) viewable in their entirety, and (2) dependability (independent of Internet connections and electrical power). Chief advantages of the computer option are (1) minimal preparation time and no other needed resources, assuming a computer lab with Internet access, and (2) students can work with the images outside of regularly scheduled courses. Both
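
The DEM-to-anaglyph transformation described above can be approximated in a few lines: shade the terrain, then shift the shading horizontally in proportion to elevation to synthesize the left- and right-eye views that fill the red and cyan channels. This is an illustrative sketch under those assumptions, not Morin's actual procedure.

```python
import numpy as np

def dem_to_anaglyph(dem, max_shift=8):
    """Build a red-cyan anaglyph image (H x W x 3, uint8) from a DEM.

    Each pixel is displaced left/right by a parallax proportional to its
    elevation; the left view fills red, the right view fills green+blue.
    """
    z = (dem - dem.min()) / (np.ptp(dem) + 1e-12)    # normalize 0..1
    gray = (255 * z).astype(np.uint8)                # simple height shading
    h, w = gray.shape
    cols = np.arange(w)
    rows = np.arange(h)[:, None]
    shift = (z * max_shift).astype(int)
    left = np.zeros_like(gray)
    right = np.zeros_like(gray)
    left[rows, np.clip(cols - shift, 0, w - 1)] = gray
    right[rows, np.clip(cols + shift, 0, w - 1)] = gray
    return np.dstack([left, right, right])

dem = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # tilted plane
img = dem_to_anaglyph(dem)
```

Viewed through red-cyan glasses, higher terrain then appears to float above lower terrain; a hillshade would replace the simple height shading in practice.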

  4. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
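
The basic texture advection step, in 2D or 3D, traces each pixel backward through the flow and resamples the texture there (semi-Lagrangian advection). A minimal 2D sketch, with the integration simplified to one nearest-neighbor step per iteration; the 3D/4D texture extensions in the paper follow the same pattern with an extra axis.

```python
import numpy as np

def advect_texture(tex, vx, vy, dt=0.5, steps=20):
    """Semi-Lagrangian advection of a texture through a steady 2D flow.

    tex, vx, vy: 2D arrays on the same grid (velocity in pixels/step).
    """
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out = tex.copy()
    for _ in range(steps):
        # trace each pixel backward along the flow, then resample (nearest)
        xb = np.clip(np.round(xs - dt * vx), 0, w - 1).astype(int)
        yb = np.clip(np.round(ys - dt * vy), 0, h - 1).astype(int)
        out = out[yb, xb]
    return out

rng = np.random.default_rng(1)
noise = rng.random((64, 64))
vx = np.ones((64, 64))          # uniform rightward flow
vy = np.zeros((64, 64))
moved = advect_texture(noise, vx, vy, dt=1.0, steps=10)
```

Animating successive frames of the advected noise makes the flow direction and speed directly visible, which is the effect the paper exploits.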

  5. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), so one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.
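
The scale of the effect is easy to quantify: under 10^-6 G, displacement over typical task durations is centimeters, which is why a one-G default ruins the simulation. A small worked example (the helper name is illustrative):

```python
def free_fall_displacement(t, g_surface=9.81, micro_factor=1e-6):
    """Displacement s = (1/2) g t^2 of a released body when surface
    gravity is scaled down to a microgravity level (10^-3 to 10^-6 G)."""
    g = g_surface * micro_factor
    return 0.5 * g * t ** 2

# In 10^-6 G a released tool drifts only ~5 cm in 100 s, whereas under
# an unneutralized one G it would "fall" ~49 km in the same time.
drift = free_fall_displacement(100.0)
```

This is the magnitude a microgravity-aware simulation must reproduce once the default gravity term is neutralized or rescaled.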

  6. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are transferred into 3D versions regarding the specific content to be displayed. Virtual worlds (VW) have become a promising area of interest because of the possibility to dynamically modify content and to cooperate with multiple users when solving tasks, regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring the operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information regarding the specific type of visualization and different levels of immersion.

  7. 3D visualization of port simulation.

    SciTech Connect

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three-dimensional visualization technology can be applied to large scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners not only in the use of the simulation model but on the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and efforts by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises, which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  8. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  9. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used such as XNA Game Studio, .NET framework, Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  10. 3-D visualization of geologic structures and processes

    NASA Astrophysics Data System (ADS)

    Pflug, R.; Klein, H.; Ramshorn, Ch.; Genter, M.; Stärk, A.

    Interactive 3-D computer graphics techniques are used to visualize geologic structures and simulated geologic processes. Geometric models that serve as input to 3-D viewing programs are generated from contour maps, from serial sections, or directly from simulation program output. Choice of viewing parameters strongly affects the perception of irregular surfaces. An interactive 3-D rendering program and its graphical user interface provide visualization tools for structural geology, seismic interpretation, and visual post-processing of simulations. Dynamic display of transient ground-water simulations and sedimentary process simulations can visualize processes developing through time.

  11. Visualization of 3D Geological Models on Google Earth

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

    Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) codes to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties.
    Figure: Visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model and (d) 3D grid model of the Grosmont formation on Google Earth.

  12. Visualization of 3D Geological Data using COLLADA and KML

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Um, Jeong-Gi; Park, Myong-Ho

    2013-04-01

    This study presents a method to visualize 3D geological data using COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) and Keyhole Markup Language (KML, the XML-based scripting language of Google Earth). We used COLLADA files to represent different 3D geological data such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). The COLLADA files were imported into the 3D render window of Google Earth using KML codes. An application to the Grosmont formation in Alberta, Canada showed that the combination of COLLADA and KML enables Google Earth to visualize 3D geological structures and properties.
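    The KML codes described in this and the preceding record can be produced with a short script. The sketch below is a generic Python illustration, not code from the study; the file name borehole.dae and the coordinates are placeholders. It generates a minimal KML Placemark whose Model element links a COLLADA file at a geographic location, which is the mechanism Google Earth uses to display external 3D models.

```python
# Illustrative sketch: emit a minimal KML file that places a COLLADA (.dae)
# model at geographic coordinates, the pattern Google Earth expects.
# File name and coordinates below are placeholders, not from the study.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae}</href></Link>
    </Model>
  </Placemark>
</kml>
"""

def make_kml(name, lon, lat, alt, dae):
    """Fill the template with one model placement."""
    return KML_TEMPLATE.format(name=name, lon=lon, lat=lat, alt=alt, dae=dae)

kml = make_kml("Borehole model", -115.5, 57.0, 0.0, "borehole.dae")
print(kml)
```

    Saving the output as a .kml file next to the referenced .dae and opening it in Google Earth renders the model at the given location.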

  13. 3-D visualization in biomedical applications.

    PubMed

    Robb, R A

    1999-01-01

    Visualizable objects in biology and medicine extend across a vast range of scale, from individual molecules and cells through the varieties of tissue and interstitial interfaces to complete organs, organ systems, and body parts. These objects include functional attributes of these systems, such as biophysical, biomechanical, and physiological properties. Visualization in three dimensions of such objects and their functions is now possible with the advent of high-resolution tomographic scanners and imaging systems. Medical applications include accurate anatomy and function mapping, enhanced diagnosis, accurate treatment planning and rehearsal, and education/training. Biologic applications include study and analysis of structure-to-function relationships in individual cells and organelles. The potential for revolutionary innovation in the practice of medicine and in biologic investigations lies in direct, fully immersive, real-time multisensory fusion of real and virtual information data streams into online, real-time visualizations available during actual clinical procedures or biological experiments. Current high-performance computing, advanced image processing, and high-fidelity rendering capabilities have facilitated major progress toward realization of these goals. With these advances in hand, there are several important applications of three-dimensional visualization that will have a significant impact on the practice of medicine and on biological research.

  14. 3D visualization for research and teaching in geosciences

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad

    2010-05-01

    Today, we are provided with an abundance of visual images from a variety of sources. In research, data visualization plays an important part, and sophisticated models require special tools to enhance the comprehension of modeling results. Helping our students gain visualization skills is also an important way to foster greater comprehension when studying the geosciences. For these reasons we built a 3D stereo-visualization system, or GeoWall, that permits in-depth exploration of 3D modeling results and provides students with an attractive way to visualize data. In this study, we present the architecture of this low-cost system and how it is used. The system consists of three main parts: a DLP 3D-capable display, a high-performance workstation and several pairs of wireless liquid crystal shutter eyewear. The system is capable of 3D stereo visualization of Google Earth and/or 3D numerical modeling results. Also, any 2D image or movie can be instantly viewed in 3D stereo. Such a flexible, easy-to-use visualization system has proved to be an essential research and teaching tool.

  15. [3D visualization and information interaction in biomedical applications].

    PubMed

    Pu, F; Fan, Y; Jiang, W; Zhang, M; Mak, A F; Chen, J

    2001-06-01

    3D visualization and virtual reality are important trends in the development of modern science and technology, including studies in biomedical engineering. This paper presents a computer program developed for 3D visualization in biomedical applications. The biomedical models are constructed from slice sequences using polygon cells, and information interaction is realized on the basis of the OpenGL selection mode, with particular consideration of the specialties of this field, such as geometric irregularity and material complexity. The software developed has functions for 3D model construction and visualization, real-time modeling transformation, information interaction and so on. It could serve as a useful platform for 3D visualization in biomedical engineering research.

  16. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  17. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  18. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  19. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights on optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of a scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from the available 2D data, which are limited in number. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with maximum intensity projection can generate 3D views of the considered scene, from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts and show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
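    One building block named in the abstract, the maximum intensity projection (MIP), is simple to state in code. The sketch below is a generic NumPy illustration of an MIP, not the authors' patented method: it projects a 3D volume to 2D by keeping the brightest voxel along each ray.

```python
import numpy as np

# Generic maximum-intensity-projection (MIP) sketch: collapse a 3D intensity
# volume to a 2D image by taking the maximum along one axis, so strongly
# reflecting (high-contrast) structures survive the projection.

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3D volume to 2D by taking the maximum along `axis`."""
    return volume.max(axis=axis)

# Toy volume: mostly zeros with one bright voxel.
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0
image = mip(vol, axis=0)      # shape (5, 6)
print(image[3, 1])            # 7.0 -- the bright voxel survives
```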

  20. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its Mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort, a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole-room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, viewed through colored glasses or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
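    The anaglyph construction the abstract describes can be sketched in a few lines. The Python/NumPy fragment below illustrates the general red-cyan idea, not the AViz implementation; a simple horizontal shift stands in for the properly rendered second-eye view.

```python
import numpy as np

# Red-cyan anaglyph sketch (illustrative, not the AViz code): pack the left
# view into the red channel and a horizontally displaced "right eye" view
# into the green and blue channels, for viewing through colored glasses.

def anaglyph(gray: np.ndarray, shift: int = 2) -> np.ndarray:
    """Build an RGB red-cyan anaglyph from one grayscale image."""
    right = np.roll(gray, shift, axis=1)           # crude displaced view
    rgb = np.stack([gray, right, right], axis=-1)  # R = left, G = B = right
    return rgb

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
out = anaglyph(img)
print(out.shape)  # (4, 4, 3)
```

    In practice the two views would be rendered from two slightly offset camera positions rather than shifted from a single image.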

  1. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  2. The effect of sound on visual fidelity perception in stereoscopic 3-D.

    PubMed

    Rojas, David; Kapralos, Bill; Hogue, Andrew; Collins, Karen; Nacke, Lennart; Cristancho, Sayra; Conati, Cristina; Dubrowski, Adam

    2013-12-01

    Visual and auditory cues are important facilitators of user engagement in virtual environments and video games. Prior research supports the notion that our perception of visual fidelity (quality) is influenced by auditory stimuli. Understanding exactly how our perception of visual fidelity changes in the presence of multimodal stimuli can potentially impact the design of virtual environments, thus creating more engaging virtual worlds and scenarios. Stereoscopic 3-D display technology provides the users with additional visual information (depth into and out of the screen plane). There have been relatively few studies that have investigated the impact that auditory stimuli have on our perception of visual fidelity in the presence of stereoscopic 3-D. Building on previous work, we examine the effect of auditory stimuli on our perception of visual fidelity within a stereoscopic 3-D environment.

  3. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  4. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

    In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example is used to create basic geometries. The second example is used to create cylinders with a defined start point and end point. The third example is used to process data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
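    The second example described above, a cylinder defined by a start and end point, maps naturally onto a VRML97 Cylinder node. The sketch below is a hypothetical generator written in Python rather than the paper's Java, and for brevity it handles only the simple vertical (y-axis) case; the function name is illustrative, not from the paper.

```python
import math

# Hypothetical sketch: emit a VRML97 Transform/Cylinder node for a cylinder
# between a start and end point. Only the vertical (y-axis) case is handled;
# the general case would also need a rotation field.

def vrml_cylinder(start, end, radius=0.5):
    """Return a VRML97 node string for a cylinder from start to end."""
    (x0, y0, z0), (x1, y1, z1) = start, end
    height = math.dist(start, end)
    cx, cy, cz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    return (f"Transform {{ translation {cx} {cy} {cz}\n"
            f"  children Shape {{\n"
            f"    appearance Appearance {{ material Material {{ }} }}\n"
            f"    geometry Cylinder {{ radius {radius} height {height} }}\n"
            f"  }}\n}}")

print("#VRML V2.0 utf8")
print(vrml_cylinder((0, 0, 0), (0, 4, 0)))
```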

  5. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  6. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  7. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.

  8. A workflow for the 3D visualization of meteorological data

    NASA Astrophysics Data System (ADS)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and large data sets are produced by their simulations. Combining various climate-model variables with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the Weather Research and Forecasting (WRF) model for two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow is developed that leads from the integration of heterogeneous raw data to visualization using open-source software (e.g. OpenGeoSys Data Explorer, ParaView). These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data, based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and to detect areas of interest, such as regions of convection or wind turbulence. Then, subsets of the data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether the visualizations are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have

  9. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike other earlier propositions to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
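    A minimal X3D scene of the kind the article advocates publishing can be written by hand. The fragment below is an illustrative sketch (the element names follow the X3D standard, but the point coordinates are placeholders): it builds a small X3D document as a string and checks that it is well-formed XML.

```python
import xml.etree.ElementTree as ET

# Illustrative minimal X3D document (Interchange profile) containing a
# PointSet -- the kind of 3D diagram the "X3D pathway" publishes.
# Coordinates are placeholder values.
X3D_SCENE = """<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Interchange" version="3.3">
  <Scene>
    <Shape>
      <PointSet>
        <Coordinate point="0 0 0  1 0 0  0 1 0  0 0 1"/>
      </PointSet>
    </Shape>
  </Scene>
</X3D>
"""

# Parse it back to confirm the document is well-formed XML.
root = ET.fromstring(X3D_SCENE)
print(root.tag, root.get("version"))
```

    Saved with an .x3d extension, such a file can be opened in any X3D-capable viewer or embedded in an interactive HTML page.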

  10. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system, which can be operated in any given environment without GPS, has been developed using a human-operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.
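    The basic step behind such a reconstruction, turning the laser scanner's range/angle readings into Cartesian points, can be sketched as follows. This is a generic illustration assuming a single horizontal scan plane at a fixed height, not the authors' system, which also fuses camera imagery and cart motion.

```python
import math

# Generic sketch: convert a laser scanner's (range, angle) measurements into
# Cartesian 3D points. Assumes one horizontal scan plane at height z; a real
# cart-mounted system would also apply the cart's pose to each point.

def scan_to_points(ranges, angles, z=0.0):
    """Map polar scan readings to (x, y, z) points in the scanner frame."""
    return [(r * math.cos(a), r * math.sin(a), z) for r, a in zip(ranges, angles)]

pts = scan_to_points([1.0, 2.0], [0.0, math.pi / 2])
print(pts[0])  # (1.0, 0.0, 0.0)
```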

  11. Exploring Cultural Heritage Resources in a 3d Collaborative Environment

    NASA Astrophysics Data System (ADS)

    Respaldiza, A.; Wachowicz, M.; Vázquez Hoehne, A.

    2012-06-01

    Cultural heritage is a complex and diverse concept, which brings together a wide domain of information. Resources linked to a cultural heritage site may consist of physical artefacts, books, works of art, pictures, historical maps, aerial photographs, archaeological surveys and 3D models. Moreover, all these resources are listed and described by a variety of metadata specifications that allow their online search and consultation of their most basic characteristics. Some examples include ISO 19115, Dublin Core, AAT, CDWA, CCO, DACS, MARC, MoReq, MODS, MuseumDat, TGN, SPECTRUM, VRA Core and Z39.50. Gateways are in place to fit these metadata standards into those used in an SDI (ISO 19115 or INSPIRE), but substantial work still remains to be done for the complete incorporation of cultural heritage information. Therefore, the aim of this paper is to demonstrate how the complexity of cultural heritage resources can be dealt with through a visual exploration of their metadata within a 3D collaborative environment. 3D collaborative environments are promising tools that represent the new frontier of our capacity for learning, understanding, communicating and transmitting culture.

  12. 3D web visualization of huge CityGML models

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Devigili, F.; Soave, M.; Di Staso, U.; De Amicis, R.

    2015-08-01

    Nowadays, rapid technological development in acquiring geo-spatial information, joined to the capability to process these data in a relatively short period of time, allows the generation of detailed 3D textured city models that will become an essential part of the modern city information infrastructure (Spatial Data Infrastructure) and can be used to integrate various data from different sources for publicly accessible visualisation and many other applications. One of the main bottlenecks, which at the moment limits the use of these datasets to a few experts, is the lack of efficient visualization systems through the web and of interoperable frameworks that standardise access to the city models. The work presented in this paper tries to satisfy these two requirements by developing a 3D web-based visualization system based on OGC standards and effective visualization concepts. The architectural framework, based on Service-Oriented Architecture (SOA) concepts, provides the 3D city data to a web client designed to support the view process in a very effective way. The first part of the work is to design a framework compliant with the 3D Portrayal Service drafted by the Open Geospatial Consortium (OGC) 3D standardization working group. The second part is the development of an effective web client able to render the 3D city models in an efficient way.

  13. 3D visualization of the human cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Zrimec, Tatjana; Mander, Tom; Lambert, Timothy; Parker, Geoffrey

    1995-04-01

    Computer assisted 3D visualization of the human cerebro-vascular system can help to locate blood vessels during diagnosis and to approach them during treatment. Our aim is to reconstruct the human cerebro-vascular system from the partial information collected from a variety of medical imaging instruments and to generate a 3D graphical representation. This paper describes a tool developed for 3D visualization of cerebro-vascular structures. It also describes a symbolic approach to modeling vascular anatomy. The tool, called Ispline, is used to display the graphical information stored in a symbolic model of the vasculature. The vascular model was developed to assist image processing and image fusion. The model consists of a structural symbolic representation using frames and a geometrical representation of vessel shapes and vessel topology. Ispline has proved to be useful for visualizing both the synthetically constructed vessels of the symbolic model and the vessels extracted from a patient's MR angiograms.

  14. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  15. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  16. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    SciTech Connect

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-09-15

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  17. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

    [The indexed excerpt for this thesis contains only front-matter fragments rather than an abstract: list-of-figures entries on OPNET and NetViz visualizations, realistic 3D terrains, OPNET 3DNV connectivity display, and the digitally connected battlefield, plus acronym-list entries (OPNET: Optimized Network Evaluation Tool; NetViz: Network Visualization).]

  18. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new frame of reference for visual-semantic 3D video search and retrieval applications. Existing 3D retrieval applications focus on shape analysis, such as object matching, classification, and retrieval, and do not address video retrieval as a whole. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time, combining BOVW (bag of visual words) and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After the local descriptors are extracted, the TB-PCT (Threshold-Based Predictive Clustering Tree) algorithm is used to generate a visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user. To handle very large amounts of data and achieve efficient retrieval, we incorporate HDFS into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity. PMID:28003793
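    The matching stage described in this record, soft-weighted bag-of-visual-words histograms compared with an L2 distance, can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation; the Gaussian soft-weighting kernel and its sigma parameter are assumptions.

```python
import math

def soft_weight_histogram(descriptor_neighbors, num_words, sigma=1.0):
    """Build a soft-assignment bag-of-visual-words histogram.
    descriptor_neighbors: for each local descriptor, a list of
    (visual_word_index, distance_to_word) pairs for its nearest words.
    Closer visual words receive exponentially larger weight."""
    hist = [0.0] * num_words
    for neighbors in descriptor_neighbors:
        for word, dist in neighbors:
            hist[word] += math.exp(-dist ** 2 / (2 * sigma ** 2))
    # L1-normalise so videos with different descriptor counts are comparable
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def l2_distance(h1, h2):
    """L2 distance between two normalised histograms, as used for matching."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
```

Query results would then be ranked by ascending `l2_distance` between the query histogram and each indexed video's histogram.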

  19. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new frame of reference for visual-semantic 3D video search and retrieval applications. Existing 3D retrieval applications focus on shape analysis, such as object matching, classification, and retrieval, and do not address video retrieval as a whole. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time, combining BOVW (bag of visual words) and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After the local descriptors are extracted, the TB-PCT (Threshold-Based Predictive Clustering Tree) algorithm is used to generate a visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user. To handle very large amounts of data and achieve efficient retrieval, we incorporate HDFS into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity.

  20. Measuring the Visual Salience of 3D Printed Objects.

    PubMed

    Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc

    2016-01-01

    To investigate human viewing behavior on physical realizations of 3D objects, the authors use an eye tracker with scene camera and fiducial markers on 3D objects to gather fixations on the presented stimuli. They use this data to validate assumptions regarding visual saliency that so far have experimentally only been analyzed for flat stimuli. They provide a way to compare fixation sequences from different subjects and developed a model for generating test sequences of fixations unrelated to the stimuli. Their results suggest that human observers agree in their fixations for the same object under similar viewing conditions. They also developed a simple procedure to validate computational models for visual saliency of 3D objects and found that popular models of mesh saliency based on center surround patterns fail to predict fixations.

  1. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed through the display space; the after-image effect of human eyes fuses them into a single volumetric image. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.
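    The timing constraint implied by the after-image effect can be made concrete. The sketch below is illustrative only; the 30 Hz flicker-fusion rate is an assumed round figure, not a specification of the prototype display.

```python
def projector_rate_hz(num_slices, fusion_rate_hz=30.0):
    """Profiling images per second the projector must deliver:
    every slice in the stack has to be redrawn once per fusion
    period for the eye to fuse the sweep into one volumetric image."""
    return num_slices * fusion_rate_hz

def seconds_per_slice(num_slices, fusion_rate_hz=30.0):
    """Time budget available for displaying a single profiling image."""
    return 1.0 / projector_rate_hz(num_slices, fusion_rate_hz)
```

For a stack of 100 profiling images, this gives 3000 images per second, i.e. roughly a third of a millisecond per slice, which is why such displays need very fast projection hardware.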

  2. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
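    As a concrete reference point, the core of the basic 2D LIC algorithm that these volume techniques extend can be written compactly: each output texel averages an input noise texture along the local streamline. This is a deliberately simplified sketch (nearest-neighbour sampling, fixed unit steps, and the centre texel counted in both trace directions), not the integrator used in the article.

```python
def lic(noise, vx, vy, length=5):
    """Minimal Line Integral Convolution on a 2D grid.
    noise, vx, vy: equal-shaped 2D lists (rows of floats).
    Each output pixel averages the noise texture along the local
    streamline traced forward and backward `length` steps."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (1, -1):
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break  # streamline left the grid
                    total += noise[iy][ix]
                    count += 1
                    # step one unit along the normalised vector field
                    mag = (vx[iy][ix] ** 2 + vy[iy][ix] ** 2) ** 0.5 or 1.0
                    px += direction * vx[iy][ix] / mag
                    py += direction * vy[iy][ix] / mag
            out[y][x] = total / max(count, 1)
    return out
```

Correlation along streamlines makes the flow direction visible: pixels on the same streamline average overlapping noise samples and so end up with similar values.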

  3. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.
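    The core idea behind a continuous level-of-detail scheme such as clipmapping is to pick, per region, the coarsest detail level whose geometric error still projects below a screen-space tolerance. A minimal distance-based level selector might look like the following; the error model (error doubling per level) and the constants are illustrative assumptions, not the tool's actual heuristics.

```python
def lod_level(distance, base_error=1.0, screen_tolerance=2.0, max_level=10):
    """Pick the coarsest detail level whose world-space error,
    projected to the screen from `distance`, stays under
    `screen_tolerance` (in arbitrary pixel-like units).
    Each coarser level is assumed to double the geometric error."""
    level = 0
    while level < max_level:
        # projected error shrinks roughly linearly with viewer distance
        projected_next = (base_error * 2 ** (level + 1)) / max(distance, 1e-6)
        if projected_next > screen_tolerance:
            break  # the next-coarser level would be visibly wrong
        level += 1
    return level
```

Nearby terrain therefore renders at fine levels while distant terrain drops to coarse ones, which is what keeps billion-vertex data sets above 30 frames per second.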

  4. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  5. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  6. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  7. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization, creating 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  8. A new visualization method for 3D head MRA data

    NASA Astrophysics Data System (ADS)

    Ohashi, Satoshi; Hatanaka, Masahiko

    2008-03-01

    In this paper, we propose a new visualization method for head MRA data which helps the user easily determine the positioning of MPR and/or MIP images based on the blood vessel network structure (the anatomic location of blood vessels). This visualization method has the following features: (a) the blood vessel (cerebral artery) network structure in 3D head MRA data is portrayed as a 3D line structure; (b) the MPR or MIP images are combined with the blood vessel network structure and displayed in a 3D visualization space; (c) the positioning of MPR or MIP images is decided based on the anatomic location of blood vessels; (d) the image processing and drawing can be performed in real time without a special hardware accelerator. As a result, we believe that our method is well suited to positioning MPR or MIP images relative to the blood vessel network structure. Moreover, we think that the user of this method can obtain the 3D information (position, angle, direction) of both these images and the blood vessel network structure.

  9. 3-D Visualization on Workspace of Parallel Manipulators

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshito; Yokomichi, Isao; Ishii, Junko; Makino, Toshiaki

    In parallel mechanisms, the form and volume of the workspace change variously with the attitude of the platform. This paper presents a method to search for the workspace of parallel mechanisms with 6-DOF, together with 3D visualization of that workspace. The workspace search determines the movable range of the central point of the platform when it moves with a given orientation. To search the workspace, geometric analysis based on inverse kinematics is considered. 2D plots of the calculations are compared with positions measured by position sensors, and the test results show good agreement with the simulation results. The workspace variations are demonstrated in terms of 3D and 2D plots for prototype mechanisms. The workspace plots are created with OpenGL and Visual C++ by implementation of the algorithm. An application module is developed which displays the workspace of the mechanism in 3D images. The effectiveness and practicability of 3D visualization of the workspace are successfully demonstrated on 6-DOF parallel mechanisms.
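    The inverse-kinematics workspace search described here amounts to a feasibility test repeated over a grid of candidate platform positions. The sketch below assumes a generic leg-length (Stewart-platform-style) actuation model with stroke limits lmin..lmax; the joint layouts, limits, and grid extents are placeholders, not the prototype's parameters.

```python
import math

def in_workspace(point, base_joints, platform_joints, lmin, lmax):
    """Inverse-kinematics feasibility test: with the platform centre
    translated to `point` (orientation held fixed), every leg length
    must stay within the actuator stroke [lmin, lmax]."""
    px, py, pz = point
    for (bx, by, bz), (qx, qy, qz) in zip(base_joints, platform_joints):
        leg = math.dist((bx, by, bz), (px + qx, py + qy, pz + qz))
        if not (lmin <= leg <= lmax):
            return False
    return True

def workspace_slice(base_joints, platform_joints, lmin, lmax, z,
                    extent=2.0, steps=21):
    """Grid-scan one horizontal slice of the workspace at height z;
    stacking slices over z gives the 3D workspace volume to plot."""
    pts = []
    for i in range(steps):
        for j in range(steps):
            x = -extent + 2 * extent * i / (steps - 1)
            y = -extent + 2 * extent * j / (steps - 1)
            if in_workspace((x, y, z), base_joints, platform_joints,
                            lmin, lmax):
                pts.append((x, y))
    return pts
```

Each accepted grid point becomes one vertex of the 3D workspace plot; rerunning the scan for a different platform attitude shows how the workspace deforms.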

  10. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
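    The unsupervised MST clustering referenced here can be reproduced in miniature with Kruskal's algorithm: build the minimum spanning tree over pairwise distances, then cut the longest edges so the remaining components form clusters (equivalent to single-linkage clustering). This standalone sketch is not the authors' Python/Blender pipeline.

```python
import math
from itertools import combinations

def mst_clusters(points, num_clusters):
    """Cluster 3D points by building an MST (Kruskal) and dropping the
    num_clusters - 1 longest edges; returns a component label per point."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # all pairwise edges, shortest first
    edges = sorted(
        (math.dist(points[a], points[b]), a, b)
        for a, b in combinations(range(len(points)), 2)
    )
    mst = []
    for d, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((d, a, b))
    # keep only the shortest MST edges; cutting the rest forms clusters
    mst.sort()
    keep = mst[: len(points) - num_clusters]
    parent = list(range(len(points)))
    for d, a, b in keep:
        parent[find(a)] = find(b)
    return [find(i) for i in range(len(points))]
```

On a galaxy catalog, the cut edges tend to fall in the low-density gaps between structures, so components trace groups and filaments.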

  11. 3D visualization of gene clusters and networks

    NASA Astrophysics Data System (ADS)

    Zhang, Leishi; Sheng, Weiguo; Liu, Xiaohui

    2005-03-01

    In this paper, we try to provide a global view of the DNA microarray gene expression data analysis and modeling process by combining novel and effective visualization techniques with data mining algorithms. An integrated framework has been proposed to model and visualize short, high-dimensional gene expression data. The framework reduces the dimensionality of variables before applying an appropriate temporal modeling method. A prototype has been built using Java3D to visualize the framework. The prototype takes gene expression data as input, clusters the genes, displays the clustering results using a novel graph layout algorithm, models individual gene clusters using a Dynamic Bayesian Network, and then visualizes the modeling results using simple but effective visualization techniques.

  12. Toward mobile 3D visualization for structural biologists.

    PubMed

    Tanramluk, Duangrudee; Akavipat, Ruj; Charoensawan, Varodom

    2013-12-01

    Technological advances in crystallography have led to the ever-rapidly increasing number of biomolecular structures deposited in public repertoires. This undoubtedly shifts the bottleneck of structural biology research from obtaining high-quality structures to data analysis and interpretation. The recently available glasses-free autostereoscopic laptop offers an unprecedented opportunity to visualize and study 3D structures using a much more affordable, and for the first time, portable device. Together with a gamepad re-programmed for 3D structure controlling, we describe how the gaming technologies can deliver the output 3D images for high-quality viewing, comparable to that of a passive stereoscopic system, and can give the user more control and flexibility than the conventional controlling setup using only a mouse and a keyboard.

  13. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose, as new quality metrics, several factors affecting human perception of depth, drawn from three aspects of 3D video: spatial characteristics, temporal characteristics and scene-movement characteristics. These factors play important roles in the viewer's visual perception: if many objects move at a certain velocity and the scene changes rapidly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks is computed within a frame and between frames of the 3D stereoscopic video. Each depth frame is divided into a number of blocks that overlap and share pixels (by half of a block) in the horizontal and vertical directions, which avoids ignoring the edge information of objects in the image. The distribution of these data is then summarized by the kurtosis over regions the human eye mainly gazes at, and weight values are obtained from the normalized kurtosis. Applied to an individual depth frame, the method yields the spatial variation; applied between the current and previous frames, it yields the temporal variation and the scene-movement variation. The three factors are linearly combined to give an objective assessment value of the 3D video directly, with the coefficients of the three factors estimated by linear regression. Finally, experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
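    Two of the building blocks of the proposed metric, block-wise MSE over half-overlapping blocks and the kurtosis used to derive weight values, can be sketched directly. The block size and the simple population-moment kurtosis below are assumptions for illustration, not the paper's exact parameters.

```python
def kurtosis(values):
    """Population kurtosis (non-excess): E[(x - mu)^4] / sigma^4.
    Returns 0.0 for a constant sequence (zero variance)."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    if var == 0:
        return 0.0
    m4 = sum((v - mu) ** 4 for v in values) / n
    return m4 / var ** 2

def block_mse(frame_a, frame_b, block=4):
    """MSE of each block between two depth frames, with blocks
    overlapping by half a block in both directions so object edges
    are not missed; frames are 2D lists of equal shape."""
    h, w = len(frame_a), len(frame_a[0])
    step = block // 2
    mses = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            s = 0.0
            for dy in range(block):
                for dx in range(block):
                    d = frame_a[y + dy][x + dx] - frame_b[y + dy][x + dx]
                    s += d * d
            mses.append(s / (block * block))
    return mses
```

Normalizing the kurtosis of these block statistics gives the per-factor weights, and the final score is a linear combination of the weighted spatial, temporal, and scene-movement terms.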

  14. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  15. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  16. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  17. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  18. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra-large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm3 on large 100,000 m3 models, are presented in detail. The techniques tackle the core challenges of these requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth-data noise filtering. Other important aspects of extra-large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra-large point clouds enabling real-time visualization of huge cloud spaces in conventional web browsers.
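    A planar-based decimation step of the kind mentioned can be illustrated as follows: points lying on an already-detected plane carry little extra shape information, so most of them can be dropped while off-plane points are kept. Plane detection itself (e.g. by RANSAC) is assumed to happen upstream; the threshold and keep-ratio below are illustrative, not the paper's values.

```python
def decimate_planar(points, plane, eps=0.01, keep_every=10):
    """Decimate a point cloud against one detected plane.
    plane = (a, b, c, d) with (a, b, c) a unit normal: a point lies
    on the plane when |ax + by + cz + d| < eps. On-plane points are
    subsampled (1 in keep_every); off-plane points all survive."""
    a, b, c, d = plane
    kept, on_plane_seen = [], 0
    for (x, y, z) in points:
        dist = abs(a * x + b * y + c * z + d)
        if dist < eps:
            if on_plane_seen % keep_every == 0:
                kept.append((x, y, z))
            on_plane_seen += 1
        else:
            kept.append((x, y, z))
    return kept
```

With most indoor surface area consisting of walls, floors and ceilings, subsampling planar regions at 1-in-10 or sparser readily yields the 80-95% size reductions quoted above.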

  19. NASA VERVE: Interactive 3D Visualization Within Eclipse

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar; Allan, Mark B.

    2014-01-01

    At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE, a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013: Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts control a free-flying robot on board the ISS. We will show in detail how to code with VERVE and how SWT controls interact with the Ardor3D scenario, and we will share example code.

  20. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  1. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

    Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods in tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike with a manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were subsequently examined, and a region of interest (ROI) was selected in each frame where tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent color-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and the categorization of breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.
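    The interpolation step, turning per-frame 2-D parametric maps into a continuous 3-D volume, can be sketched as simple linear interpolation between adjacent frames along the scan axis. This illustrates the general idea only and is not the rendering pipeline used in the study.

```python
def interpolate_stack(maps, z_positions, z_query):
    """Linearly interpolate between per-frame 2-D parametric maps.
    maps[i] is a 2-D list of parameter values acquired at depth
    z_positions[i]; returns the interpolated map at z_query."""
    for i in range(len(z_positions) - 1):
        z0, z1 = z_positions[i], z_positions[i + 1]
        if z0 <= z_query <= z1:
            t = (z_query - z0) / (z1 - z0)  # blend weight in [0, 1]
            m0, m1 = maps[i], maps[i + 1]
            return [[(1 - t) * a + t * b for a, b in zip(r0, r1)]
                    for r0, r1 in zip(m0, m1)]
    raise ValueError("z_query outside the scanned range")
```

Evaluating this at many intermediate depths fills the Cartesian 3-D grid that is then rendered as the transparent color-coded tumour volume.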

  2. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
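The texel-projection idea, fetching tiles from whatever tier makes their texels project to roughly one screen pixel, can be illustrated with a simple distance-based tier selector. The function and its parameters are assumptions for illustration, not StackVis's actual criterion:

```python
import math

def select_tier(distance, texel_size, px_per_unit, max_tier):
    """Pick the quad-tree tier (0 = full resolution; each coarser tier
    doubles the texel size) whose texels project to ~1 screen pixel."""
    # Screen-pixel footprint of a tier-0 texel at this viewing distance.
    projected = texel_size * px_per_unit / distance
    if projected >= 1.0:
        return 0  # close enough that full resolution is warranted
    # Each coarser tier doubles the projected texel size.
    return min(max_tier, int(round(math.log2(1.0 / projected))))

# Moving 8x farther away shifts the choice three tiers coarser.
print(select_tier(100.0, 1.0, 100.0, 8), select_tier(800.0, 1.0, 100.0, 8))
```

Because every tile is chosen to cover about the same number of pixels, bandwidth and rendering cost stay roughly constant regardless of zoom level.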

  3. Using 3D Interactive Visualizations In Teacher Workshops

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Cooper, I.; de Groot, R.; Shindle, W.; Mellors, R.; Benthien, M.

    2004-12-01

Extending Earth Science learning activities from 2D to 3D was central to this year's second annual Teacher Education Workshop, which was held at the Scripps Institution of Oceanography's Visualization Center (SIO VizCenter; http://siovizcenter.ucsd.edu/). Educational specialists and researchers from several institutions led this collaborative workshop, which was supported by the Southern California Earthquake Center (SCEC; http://www.scec.org/education), the U.S. Geological Survey (USGS), the SIO VizCenter, San Diego State University (SDSU) and the Incorporated Research Institutions for Seismology (IRIS). The workshop was the latest in a series of teacher workshops run by SCEC and the USGS with a focus on earthquakes and seismic hazard. A particular emphasis of the 2004 workshop was the use of sophisticated computer visualizations that easily illustrated geospatial relationships. These visualizations were displayed on a large wall-sized curved screen, which allowed the workshop participants to be literally immersed in the images being discussed. In this way, the teachers explored current geoscience datasets in a novel and interactive fashion, which increased their understanding of basic concepts relevant to the national science education standards and alleviated some of their misconceptions. For example, earthquake hypocenter data were viewed in interactive 3D and the teachers immediately understood that: (1) The faults outlined by the earthquake locations are 3D planes, not 2D lines; (2) The earthquakes map out plate tectonic boundaries, where the 3D structure of some boundaries are more complex than others; (3) The deepest earthquakes occur in subduction zones, whereas transform and divergent plate boundaries tend to have shallower quakes. A major advantage is that these concepts are immediately visible in 3D and do not require elaborate explanations, as is often necessary with traditional 2D maps. This enhances the teachers' understanding in an efficient and

  4. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different soft- and hardware requirements, such as smartphones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.
Underlying the EarthServer web client

  5. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
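The depth-scaled template matching in the last step can be sketched as follows. The nearest-neighbour resampling, the exhaustive scan, and all names are illustrative stand-ins, not the flight software:

```python
import numpy as np

def rescale(template, scale):
    """Nearest-neighbour resample of a 2D template by a scale factor."""
    h, w = template.shape
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    rows = np.clip((np.arange(nh) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(nw) / scale).astype(int), 0, w - 1)
    return template[np.ix_(rows, cols)]

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match(image, depth_map, template, template_depth):
    """At each candidate pixel, scale the template by the ratio of its
    original depth to the local depth, then correlate; return the best
    (row, col, score). Real code would restrict the scan to a window."""
    best = (-1, -1, -1.0)
    th, tw = template.shape
    for r in range(image.shape[0] - th):
        for c in range(image.shape[1] - tw):
            scaled = rescale(template, template_depth / depth_map[r, c])
            sh, sw = scaled.shape
            if r + sh > image.shape[0] or c + sw > image.shape[1]:
                continue
            score = ncc(image[r:r + sh, c:c + sw], scaled)
            if score > best[2]:
                best = (r, c, score)
    return best

# Synthetic check: a patch cut from the image at uniform depth matches itself.
rng = np.random.default_rng(1)
img = rng.standard_normal((40, 40))
depth = np.full((40, 40), 2.0)
r, c, score = match(img, depth, img[10:18, 12:20].copy(), template_depth=2.0)
```

Scaling by the depth ratio keeps the template's footprint consistent as the rover approaches or retreats from the target.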
The program could be a core for building application programs for systems

  6. 3D-printer visualization of neuron models.

    PubMed

    McDougal, Robert A; Shepherd, Gordon M

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.
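The diameter-expansion step can be illustrated on SWC-style morphology rows (id, type, x, y, z, radius, parent). The scale factor and minimum printable radius below are illustrative values, not those used by 3DModelDB:

```python
# Thin neurites would be too fragile to print at their true diameters, so
# radii are scaled up and clamped to a minimum printable size before the
# tracing is meshed. Values here are assumptions for illustration.
MIN_PRINT_RADIUS = 0.2  # mm at print scale; illustrative

def expand_radii(swc_rows, scale=3.0, min_radius=MIN_PRINT_RADIUS):
    """Return SWC rows with each radius scaled and clamped for printing."""
    out = []
    for (nid, ntype, x, y, z, r, parent) in swc_rows:
        out.append((nid, ntype, x, y, z, max(r * scale, min_radius), parent))
    return out

rows = [(1, 1, 0.0, 0.0, 0.0, 5.0, -1),    # soma: large, scales normally
        (2, 3, 10.0, 0.0, 0.0, 0.02, 1)]   # thin dendrite: gets clamped
print(expand_radii(rows)[1][5])  # 0.2 - raised to the printable minimum
```

After expansion, a surface-generation algorithm such as CTNG turns the swept cylinders into a watertight mesh suitable for an STL export.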

  7. 3D-printer visualization of neuron models

    PubMed Central

    McDougal, Robert A.; Shepherd, Gordon M.

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases. PMID:26175684

  8. Advances in 3D visualization of air quality data

    NASA Astrophysics Data System (ADS)

    San José, R.; Pérez, J. L.; González, R. M.

    2012-10-01

Air quality models produce a considerable amount of data, and raw data can be hard to conceptualize, particularly when data sets reach terabytes in size; thus, to understand the atmospheric processes and consequences of air pollution, it is necessary to analyse the results of the air pollution simulations. The development of the visualization is shaped by the requirements of the different groups of users. We show different possibilities to represent 3D atmospheric data and geographic data. We present several examples developed with the IDV software, a generic tool that can be used directly with the simulation results. The remaining solutions are specific applications developed by the authors, which integrate different tools and technologies. In the case of the buildings, it has been necessary to make a 3D model from the building data using the COLLADA standard format. In the case of the Google Earth approach, we use the Ferret software for the atmospheric part. In the case of gvSIG-3D, for the atmospheric visualization we have used the different geometric figures available: "QuadPoints", "Polylines", "Spheres" and isosurfaces. The last of these is also displayed following the VRML standard.

  9. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers, to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involved viewing of complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and the need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  10. Comparative visual analysis of 3D urban wind simulations

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Salim, Mohamed; Grawe, David; Leitl, Bernd; Böttinger, Michael; Schlünzen, Heinke

    2016-04-01

Climate simulations are conducted in large quantity for a variety of different applications. Many of these simulations focus on global developments and study the Earth's climate system using a coupled atmosphere-ocean model. Other simulations are performed on much smaller regional scales to study fine-grained climatic effects. These microscale climate simulations pose similar, yet also different, challenges for the visualization and analysis of the simulation data. Modern interactive visualization and data analysis techniques are very powerful tools to assist the researcher in answering and communicating complex research questions. This presentation discusses comparative visualization for several different wind simulations, which were created using the microscale climate model MITRAS. The simulations differ in wind direction and speed, but are all centered on the same simulation domain: an area of Hamburg-Wilhelmsburg that hosted the IGA/IBA exhibition in 2013. The experiments contain a scenario case to analyze the effects of single buildings, as well as to examine the impact of the Coriolis force within the simulation. The scenario case is additionally compared with real measurements from a wind tunnel experiment to ascertain the accuracy of the simulation and the model itself. We also compare different approaches to tree modeling and evaluate the stability of the model. In this presentation, we describe not only our workflow to efficiently and effectively visualize microscale climate simulation data using common 3D visualization and data analysis techniques, but also discuss how to compare variations of a simulation and how to highlight the subtle differences between them. For the visualizations we use a range of different 3D tools that feature techniques for statistical data analysis, data selection, as well as linking and brushing.

  11. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework, PRoViP, establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are digital terrain models, ortho images, 3D meshes, and occlusion, solar-illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features

  12. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data, we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  13. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material in geoscience courses in order to support and to improve the teaching-learning process, especially in well-known difficult topics for students. As part of the project, professors and students are trained in visualization techniques, then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only among each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions by videoconferences with other universities and researchers.

  14. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  15. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

AUTUMN KULAGA; mentor: PATRICK GILLICH, Warfighter Survivability Branch. This report discusses the Web-based 3-D environment prototype being developed to understand the feasibility of integrating WebGL into the Visual Anatomical Injury Descriptor (Visual AID). Using WebGL will

  16. Network-based visualization of 3D landscapes and city models.

    PubMed

    Royan, Jérôme; Gioia, Patrick; Cavagna, Romain; Bouville, Christian

    2007-01-01

    To improve the visualization of large 3D landscapes and city models in a network environment, the authors use two different types of hierarchical level-of-detail models for terrain and groups of buildings. They also leverage the models to implement progressive streaming in both client-server and peer-to-peer network architectures.
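Hierarchical level-of-detail schemes like those described are typically driven by a screen-space-error test: refine a node while its geometric error projects to more than a pixel tolerance. The abstract does not give the authors' exact criterion, so the rule below is a generic sketch with illustrative names and an assumed 2-pixel tolerance:

```python
import math

def needs_refinement(geometric_error, distance, screen_height_px,
                     vfov_rad, tolerance_px=2.0):
    """Refine a terrain or building-group node while its geometric error
    projects to more than `tolerance_px` pixels on screen."""
    # Perspective projection: world-space error scaled to screen pixels.
    projected_px = geometric_error * screen_height_px / (
        2.0 * distance * math.tan(vfov_rad / 2.0))
    return projected_px > tolerance_px

# A 1 m error 10 m away dominates the view; the same error at 1 km does not.
print(needs_refinement(1.0, 10.0, 1080, math.radians(60)),
      needs_refinement(1.0, 1000.0, 1080, math.radians(60)))  # True False
```

In a streaming setting, the same test also decides which finer-level tiles are worth requesting from the server or from peers.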

  17. Interactive 3D visualization speeds well, reservoir planning

    SciTech Connect

    Petzet, G.A.

    1997-11-24

Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  18. Visual discomfort caused by color asymmetry in 3D displays

    NASA Astrophysics Data System (ADS)

    Chen, Zaiqing; Huang, Xiaoqiao; Tai, Yonghan; Shi, Junsheng; Yun, Lijun

    2016-10-01

Color asymmetry is a common phenomenon in 3D displays, and it can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we have experimentally measured the CCDL for five sample color points selected from the 1976 CIE u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetric image pairs. In these image pairs, left and right circular patches were horizontally offset by five levels of disparity (0, ±60, ±120 arc minutes) along six color directions. The experimental results showed that the CCDL for each sample point varied with the level of disparity and the color direction. The minimum CCDL is 0.019 Δu'v', and the maximum CCDL is 0.133 Δu'v'. The database collected in this study might help 3D system design and 3D content creation.
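The comfort criterion can be expressed directly in code: convert each eye's stimulus to CIE 1976 u'v' chromaticity and compare the Euclidean difference against a CCDL threshold. The conversion formulas are the standard CIE ones; treating the study's most conservative limit (0.019 Δu'v') as a single default threshold is our simplifying assumption, since the measured CCDL actually varies with disparity and color direction:

```python
import math

def xyz_to_uv(X, Y, Z):
    """CIE 1931 XYZ -> CIE 1976 u'v' chromaticity coordinates."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def delta_uv(left_xyz, right_xyz):
    """Interocular color difference in the u'v' chromaticity diagram."""
    u1, v1 = xyz_to_uv(*left_xyz)
    u2, v2 = xyz_to_uv(*right_xyz)
    return math.hypot(u1 - u2, v1 - v2)

def comfortable(left_xyz, right_xyz, ccdl=0.019):
    """True if the left/right color difference stays within the CCDL."""
    return delta_uv(left_xyz, right_xyz) <= ccdl

d65 = (95.047, 100.0, 108.883)   # identical stimuli in both eyes
ill_a = (109.85, 100.0, 35.58)   # a strongly different white point
print(comfortable(d65, d65), comfortable(d65, ill_a))  # True False
```

A production pipeline would apply such a check per region of the stereo pair, flagging areas where crosstalk or asymmetric color grading exceeds the comfort limit.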

  19. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  20. SERVIR Viz: A 3D Visualization Tool for Mesoamerica.

    NASA Astrophysics Data System (ADS)

    Mercurio, M.; Coughlin, J.; Deneau, D.

    2007-05-01

SERVIR Viz is a customized version of NASA's WorldWind, which is a freely distributed, open-source, web-enabled, 3D earth exploration tool. IAGT developed SERVIR Viz in a joint effort with SERVIR research partners to create a visualization framework for geospatial data resources available to the SERVIR project. SERVIR Viz is customized by providing users with newly developed custom tools, enhancements to existing open source tools and a specialized toolbar that allows shortcut access to existing tools. Another key feature is the ability to visualize remotely-hosted framework GIS data layers, maps, real-time satellite images, and other SERVIR products relevant to the Mesoamerica region using the NASA WorldWind visualization engine and base mapping layers. The main users of SERVIR Viz are the seven countries of Mesoamerica, SERVIR participants, educators, scientists, decision-makers, and the general public. SERVIR Viz enhances the SERVIR project infrastructure by providing access to NASA GEOSS data products and internet-served Mesoamerica-centric GIS data products within a tool developed specifically to promote use of GIS and visualization technologies in the decision support goals of the SERVIR project. In addition, SERVIR Viz can be toggled between English and Spanish to support a wide cross section of users, and development continues to support new data and user requirements. This presentation will include a live demonstration of SERVIR Viz.

  1. Visualization of large scale geologically related data in virtual 3D scenes with OpenGL

    NASA Astrophysics Data System (ADS)

    Seng, Dewen; Liang, Xi; Wang, Hongxia; Yue, Guoying

    2007-11-01

This paper demonstrates a method for three-dimensional (3D) reconstruction and visualization of large scale multidimensional surficial, geological and mine planning data with the programmable visualization environment OpenGL. A simulation system developed by the authors is presented for importing, filtering and visualizing multidimensional geologically related data. The approach for the visual simulation of a complicated mining engineering environment implemented in the system is described in detail. Aspects like presentation of multidimensional data with spatial dependence, navigation in the surficial and geological frame of reference and in time, and interaction techniques are presented. The system supports real 3D landscape representations. Furthermore, the system provides many visualization methods for rendering multidimensional data within virtual 3D scenes and combines them with several navigation techniques. Real data derived from an iron mine in Wuhan, China, demonstrates the effectiveness and efficiency of the system. A case study with the results and benefits achieved by using the real 3D representations and navigation of the system is given.

  2. JHelioviewer: Visualizing the Sun and Heliosphere in 3D

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Spoerri, S.; Pagel, S.

    2012-12-01

The next generation of heliophysics missions, Solar Orbiter and Solar Probe Plus, will focus on exploring the linkage between the Sun and the heliosphere. These new missions will collect unique data that will allow us to study, e.g., the coupling between macroscopic physical processes and those on kinetic scales, the generation of solar energetic particles and their propagation into the heliosphere, and the origin and acceleration of solar wind plasma. Already today, NASA's Solar Dynamics Observatory returns 1.4 TB/day of high-resolution solar images, magnetograms and EUV irradiance data. Within a few years, the scientific community will thus have access to petabytes of multidimensional remote-sensing and complex in-situ observations from different vantage points, complemented by petabytes of simulation data. Answering overarching science questions like "How do solar transients drive heliospheric variability and space weather?" will only be possible if the community has the necessary tools at hand. As of today, there is an obvious lack of capability to both visualize these data and assimilate them into sophisticated models to advance our knowledge. A key piece needed to bridge the gap between observables, derived quantities like vector fields and model output is a tool to routinely and intuitively visualize large heterogeneous, multidimensional, time-dependent data sets. While a few tools exist to visualize, e.g., 3D data sets for a small number of time steps, the space sciences community is lacking the equipment to do this (i) on a routine basis, (ii) for complex multidimensional data sets from various instruments and vantage points and (iii) in an extensible and modular way that is open for future improvements and interdisciplinary usage. In this contribution, we will present recent progress in visualizing the Sun and its magnetic field in 3D using the open-source JHelioviewer framework, which is part of the ESA/NASA Helioviewer Project. Among other features

  3. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  4. Measuring Knowledge Acquisition in 3D Virtual Learning Environments.

    PubMed

    Nunes, Eunice P dos Santos; Roque, Licínio G; Nunes, Fatima de Lourdes dos Santos

    2016-01-01

    Virtual environments can contribute to the effective learning of various subjects for people of all ages. Consequently, they assist in reducing the cost of maintaining physical structures of teaching, such as laboratories and classrooms. However, the measurement of how learners acquire knowledge in such environments is still incipient in the literature. This article presents a method to evaluate the knowledge acquisition in 3D virtual learning environments (3D VLEs) by using the learner's interactions in the VLE. Three experiments were conducted that demonstrate the viability of using this method and its computational implementation. The results suggest that it is possible to automatically assess learning in predetermined contexts and that some types of user interactions in 3D VLEs are correlated with the user's learning differential.

  5. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenetic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and for other clinical, pedagogical, and research endeavors.

  6. 3D panorama stereo visual perception centering on the observers

    NASA Astrophysics Data System (ADS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-09-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality.
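
    The core computation in such a scan is a plane-ray intersection: each laser pixel in the panoramic image defines a viewing ray that is intersected with the known laser plane. A minimal sketch, assuming a camera at the origin, a horizontal laser plane at a known height, and an already-calibrated mapping from pixel to azimuth/elevation angles (the function name and simplified geometry are illustrative, not the ASODVS implementation):

```python
import numpy as np

def laser_point_3d(azimuth, elevation, plane_z):
    """Intersect the viewing ray of a laser pixel (camera at the origin,
    angles in radians) with the horizontal laser plane z = plane_z."""
    d = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the laser plane")
    t = plane_z / d[2]              # ray parameter at the plane
    if t <= 0:
        raise ValueError("laser plane is behind the camera")
    return t * d                    # 3D point on the laser plane
```

    For a laser plane 1 m below the camera, a point seen straight ahead at -45° elevation reconstructs to (1, 0, -1) m.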

  7. Visualizing 3D Fracture Morphology in Granular Media

    NASA Astrophysics Data System (ADS)

    Dalbe, M. J.; Juanes, R.

    2015-12-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed, and control the level of confining stress. We use an index matching technique to directly visualize the injection of a liquid in a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  8. Visualizing 3D fracture morphology in granular media

    NASA Astrophysics Data System (ADS)

    Dalbe, Marie-Julie; Juanes, Ruben

    2015-11-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed, and control the level of confining stress. We use an index matching technique to directly visualize the injection of a liquid in a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  9. Visualizing 3D velocity fields near contour surfaces

    SciTech Connect

    Max, N.; Crawfis, R.; Grant, C.

    1994-03-01

    Vector field rendering is difficult in 3D because the vector icons overlap and hide each other. We propose four different techniques for visualizing vector fields only near surfaces. The first uses motion blurred particles in a thickened region around the surface. The second uses a voxel grid to contain integral curves of the vector field. The third uses many antialiased lines through the surface, and the fourth uses hairs sprouting from the surface and then bending in the direction of the vector field. All the methods use the graphics pipeline, allowing real time rotation and interaction, and the first two methods can animate the texture to move in the flow determined by the velocity field.
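
    The hair technique can be sketched as a short Euler integration of the velocity field seeded at a surface point: each polyline starts at the surface and bends in the local flow direction. A minimal sketch (the step count and step size here are illustrative; the paper renders such primitives through the graphics pipeline):

```python
import numpy as np

def hair(seed, field, n_steps=8, step=0.05):
    """Grow a short 'hair' polyline from a surface point by Euler
    integration of a velocity field (a callable p -> 3-vector)."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        v = np.asarray(field(pts[-1]), dtype=float)
        speed = np.linalg.norm(v)
        if speed < 1e-12:           # stagnation point: stop growing
            break
        pts.append(pts[-1] + step * v / speed)
    return np.array(pts)            # up to (n_steps + 1) x 3 vertices
```

    Normalizing the velocity keeps every hair the same length, so the geometry encodes direction while color or texture can encode magnitude.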

  10. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  11. Automatic 3-D Point Cloud Classification of Urban Environments

    DTIC Science & Technology

    2008-12-01

    paper, we address the problem of automated interpretation of 3-D point clouds from scenes of urban and natural environments; our analysis is...over 10 km of traverse. We implemented three geometric features commonly used in spectral analysis of point clouds. We define λ2 ≥ λ1 ≥ λ0 to be
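
    The eigenvalues λ2 ≥ λ1 ≥ λ0 are typically those of the covariance matrix of a local point neighborhood, combined into saliency features that distinguish scattered, linear, and planar structure. A sketch of one common formulation (the exact feature definitions in the report may differ):

```python
import numpy as np

def spectral_features(neighborhood):
    """Shape features from the covariance eigenvalues l2 >= l1 >= l0 of
    an N x 3 point neighborhood: returns (linearity, planarity, scatter)
    as (l2 - l1, l1 - l0, l0), one common formulation."""
    cov = np.cov(np.asarray(neighborhood, dtype=float).T)
    l0, l1, l2 = np.sort(np.linalg.eigvalsh(cov))   # ascending order
    return l2 - l1, l1 - l0, l0
```

    A flat patch of points scores high on planarity and near zero on the other two, while a wire or branch scores high on linearity.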

  12. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It recovers the full 3D view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. A study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the precision is limited by that of the eye-tracking instrument itself (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as bring new potential to automated processing for human factors technologies.

  13. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct a submarine pipeline and its surrounding seabed terrain in the computer, using the Horde3D graphics rendering engine on top of the foundation database "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and to show the relevant data collected from its monitoring.

  14. Trans3D: a free tool for dynamical visualization of EEG activity transmission in the brain.

    PubMed

    Blinowski, Grzegorz; Kamiński, Maciej; Wawer, Dariusz

    2014-08-01

    The problem of functional connectivity in the brain is in the focus of attention nowadays, since it is crucial for understanding information processing in the brain. A large repertoire of measures of connectivity have been devised, some of them being capable of estimating time-varying directed connectivity. Hence, there is a need for a dedicated software tool for visualizing the propagation of electrical activity in the brain. To this aim, the Trans3D application was developed. It is an open access tool based on widely available libraries and supporting both Windows XP/Vista/7(™), Linux and Mac environments. Trans3D can create animations of activity propagation between electrodes/sensors, which can be placed by the user on the scalp/cortex of a 3D model of the head. Various interactive graphic functions for manipulating and visualizing components of the 3D model and input data are available. An application of the Trans3D tool has helped to elucidate the dynamics of the phenomena of information processing in motor and cognitive tasks, which otherwise would have been very difficult to observe. Trans3D is available at: http://www.eeg.pl/.

  15. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows for users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-average dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capability are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
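
    A daily orbit path reduces to a KML LineString over the day's spacecraft positions. A minimal sketch of that structure (illustrative only; the mission software also embeds footprint polygons and the per-frame ancillary data described above):

```python
def orbit_kml(name, track):
    """Serialize (lon, lat, alt_m) spacecraft positions as a minimal KML
    LineString document -- a sketch of the format only."""
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in track)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        f"  <Placemark><name>{name}</name>\n"
        "    <LineString><altitudeMode>absolute</altitudeMode>\n"
        f"      <coordinates>{coords}</coordinates>\n"
        "    </LineString>\n"
        "  </Placemark>\n"
        "</kml>\n"
    )
```

    The resulting file loads directly into Google Earth, which renders the LineString as a 3D track at the given altitudes.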

  16. In vitro analysis of chemotactic leukocyte migration in 3D environments.

    PubMed

    Sixt, Michael; Lämmermann, Tim

    2011-01-01

    Cell migration on two-dimensional (2D) substrates follows entirely different rules than cell migration in three-dimensional (3D) environments. This is especially relevant for leukocytes that are able to migrate in the absence of adhesion receptors within the confined geometry of artificial 3D extracellular matrix scaffolds and within the interstitial space in vivo. Here, we describe in detail a simple and economical protocol to visualize dendritic cell migration in 3D collagen scaffolds along chemotactic gradients. This method can be adapted to other cell types and may serve as a physiologically relevant paradigm for the directed locomotion of most amoeboid cells.

  17. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  18. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, for decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, with imagery at 10cm, 20cm and 40cm GSD and aerial triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses aerial triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration thus support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities to deal with exotic conditions through better and more advanced viewing technology. Riyadh on one side is 5700m above sea level while on the other hand Abha city is at 2300m; this uneven terrain represents a drastic change of surface across the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
    In this research paper, the influence of different GSD (Ground Sample Distance) aerial imagery with aerial triangulation is examined for 3D visualization in different regions of the Kingdom, to check which scale is more suitable for obtaining better results and is cost-manageable, with GSD (7.5cm, 10cm, 20cm and 40cm

  19. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets the extracted hemodynamic information is transferred to the surface model where the time points of inflow can be visualized color coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary the procedure suggested allows a dynamic visualization of the individual hemodynamic situation and better understanding during the visual evaluation of cerebral vascular diseases.
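
    As a simpler stand-in for the curve-fitting step, the relative bolus arrival time of a voxel can be approximated as the lag that maximizes the cross-correlation between its temporal intensity curve and the patient-specific reference curve (an assumption for illustration; the paper fits the curves directly):

```python
import numpy as np

def bolus_delay(curve, reference, dt=1.0):
    """Relative bolus arrival time of a voxel curve, estimated as the
    integer lag maximizing its cross-correlation with the reference
    curve, scaled by the sampling interval dt."""
    c = (curve - curve.mean()) / curve.std()
    r = (reference - reference.mean()) / reference.std()
    xcorr = np.correlate(c, r, mode="full")     # lags -(N-1) .. N-1
    lag = int(xcorr.argmax()) - (len(reference) - 1)
    return lag * dt
```

    Standardizing both curves first makes the estimate insensitive to the overall intensity scale of each voxel.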

  20. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    EPA Science Inventory

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  1. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  2. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
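
    The ERD quantity itself is conventionally the percentage band-power change of a task interval relative to a pre-stimulus baseline, with negative values indicating desynchronization. A minimal sketch using FFT band power (the 10–12 Hz band follows the abstract; the windowing and implementation details are assumptions):

```python
import numpy as np

def erd_percent(signal, fs, baseline, task, band=(10.0, 12.0)):
    """Event-related desynchronization: percentage band-power change of
    the task interval relative to the baseline interval (sample-index
    pairs); negative values indicate desynchronization."""
    def bandpower(x):
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
        return psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
    p_base = bandpower(signal[slice(*baseline)])
    p_task = bandpower(signal[slice(*task)])
    return 100.0 * (p_task - p_base) / p_base
```

    Halving the amplitude of an 11 Hz rhythm during the task, for example, yields an ERD of -75%, since power scales with amplitude squared.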

  3. Scalable 3D GIS environment managed by 3D-XML-based modeling

    NASA Astrophysics Data System (ADS)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper outlines that storage and exchange of geospatial data between databases and different front ends like 3D models, GIS or internet browsers require a standardized format that is capable of representing instances of 3D GIS models, minimizing loss of information during data transfer and reducing interface development efforts. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. 3D data management based on XML is a solution meeting the requirements as stated, which can provide an efficient means for opening a new standard way to create an arbitrary data structure and share it over the Internet. To manage reality-based 3D models, this paper uses 3DXML produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure and graphical display properties. It can be read, written and enriched by standard tools, and allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.
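
    The idea of a lightweight XML exchange format for 3D data can be illustrated with a toy schema (this is not the actual 3DXML schema from Dassault Systemes, just a sketch of the serialization pattern):

```python
import xml.etree.ElementTree as ET

def model_to_xml(name, vertices):
    """Serialize a toy 3D model as XML: one Model element holding a
    Vertex element per (x, y, z) point (an illustrative schema only)."""
    root = ET.Element("Model", name=name)
    for x, y, z in vertices:
        ET.SubElement(root, "Vertex", x=str(x), y=str(y), z=str(z))
    return ET.tostring(root, encoding="unicode")
```

    Because the schema is plain XML, any standard parser can read it back and extensions can be added as new attributes or child elements without breaking existing readers.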

  4. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
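
    On a hypothetical grid representation of the visual field (sensitivity in dB at each test location), the defect-area and boundary-steepness statistics described above reduce to a threshold count and a gradient summary (names, units, and grid spacing here are illustrative assumptions, not the patented method):

```python
import numpy as np

def defect_area(field, threshold, cell_area=1.0):
    """Defect area from a 2D sensitivity grid (e.g. dB values): number
    of sub-threshold test locations times the area each represents."""
    return np.count_nonzero(np.asarray(field, float) < threshold) * cell_area

def boundary_steepness(field):
    """Mean gradient magnitude of the sensitivity map, a simple summary
    of how rapidly the field changes at defect boundaries."""
    gy, gx = np.gradient(np.asarray(field, float))
    return float(np.hypot(gx, gy).mean())
```

    A sharp-edged scotoma would raise the steepness summary while leaving the area measure unchanged.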

  5. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  6. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERP). An important question, whether such effects can also be found with passive polarized S3D displays, is answered here. Sixty-eight healthy participants are divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or a 2D display. The age and fluid intelligence ability of the participants are controlled between the groups. The ERP results do not show any significant differences between the S3D and 2D groups with respect to the aftereffects of S3D in terms of visual and mental fatigue. Hence, we conclude that passive polarized S3D display technology may not induce the visual and/or mental fatigue that would increase cognitive load and suppress ERP components.

  7. 3D Laser Triangulation for Plant Phenotyping in Challenging Environments

    PubMed Central

    Kjaer, Katrine Heinsvig; Ottosen, Carl-Otto

    2015-01-01

    To increase the understanding of how the plant phenotype is formed by genotype and environmental interactions, simple and robust high-throughput plant phenotyping methods should be developed and considered. This would not only broaden the application range of phenotyping in the plant research community, but also increase the ability of researchers to study plants in their natural environments. By studying plants in their natural environment at high temporal resolution, more knowledge on how multiple stresses interact in defining the plant phenotype could lead to a better understanding of the interaction between plant responses and epigenetic regulation. In the present paper, we evaluate a commercial 3D NIR-laser scanner (PlantEye, Phenospex B.V., Herleen, The Netherlands) to track daily changes in plant growth with high precision in challenging environments. Firstly, we demonstrate that the NIR laser beam of the scanner does not affect plant photosynthetic performance. Secondly, we demonstrate that it is possible to estimate phenotypic variation among the growth patterns of ten genotypes of Brassica napus L. (rapeseed), using a simple linear correlation between scanned parameters and destructive growth measurements. Our results demonstrate the high potential of 3D laser triangulation for simple measurements of phenotypic variation in challenging environments and at high temporal resolution. PMID:26066990

  8. 3D Laser Triangulation for Plant Phenotyping in Challenging Environments.

    PubMed

    Kjaer, Katrine Heinsvig; Ottosen, Carl-Otto

    2015-06-09

    To increase the understanding of how the plant phenotype is formed by genotype and environmental interactions, simple and robust high-throughput plant phenotyping methods should be developed and considered. This would not only broaden the application range of phenotyping in the plant research community, but also increase the ability of researchers to study plants in their natural environments. By studying plants in their natural environment at high temporal resolution, more knowledge on how multiple stresses interact in defining the plant phenotype could lead to a better understanding of the interaction between plant responses and epigenetic regulation. In the present paper, we evaluate a commercial 3D NIR-laser scanner (PlantEye, Phenospex B.V., Herleen, The Netherlands) to track daily changes in plant growth with high precision in challenging environments. Firstly, we demonstrate that the NIR laser beam of the scanner does not affect plant photosynthetic performance. Secondly, we demonstrate that it is possible to estimate phenotypic variation among the growth patterns of ten genotypes of Brassica napus L. (rapeseed), using a simple linear correlation between scanned parameters and destructive growth measurements. Our results demonstrate the high potential of 3D laser triangulation for simple measurements of phenotypic variation in challenging environments and at high temporal resolution.
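
    The calibration step, relating a scanned parameter to a destructively measured trait through a simple linear correlation, can be sketched as an ordinary least-squares line plus a Pearson coefficient (variable names are illustrative):

```python
import numpy as np

def calibrate(scanned, measured):
    """Least-squares line mapping a scanned parameter (e.g. 3D leaf
    area) to a destructively measured trait (e.g. dry weight); returns
    (slope, intercept, Pearson r)."""
    slope, intercept = np.polyfit(scanned, measured, 1)
    r = np.corrcoef(scanned, measured)[0, 1]
    return slope, intercept, r
```

    A high r justifies substituting the non-destructive scan for the destructive measurement in subsequent daily tracking.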

  9. Modeling Extracellular Matrix Reorganization in 3D Environments

    PubMed Central

    Harjanto, Dewi; Zaman, Muhammad H.

    2013-01-01

    Extracellular matrix (ECM) remodeling is a key physiological process that occurs in a number of contexts, including cell migration, and is especially important for cellular form and function in three-dimensional (3D) matrices. However, there have been few attempts to computationally model how cells modify their environment in a manner that accounts for both cellular properties and the architecture of the surrounding ECM. To this end, we have developed and validated a novel model to simulate matrix remodeling that explicitly defines cells in a 3D collagenous matrix. In our simulation, cells can degrade, deposit, or pull on local fibers, depending on the fiber density around each cell. The cells can also move within the 3D matrix. Different cell phenotypes can be modeled by varying key cellular parameters. Using the model, we studied how two model cancer cell lines of differing invasiveness modify matrices with varying fiber density in their vicinity, by tracking the fraction of the matrix occupied by fibers. Our results quantitatively demonstrate that in low-density environments, cells deposit more collagen to uniformly increase the fibril fraction. On the other hand, in higher-density environments, the less invasive model cell line reduced the fibril fraction as compared to the highly invasive phenotype. These results show good qualitative and quantitative agreement with existing experimental literature. Our simulation is therefore able to function as a novel platform to provide new insights into the clinically relevant and physiologically critical process of matrix remodeling by helping identify critical parameters that dictate cellular behavior in complex native-like environments. PMID:23341900
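
    The density-dependent remodeling rule described above (deposit in sparse matrix, degrade in dense matrix) can be sketched as a single per-cell update step. The thresholds and rates below are illustrative assumptions, not the paper's parameters:

```python
def remodel_step(fibril_fraction, low=0.2, high=0.6,
                 deposit=0.01, degrade=0.01):
    """One illustrative remodeling update for a single cell's neighborhood.

    Below `low` fibril fraction the cell deposits matrix; above `high`
    it degrades matrix; in between the local density is left unchanged.
    Thresholds and rates are hypothetical, not taken from the paper.
    """
    if fibril_fraction < low:
        return min(1.0, fibril_fraction + deposit)
    if fibril_fraction > high:
        return max(0.0, fibril_fraction - degrade)
    return fibril_fraction
```

    Iterating such a rule over every cell and time step is what drives the fibril fraction toward different equilibria in low- and high-density matrices.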

  10. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. Steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned retarder 3D display. The results show that there is a significant difference (p-value<0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. The 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses, and modify the proper disparity of 3D images automatically in the future.
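
    The SNR metric used to compare disparity conditions can be illustrated with one common SSVEP definition: power at the stimulus frequency divided by the mean power of neighboring frequency bins. A self-contained sketch on a synthetic signal (the frequencies, sampling rate, and neighbor count are illustrative, not the study's settings):

```python
import math

def dft_power(signal, k):
    """Power of DFT bin k for a real-valued signal."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return re * re + im * im

def ssvep_snr(signal, fs, f_target, n_neighbors=3):
    """SNR: power at the stimulus frequency over the mean power of the
    neighboring frequency bins (one common SSVEP definition)."""
    n = len(signal)
    k = round(f_target * n / fs)  # DFT bin of the stimulus frequency
    neighbors = [k + d for d in range(-n_neighbors, n_neighbors + 1) if d != 0]
    noise = sum(dft_power(signal, j) for j in neighbors) / len(neighbors)
    return dft_power(signal, k) / noise

# Synthetic 10 Hz SSVEP-like signal (plus a weak off-bin component so the
# neighbor bins are not empty), sampled at 256 Hz for 1 s
fs, f0 = 256, 10
sig = [math.sin(2 * math.pi * f0 * i / fs)
       + 0.05 * math.sin(2 * math.pi * 8.3 * i / fs) for i in range(fs)]
snr = ssvep_snr(sig, fs, f0)
```

    A condition that yields higher SNR at the stimulus frequency is easier for the BCI classifier to detect, which is how the disparity conditions were compared.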

  11. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  12. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  13. Visualization of gravitational potential wells using 3D printing technology

    NASA Astrophysics Data System (ADS)

    Su, Jun; Wang, Weiguo; Lu, Meishu; Xu, Xinran; Yan, Qi Fan; Lu, Jianlong

    2016-12-01

    There have been many studies of the dynamics of a ball rolling on different types of surfaces. Most of these studies have been theoretical, with only a few experimental. We have found that 3D printing offers a novel experimental approach to investigating this topic. In this paper, we use a 3D printer to create four different surfaces and experimentally investigate the dynamics of a ball rolling on these surfaces. Our results are then compared to theory.

  14. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

    [Pricing/feature comparison table garbled in extraction.] …full, high-definition (HD) stereoscopic 3D display. It works by synching LCD wireless active shutter glasses through an IR emitter and advanced…software, to a Samsung SyncMaster 2233RZ, 120 Hz, LCD display that provides the full HD stereoscopic 3D. Subjects will view the display, seated at the…

  15. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome.

    PubMed

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions.

  16. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformation specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make the data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions, in teaching as well as research activities.
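
    The core operation described above, applying a deformation tensor to the vertices of a triangulated model, can be sketched as follows. The simple-shear tensor and the sample vertices are illustrative, not drawn from Tensor3D itself:

```python
def deform(points, F):
    """Apply a 3x3 deformation tensor F to each 3D point: x' = F x."""
    return [tuple(sum(F[r][c] * p[c] for c in range(3)) for r in range(3))
            for p in points]

# Simple shear in the x-direction with shear strain gamma = 0.5
gamma = 0.5
F = [[1.0, gamma, 0.0],
     [0.0, 1.0,   0.0],
     [0.0, 0.0,   1.0]]

# Top edge of a unit square: each vertex shifts in x by gamma * y
top_edge = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
sheared = deform(top_edge, F)
```

    Applying the same tensor to a unit sphere's vertices would trace out the strain ellipsoid that the program displays alongside the deformed body.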

  17. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated contemporaneous views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who get a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The software applications developed allow real-time interaction with the visualized 3D models; didactic animations and movies have been produced as well.

  18. Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.

    2016-06-01

    Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of the studies. The main objective of this paper is to try to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. The movement and actions in the virtual environment were also recorded in the interactive variant. The results show that participants dealt with the tasks faster when using the static visualization. The average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  19. Texture mapping 3D models of indoor environments with noisy camera poses

    NASA Astrophysics Data System (ADS)

    Cheng, Peter; Anderson, Michael; He, Stewart; Zakhor, Avideh

    2013-03-01

    Automated 3D modeling of building interiors is used in applications such as virtual reality and environment mapping. Texturing these models allows for photo-realistic visualizations of the data collected by such modeling systems. While data acquisition times for mobile mapping systems are considerably shorter than for static ones, their recovered camera poses often suffer from inaccuracies, resulting in visible discontinuities when successive images are projected onto a surface for texturing. We present a method for texture mapping models of indoor environments that starts by selecting images whose camera poses are well-aligned in two dimensions. We then align images to geometry as well as to each other, producing visually consistent textures even in the presence of inaccurate surface geometry and noisy camera poses. Images are then composited into a final texture mosaic and projected onto surface geometry for visualization. The effectiveness of the proposed method is demonstrated on a number of different indoor environments.

  20. 3D visualization of membrane failures in fuel cells

    NASA Astrophysics Data System (ADS)

    Singh, Yadvinder; Orfino, Francesco P.; Dutta, Monica; Kjeang, Erik

    2017-03-01

    Durability issues in fuel cells, due to chemical and mechanical degradation, are potential impediments in their commercialization. Hydrogen leak development across degraded fuel cell membranes is deemed a lifetime-limiting failure mode and potential safety issue that requires thorough characterization for devising effective mitigation strategies. The scope and depth of failure analysis has, however, been limited by the 2D nature of conventional imaging. In the present work, X-ray computed tomography is introduced as a novel, non-destructive technique for 3D failure analysis. Its capability to acquire true 3D images of membrane damage is demonstrated for the very first time. This approach has enabled unique and in-depth analysis resulting in novel findings regarding the membrane degradation mechanism; these are: significant, exclusive membrane fracture development independent of catalyst layers, localized thinning at crack sites, and demonstration of the critical impact of cracks on fuel cell durability. Evidence of crack initiation within the membrane is demonstrated, and a possible new failure mode different from typical mechanical crack development is identified. X-ray computed tomography is hereby established as a breakthrough approach for comprehensive 3D characterization and reliable failure analysis of fuel cell membranes, and could readily be extended to electrolyzers and flow batteries having similar structure.

  1. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

    Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as intended, and the observed images tend to get distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera photographs it twice from different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slanted angle, gap or thickness, offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about half an order of magnitude more accurate than prior work; is efficient, spending less than 2 s on computation; and is robust to noise, working well in SNR regimes as low as 6 dB.
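
    The frequency-domain side of such an analysis can be illustrated by recovering the period of a synthetic 1D periodic pattern from its dominant DFT bin; the 8-pixel pitch below is a made-up stand-in for a real lenticular parameter, not a value from the paper:

```python
import math

def dominant_period(samples):
    """Period (in samples) of the strongest frequency component,
    found by scanning DFT bins 1..N/2 for the peak power."""
    n = len(samples)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return n / best_k

# A 1D slice of a hypothetical periodic calibration pattern, pitch 8 pixels
row = [math.cos(2 * math.pi * i / 8) for i in range(64)]
pitch = dominant_period(row)
```

    Working with the peak location (and phase) in the frequency domain rather than with individual pixels is what makes such estimates robust to noise.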

  2. Disentangling the intragroup HI in Compact Groups of galaxies by means of X3D visualization

    NASA Astrophysics Data System (ADS)

    Verdes-Montenegro, Lourdes; Vogt, Frederic; Aubery, Claire; Duret, Laetitie; Garrido, Julián; Sánchez, Susana; Yun, Min S.; Borthakur, Sanchayeeta; Hess, Kelley; Cluver, Michelle; Del Olmo, Ascensión; Perea, Jaime

    2017-03-01

    As an extreme kind of environment, Hickson Compact Groups (HCGs) have proven to be very complex systems. HI-VLA observations revealed an intricate network of HI tails and bridges, tracing pre-processing through extreme tidal interactions. We found HCGs to show a large HI deficiency, supporting an evolutionary sequence in which gas-rich groups transform via tidal interactions and ISM (interstellar medium) stripping into gas-poor systems. We also detected a diffuse HI component in the groups, increasing with evolutionary phase, although with uncertain distribution. The complex network of HI detected with the VLA hence seems as puzzling as the missing component. In this talk we revisit the existing VLA information on the HI distribution and kinematics of HCGs by means of X3D visualization. X3D constitutes a powerful tool for extracting the most from HI data cubes and a means of simplifying and easing the access to data visualization and publication via three-dimensional (3-D) diagrams.

  3. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  4. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    …LCD shutter glasses, costs about $2,000. This project aims to study 3D display technology and 3D data visualization techniques…consumption, combined with competition, will continue to drive down prices and improve the performance of 3D technology. This project aims to…work, the technologies are ranked as follows: LCD shutter glasses are the best, followed by stereograms with

  5. ProSAT+: visualizing sequence annotations on 3D structure.

    PubMed

    Stank, Antonia; Richter, Stefan; Wade, Rebecca C

    2016-08-01

    PROtein Structure Annotation Tool-plus (ProSAT+) is a new web server for mapping protein sequence annotations onto a protein structure and visualizing them simultaneously with the structure. ProSAT+ incorporates many of the features of the preceding ProSAT and ProSAT2 tools but also provides new options for the visualization and sharing of protein annotations. Data are extracted from the UniProt KnowledgeBase, the RCSB PDB and the PDBe SIFTS resource, and visualization is performed using JSmol. User-defined sequence annotations can be added directly to the URL, thus enabling visualization and easy data sharing. ProSAT+ is available at http://prosat.h-its.org.

  6. Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology

    PubMed Central

    Fai, Stephen; Bennett, Steffany A.L.

    2010-01-01

    The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to reiterate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic-editing and 3D-modelling software rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere. This

  7. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation of perceptual priming of the shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM stimulus that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  8. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
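
    The global stage of such depth remapping can be sketched as a linear remap of the source depth range into a target comfort range; the ranges below are illustrative placeholders, not the paper's optimized values:

```python
def remap_depth(depth, comfort_min, comfort_max):
    """Linearly remap a depth map (flat list of values) into a target
    comfort range, preserving relative depth ordering."""
    d_min, d_max = min(depth), max(depth)
    span = d_max - d_min
    if span == 0:
        # Flat depth map: place everything at the middle of the range
        return [(comfort_min + comfort_max) / 2.0] * len(depth)
    scale = (comfort_max - comfort_min) / span
    return [comfort_min + (d - d_min) * scale for d in depth]

# Hypothetical 8-bit depth values remapped to a +/-10-pixel disparity range
depths = [0.0, 50.0, 100.0, 255.0]
remapped = remap_depth(depths, -10.0, 10.0)
```

    The paper goes further by adjusting the zero-disparity plane and refining the map locally, but the global step is essentially a range remap of this kind.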

  9. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    SciTech Connect

    Horwedel, J.E.; Bowman, S.M.

    2000-06-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system.

  10. New software for visualizing 3D geological data in coal mines

    NASA Astrophysics Data System (ADS)

    Lee, Sungjae; Choi, Yosoon

    2015-04-01

    This study developed new software to visualize 3D geological data in coal mines. The Visualization Tool Kit (VTK) library and Visual Basic.NET 2010 were used to implement the software. The software consists of several modules providing the following functionalities: (1) importing and editing borehole data; (2) modelling coal seams in 3D; (3) modelling coal properties using the 3D ordinary Kriging method; (4) calculating the economic values of 3D blocks; (5) pit boundary optimization to identify economical coal reserves based on the Lerchs-Grossmann algorithm; and (6) visualizing 3D geological, geometrical and economic data. Application of the software to a small-scale open-pit coal mine in Indonesia revealed that it can provide useful information to support the planning and design of open-pit coal mines.
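
    The ordinary Kriging step in module (3) amounts to solving the standard Kriging system for interpolation weights at each block. A minimal sketch follows; the exponential covariance model, its sill/range parameters, and the borehole values are illustrative assumptions, not the software's settings:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(samples, target, sill=1.0, rng=50.0):
    """Estimate the value at `target` from (point, value) samples via
    ordinary Kriging with an exponential covariance model."""
    def cov(p, q):
        h = math.dist(p, q)
        return sill * math.exp(-3.0 * h / rng)
    n = len(samples)
    # Ordinary Kriging system: [C 1; 1^T 0] [w; mu] = [c0; 1]
    A = [[cov(samples[i][0], samples[j][0]) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(s[0], target) for s in samples] + [1.0]
    w = solve(A, b)
    return sum(w[i] * samples[i][1] for i in range(n))

# Hypothetical borehole samples: ((x, y, z), coal seam thickness in m)
samples = [((0.0, 0.0, 0.0), 2.0), ((30.0, 0.0, 0.0), 2.0),
           ((0.0, 40.0, 0.0), 2.0)]
est = ordinary_kriging(samples, (10.0, 10.0, 0.0))
```

    With all sample values equal, the unit-sum constraint on the weights forces the estimate to reproduce that value, a quick sanity check on the solver.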

  11. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example, visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as other hardware for digital video and film recording.

  12. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.
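
    The local tiling idea, converting a time-dependent 2D vortex core into a 3D tube, can be sketched by turning each core position into a ring of vertices with time as the third axis; connecting corresponding vertices of consecutive rings then yields the tube surface. The core track, radius, and ring resolution below are hypothetical, not taken from the paper:

```python
import math

def core_to_tube(cores, radius=0.1, sides=8):
    """Tile a time sequence of 2D vortex-core positions (x, y) into a 3D
    tube: time becomes the third axis, and each core point becomes a ring
    of `sides` vertices of the given radius around the core axis."""
    rings = []
    for t, (x, y) in enumerate(cores):
        ring = []
        for s in range(sides):
            a = 2 * math.pi * s / sides
            # Ring lies in the x-y plane at "depth" t along the time axis
            ring.append((x + radius * math.cos(a),
                         y + radius * math.sin(a),
                         float(t)))
        rings.append(ring)
    return rings

# Hypothetical core track over three time steps
cores = [(0.0, 0.0), (0.1, 0.05), (0.25, 0.12)]
tube = core_to_tube(cores)
```

    Rendering quads between ring i and ring i+1 produces the vortex-tube surface described in the abstract.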

  13. Three-dimensional (3D) shadowgraph technique visualizes thermal convection

    NASA Astrophysics Data System (ADS)

    Huang, Jinzi; Zhang, Jun; Physics; Maths Research Institutes, NYU Shanghai Team; Applied Maths Lab, NYU Team

    2016-11-01

    The shadowgraph technique has been widely used in thermal convection, and in other types of convection and advection processes in fluids. The technique reveals minute density differences in the fluid, which is otherwise transparent to the eye and to light-sensitive devices. However, the technique normally integrates the fluid information along the depth of view and collapses the 3D density field onto a 2D plane. In this work, we introduce a stereoscopic shadowgraph technique that preserves the information of fluid depth by using two cross-field shadowgraphs. The two shadowgraphs are coded in different, complementary colors, and each is seen by only one eye of the viewer. The two shadowgraphs can also be temporally modulated to achieve the same stereoscopic vision of the convective fluid. We further discuss ways to use this technique to extract useful information for research in fluids.
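
    Color-coding the two cross-field shadowgraphs into complementary channels amounts to building an anaglyph: one view drives the red channel and the other the cyan (green + blue) channels. A minimal sketch with tiny made-up grayscale frames:

```python
def anaglyph(left, right):
    """Combine two grayscale images (nested lists, values 0-255) into a
    red-cyan anaglyph: the left view feeds the red channel, the right
    view feeds the green and blue channels."""
    return [[(l, r, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

# Tiny hypothetical 2x2 shadowgraph frames
left = [[0, 128], [255, 64]]
right = [[10, 200], [30, 90]]
rgb = anaglyph(left, right)
```

    Viewed through red-cyan glasses, each eye then sees only its own shadowgraph, which is what restores the depth cue the single projection loses.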

  14. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortion has often not been discussed. However, visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue in shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method that maps panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. The panoramic image is then mapped onto the 3-D surface by the Hammer-Aitoff equal-area projection using texture mapping. Finally, the textured bladder model can be moved freely in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
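    The Hammer-Aitoff projection used for the texture mapping is a standard equal-area map projection; its forward form can be sketched as follows (a generic textbook formulation, not the authors' code):

```python
import math

def hammer(lon, lat):
    """Forward Hammer (Hammer-Aitoff) equal-area projection.
    lon in [-pi, pi], lat in [-pi/2, pi/2] -> planar map coords."""
    d = math.sqrt(1.0 + math.cos(lat) * math.cos(lon / 2.0))
    x = 2.0 * math.sqrt(2.0) * math.cos(lat) * math.sin(lon / 2.0) / d
    y = math.sqrt(2.0) * math.sin(lat) / d
    return x, y

print(hammer(0.0, 0.0))  # (0.0, 0.0) at the map centre
```

    Inverting this mapping gives, for each panorama pixel, a longitude/latitude on the spherical bladder model, which is how an equal-area texture lookup can be realized.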

  15. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures. Most high-quality protein visualization tools use the OpenGL graphics library as their 3D graphics system. Meanwhile, the performance of 3D graphics hardware has rapidly improved; recent high-performance hardware supports the Microsoft Direct3D graphics library better than OpenGL and has become very popular in personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system built on the Microsoft Direct3D graphics library. It provides various visualization styles, such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. Because ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.

  16. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities for discovering and analyzing subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, for enabling collaborative work, the source code must be open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  17. New techniques in 3D scalar and vector field visualization

    SciTech Connect

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.
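    Back-to-front compositing, common to all of these techniques, applies the "over" operator from the farthest layer forward: the accumulated color is replaced by C = c*a + C*(1-a) at each layer. A minimal scalar sketch with illustrative values:

```python
def composite_back_to_front(layers):
    """Blend (color, alpha) layers ordered back to front with the
    'over' operator: C = c*a + C*(1-a) applied per layer."""
    color = 0.0
    for c, a in layers:          # back first, front last
        color = c * a + color * (1.0 - a)
    return color

# a fully opaque front layer hides everything behind it
print(composite_back_to_front([(0.2, 0.5), (0.9, 1.0)]))  # 0.9
```

    In practice the same recurrence runs per channel on RGBA fragments sorted by depth.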

  18. Interactive Visualization of 3D Medical Data. Revision

    DTIC Science & Technology

    1989-04-01

    difficult and error-prone. It has long been recognized that computer-generated imagery might be an effective means for presenting three-dimensional... (in IEEE Computer, August 1989). RENDERING TECHNIQUES: Three... interactive setting. Initial visualizations made without the benefit of object definition would be used to guide scene analysis and segmentation algorithms

  19. Advanced Visualization and Analysis of Climate Data using DV3D and UV-CDAT

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.

    2012-12-01

    This paper describes DV3D, a Vistrails package of high-level modules for the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) interactive visual exploration system that enables exploratory analysis of diverse and rich data sets stored in the Earth System Grid Federation (ESGF). DV3D provides user-friendly workflow interfaces for advanced visualization and analysis of climate data at a level appropriate for scientists. The application builds on VTK, an open-source, object-oriented library, for visualization and analysis. DV3D provides the high-level interfaces, tools, and application integrations required to make the analysis and visualization power of VTK readily accessible to users without exposing burdensome details such as actors, cameras, renderers, and transfer functions. It can run as a desktop application or distributed over a set of nodes for hyperwall or distributed visualization applications. DV3D is structured as a set of modules which can be linked to create workflows in Vistrails. Figure 1 displays a typical DV3D workflow as it would appear in the Vistrails workflow builder interface of UV-CDAT and, on the right, the visualization spreadsheet output of the workflow. Each DV3D module encapsulates a complex VTK pipeline with numerous supporting objects. Each visualization module implements a unique interactive 3D display. The integrated Vistrails visualization spreadsheet offers multiple synchronized visualization displays for desktop or hyperwall. The currently available displays include volume renderers, volume slicers, 3D isosurfaces, 3D hovmoller, and various vector plots. The DV3D GUI offers a rich selection of interactive query, browse, navigate, and configure options for all displays. All configuration operations are saved as Vistrails provenance. DV3D's seamless integration with UV-CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of climate data analysis operations, e

  20. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    DTIC Science & Technology

    2006-10-01

    immersion environment with new displays and multi-sensory interaction, using concepts such as deliberate synesthesia, will enhance the ability for... transformations for deliberate synesthesia. Deliberate Synesthesia • Sonification • Visification. Advanced 3-D visualization and role-playing game (RPG)... by exploring concepts such as multi-sensory interaction, dynamic computer-guided focus of attention, deliberate synesthesia, utilization of

  1. The Impact of Co-Presence and Visual Elements in 3D VLEs on Interpersonal Emotional Connection in Telecollaboration

    ERIC Educational Resources Information Center

    Matsui, Hisae

    2014-01-01

    The purpose of this study is to examine participant's perception of the usefulness of the visual elements in 3D Virtual Learning Environments, which represent co-presence, in developing interpersonal emotional connections with their partners in the initial stage of telecollaboration. To fulfill the purpose, two Japanese students and two American…

  2. vPresent: A cloud based 3D virtual presentation environment for interactive product customization

    NASA Astrophysics Data System (ADS)

    Nan, Xiaoming; Guo, Fei; He, Yifeng; Guan, Ling

    2013-09-01

    In modern society, many companies offer product customization services to their customers. There are two major issues in providing customized products. First, product manufacturers need to effectively present their products to customers who may be located in any geographical area. Second, customers need to be able to provide feedback on the product in real time. However, traditional presentation approaches cannot effectively convey sufficient information about the product or efficiently adjust the product design according to customers' real-time feedback. In order to address these issues, we propose vPresent, a cloud-based 3D virtual presentation environment, in this paper. In vPresent, a product expert can show the 3D virtual product to remote customers and dynamically customize the product based on their feedback, while customers can provide their opinions in real time as they view a vivid 3D visualization of the product. Since the proposed vPresent is a cloud-based system, customers are able to access the customized virtual products from anywhere at any time, via desktop, laptop, or even smartphone. The proposed vPresent is expected to effectively deliver 3D visual information to customers and provide an interactive design platform for the development of customized products.

  3. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  4. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: draws the 3-D representation of a molecule from Cartesian coordinates, using a stick, ball-and-stick, or space-filled model; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about an arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.
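    Rotation about an arbitrary line in space, as offered by MOLDS, reduces to rotating each atom's coordinates about a unit axis. A sketch using Rodrigues' rotation formula (a standard technique, not the original MOLDS code):

```python
import math

def rotate(p, axis, theta):
    """Rotate point p about a unit axis through the origin by
    angle theta, using Rodrigues' rotation formula:
    p' = p*cos(t) + (axis x p)*sin(t) + axis*(axis.p)*(1-cos(t))."""
    ux, uy, uz = axis
    px, py, pz = p
    c, s = math.cos(theta), math.sin(theta)
    dot = ux * px + uy * py + uz * pz
    cross = (uy * pz - uz * py, uz * px - ux * pz, ux * py - uy * px)
    return tuple(pi * c + cxi * s + ui * dot * (1 - c)
                 for pi, cxi, ui in zip(p, cross, axis))

# a quarter turn about the z axis sends the x axis to the y axis
print([round(v, 6) for v in rotate((1, 0, 0), (0, 0, 1), math.pi / 2)])
# [0.0, 1.0, 0.0]
```

    Rotation about a line not through the origin adds a translation to the line, this rotation, and the inverse translation.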

  5. Recent improvements in SPE3D: a VR-based surgery planning environment

    NASA Astrophysics Data System (ADS)

    Witkowski, Marcin; Sitnik, Robert; Verdonschot, Nico

    2014-02-01

    SPE3D is a surgery planning environment developed within the TLEMsafe project [1] (funded by the European Commission FP7). It enables the operator to plan a surgical procedure on a customized musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the biomechanical analysis module, and export the scenario's parameters to the surgical navigation system. The personalized, patient-specific three-dimensional (3-D) MS model is registered with a 3-D MRI dataset of the lower limbs, and the two modalities may be visualized simultaneously. Apart from the main planes, any arbitrary MRI cross-section can be rendered on the 3-D MS model in real time. The interface provides tools for bone cutting, manipulation and removal, repositioning muscle insertion points, modifying muscle force, removing muscles, and placing implants stored in the implant library. SPE3D supports stereoscopic viewing as well as natural inspection/manipulation using haptic devices. Alternatively, it may be controlled with a standard computer keyboard, mouse and 2D display, or a touch screen (e.g. in an operating room). The interface may be utilized in two main fields: experienced surgeons may use it to simulate their operative plans and prepare input data for a surgical navigation system, while students or novice surgeons can use it for training.

  6. Visual search in virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.; Ezumi, Koji; Nguyen, Tho; Paul, R.; Tharp, Gregory K.; Yamashita, H. I.

    1992-08-01

    A key task in virtual environments is visual search. To obtain quantitative measures of human performance and documentation of visual search strategies, we have used three experimental arrangements--eye, head, and mouse control of viewing windows--by exploiting various combinations of helmet-mounted displays, graphics workstations, and eye movement tracking facilities. We contrast two different categories of viewing strategies: one for 2D pictures with large numbers of targets and clutter scattered randomly; the other for quasi-natural 3D scenes with targets and non-targets placed in realistic, sensible positions. Different searching behaviors emerge from these contrasting search conditions, reflecting different visual and perceptual modes. A regular 'searchpattern' is a systematic, repetitive, idiosyncratic sequence of movements carrying the eye over the entire 2D scene. Irregular 'searchpatterns' take advantage of wide windows and the wide human visual lobe; here, hierarchical detection and recognition are performed with the appropriate capabilities of the 'two visual systems'. The 'searchpath', also efficient, repetitive and idiosyncratic, provides only a small set of fixations to continually check the smaller number of targets in the naturalistic 3D scene; likely, searchpaths are driven by top-down spatial models. If the viewed object is known and can be named, then a hypothesized, top-down cognitive model drives active looking in the 'scanpath' mode, again continually checking important subfeatures of the object. Spatial models for searchpaths may be primitive predecessors, in the evolutionary history of animals, of cognitive models for scanpaths.

  7. 3D shape modeling by integration visual and tactile cues

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2015-10-01

    With the progress in CAD (Computer Aided Design) systems, many mechanical components can be designed efficiently and with high precision. But such systems are unfit for some organic shapes, for example, a toy. In this paper, an easy way of dealing with such shapes is presented, combining visual perception with tangible interaction. The method is divided into three phases: two tangible interaction phases and one visual reconstruction phase. In the first tangible phase, a clay model represents the raw shape, and the designer can change the shape intuitively with his hands. The raw shape is then scanned into a digital volume model through a low-cost vision system. In the last tangible phase, a desktop haptic device from SensAble is used to refine the scanned volume model and convert it into a surface model. A physical clay model and a virtual clay model are used to handle the main shape and the details, respectively, and the vision system bridges the two tangible phases. The vision reconstruction system consists only of a camera, which acquires the raw shape through a shape-from-silhouettes method. The entire system is installed on a single desktop, making it convenient for designers. The vision system details and a design example are presented in the paper.
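    Shape-from-silhouettes reconstruction can be sketched as voxel carving: a voxel survives only if its projection falls inside every silhouette. A toy orthographic example (all names, projections, and data are hypothetical illustrations, not the paper's system):

```python
def carve(voxels, silhouettes):
    """Shape from silhouettes by voxel carving: a voxel survives
    only if its projection lies inside every silhouette.
    Each silhouette is (project_fn, set_of_inside_pixels)."""
    return [v for v in voxels
            if all(proj(v) in inside for proj, inside in silhouettes)]

# 2x2x2 voxel grid, two orthographic views along z and x
voxels = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
top  = (lambda v: (v[0], v[1]), {(0, 0), (1, 0)})   # seen from +z
side = (lambda v: (v[1], v[2]), {(0, 0)})           # seen from +x
print(carve(voxels, [top, side]))  # [(0, 0, 0), (1, 0, 0)]
```

    A real camera setup replaces the orthographic lambdas with perspective projections from calibrated viewpoints.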

  8. ETeach3D: Designing a 3D Virtual Environment for Evaluating the Digital Competence of Preservice Teachers

    ERIC Educational Resources Information Center

    Esteve-Mon, Francesc M.; Cela-Ranilla, Jose María; Gisbert-Cervera, Mercè

    2016-01-01

    The acquisition of teacher digital competence is a key aspect in the initial training of teachers. However, most existing evaluation instruments do not provide sufficient evidence of this teaching competence. In this study, we describe the design and development process of a three-dimensional (3D) virtual environment for evaluating the teacher…

  9. 3-D AE visualization of bone-cement fatigue locations.

    PubMed

    Qi, G; Pujol, J; Fan, Z

    2000-11-01

    This study addresses the visualization of crack locations in bone-cement material using a three-dimensional acoustic emission (AE) source location technique. Computer software based on an earthquake location technique was developed to determine AE source locations and was used to investigate material cracks formed at the tip of a notch in bone cement. The computed locations show that the cracks form linear features with dimensions between 0.1 and 0.2 mm, although larger linear features (almost 3.5 mm) are also present. There is a difference of about 2.5 mm between the average of the event locations and the location of the tip of the notch, which may be due to the finite size of the sensors (5 mm in diameter).

  10. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  11. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-06-18

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions, which do not affect the robot's movement, from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. A multiple relevance vector machine (RVM) classifier is then designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient.

  12. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions, which do not affect the robot's movement, from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. A multiple relevance vector machine (RVM) classifier is then designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  13. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  14. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  15. Sector mapping method for 3D detached retina visualization.

    PubMed

    Zhai, Yi-Ran; Zhao, Yong; Zhong, Jie; Li, Ke; Lu, Cui-Xin; Zhang, Bing

    2016-10-01

    A new sphere-mapping algorithm called sector mapping is introduced to map sector images onto the sphere of an eyeball. The proposed sector-mapping algorithm is evaluated and compared with the plane-mapping algorithm adopted in previous work. The two mapping algorithms were compared using a simulation that maps an image of concentric circles onto the sphere of the eyeball, and an analysis of the difference in distance between neighboring points in a plane and in a sector. A three-dimensional model of a whole retina with clear retinal detachment was generated using the Visualization Toolkit software. A comparison of the mapping results shows that when the plane-mapping algorithm is used, the central part of the retina near the optic disc is stretched and its edges are compressed. The sector-mapping algorithm gives a better mapping result than the plane-mapping algorithm in both the simulation and a real clinical three-dimensional reconstruction of retinal detachment.
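    The core idea of a sector-style mapping, assigning the radial image coordinate directly to the polar angle so that equal radial steps stay equal arcs on the sphere, can be illustrated as follows. This is a minimal sketch of the idea with illustrative parameters, not the authors' algorithm:

```python
import math

def sector_to_sphere(r_frac, azimuth, R=1.0, polar_max=math.pi / 2):
    """Map a sector-image point (radial fraction in [0, 1], azimuth)
    onto a hemisphere of radius R: the radial coordinate becomes the
    polar angle, so equal radial steps map to equal arc lengths."""
    polar = r_frac * polar_max
    return (R * math.sin(polar) * math.cos(azimuth),
            R * math.sin(polar) * math.sin(azimuth),
            R * math.cos(polar))

print(sector_to_sphere(0.0, 0.0))  # (0.0, 0.0, 1.0) -> the pole
```

    A plane mapping, by contrast, projects the flat image onto the sphere directly, which stretches the centre and compresses the rim.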

  16. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements, with ground-truth validation, for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.
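    Mutual information, the similarity measure driving this kind of registration, can be estimated from a joint intensity histogram of the two images. A small pure-Python sketch (the binning scheme and data are illustrative, not ITK's implementation):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Mutual information of two equally sized images (flat lists of
    intensities in [0, 1)), estimated from a joint histogram:
    MI = sum p(i,j) * log( p(i,j) / (p(i) * p(j)) )."""
    qa = [min(int(v * bins), bins - 1) for v in a]  # quantize image a
    qb = [min(int(v * bins), bins - 1) for v in b]  # quantize image b
    n = float(len(a))
    pj = Counter(zip(qa, qb))          # joint histogram
    pa, pb = Counter(qa), Counter(qb)  # marginal histograms
    return sum((c / n) * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in pj.items())

img = [0.1, 0.3, 0.6, 0.9] * 4
# an image is maximally informative about itself: MI = H = log 4 here
print(round(mutual_information(img, img), 3))  # 1.386
```

    A registration optimizer moves one image's transform to maximize this quantity against the fixed image.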

  17. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module, and an interaction management module. VV-Ocean has three core functions: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of an oil spill from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform, the oil spilling process can be abstracted as the movement of abundant oil particles. The result shows that the oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.

  18. A MATLAB function for 3-D and 4-D topographical visualization in geosciences

    NASA Astrophysics Data System (ADS)

    Zekollari, Harry

    2016-04-01

    Combining topographical information and spatially varying variables in visualizations is often crucial and inherent to geoscientific problems. Despite this, creating such figures with classic software packages is often impossible, or at best very time-consuming and difficult. This is also the case in the widely used numerical computing environment MATLAB. Here a MATLAB function is introduced for plotting a variety of natural environments with a pronounced topography, such as glaciers, volcanoes, and lakes in mountainous regions. Landscapes can be visualized in 3-D, with a single colour defining a featured surface type (e.g. ice, snow, water, lava), or with a colour scale defining the magnitude of a variable (e.g. ice thickness, snow depth, water depth, surface velocity, gradient, elevation). As input, only the elevation of the subsurface (typically the bedrock) and the surface are needed; these can be complemented by various input parameters to adapt the figure to specific needs. The figures are particularly suited to making time-evolving animations of natural processes, such as a glacier retreat or a lake drainage event. Several visualization examples are provided alongside animations. The function, which is freely available for download, requires only the basic package of MATLAB and can be run on any standard stationary or portable personal computer.
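    Since the function takes only subsurface and surface elevations as input, a colour-mapped variable such as ice thickness is simply their clamped difference. A sketch of that preprocessing step (in Python for illustration; the MATLAB function itself is not reproduced here):

```python
def ice_thickness(bedrock, surface):
    """Per-cell thickness of the featured layer (e.g. ice) from the
    two elevation grids the plotting function takes as input;
    negative values (surface below bedrock) are clamped to zero."""
    return [[max(s - b, 0.0) for b, s in zip(brow, srow)]
            for brow, srow in zip(bedrock, surface)]

bed  = [[100.0, 120.0], [110.0, 130.0]]   # bedrock elevation grid
surf = [[150.0, 120.0], [160.0, 125.0]]   # surface elevation grid
print(ice_thickness(bed, surf))  # [[50.0, 0.0], [50.0, 0.0]]
```

    The resulting grid would drive the colour scale, while the surface grid supplies the 3-D relief.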

  19. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
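    The carving idea, keeping only the atoms that lie within a cutoff of a parametric helix x = R cos t, y = R sin t, z = pitch * t / (2*pi), can be sketched as follows. This is a simplified stand-in for the AWK/C++ codes described above; all names and parameters are illustrative:

```python
import math

def near_helix(atoms, radius, pitch, cutoff, n_samples=400, turns=2.0):
    """Keep atoms lying within `cutoff` of a sampled parametric helix
    x = R*cos(t), y = R*sin(t), z = pitch * t / (2*pi)."""
    ts = [2 * math.pi * turns * k / n_samples for k in range(n_samples + 1)]
    pts = [(radius * math.cos(t), radius * math.sin(t),
            pitch * t / (2 * math.pi)) for t in ts]
    def close(a):
        return any(math.dist(a, p) <= cutoff for p in pts)
    return [a for a in atoms if close(a)]

atoms = [(1.0, 0.0, 0.0), (0.0, 0.0, 5.0)]   # on the helix / far away
print(near_helix(atoms, radius=1.0, pitch=1.0, cutoff=0.2))
# [(1.0, 0.0, 0.0)]
```

    Applied to the atom list of a bulk silica model, the same predicate carves out a nanospring; widening the cutoff into a ribbon-shaped neighborhood yields a nanoribbon instead.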

  20. Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization

    PubMed Central

    Meagher, Kwyn A.; Doblack, Benjamin N.; Ramirez, Mercedes; Davila, Lilian P.

    2014-01-01

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications.  For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately.  To study the effect of local structure on the properties of these complex geometries one must develop realistic models.  To date, software packages are rather limited in creating atomistic helical models.  This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented.  The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix.  With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions.  The second method involves a more robust code which allows flexibility in modeling nanohelical structures.  This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models.  Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created.  An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material.  In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures.  One application of these methods is the recent study of nanohelices
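    The carving approach described above can be sketched in a few lines: sample the parametric helix (x = r cos t, y = r sin t, z = p·t/2π) and keep only the atoms of a bulk model that lie within a chosen wire radius of the curve. The NumPy sketch below is a minimal illustration with hypothetical function names, not the AWK or C++ code from the paper.

    ```python
    import numpy as np

    def helix_points(radius, pitch, turns, n=2000):
        """Sample points along a parametric helix:
        x = r cos(t), y = r sin(t), z = pitch * t / (2*pi)."""
        t = np.linspace(0.0, 2.0 * np.pi * turns, n)
        return np.stack([radius * np.cos(t),
                         radius * np.sin(t),
                         pitch * t / (2.0 * np.pi)], axis=1)

    def carve_nanospring(atoms, radius, pitch, turns, wire_radius):
        """Keep only the atoms whose distance to the helix curve
        is at most wire_radius (the 'carving' step)."""
        curve = helix_points(radius, pitch, turns)
        # distance from every atom (N,3) to every curve sample (M,3)
        d = np.linalg.norm(atoms[:, None, :] - curve[None, :, :], axis=2)
        return atoms[d.min(axis=1) <= wire_radius]
    ```

    In practice the carved coordinates would be written back out in an MD input format (e.g., LAMMPS data) before relaxation.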

  1. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Evangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, selecting and creating visualizations that fit the characteristics of a particular data set and satisfy the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. Performing these tasks requires several types of domain knowledge that data analysts often do not have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  2. Visualization Design Environment

    SciTech Connect

    Pomplun, A.R.; Templet, G.J.; Jortner, J.N.; Friesen, J.A.; Schwegel, J.; Hughes, K.R.

    1999-02-01

    Improvements in the performance and capabilities of computer software and hardware systems, combined with advances in Internet technologies, have spurred innovative developments in the areas of modeling, simulation, and visualization. These developments combine to make it possible to create an environment where engineers can design, prototype, analyze, and visualize components in virtual space, saving the time and expense incurred during numerous design and prototyping iterations. The Visualization Design Centers located at Sandia National Laboratories are facilities built specifically to promote the "design by team" concept. This report focuses on designing, developing, and deploying this environment, detailing the design of the facility, the software infrastructure, and the hardware systems that comprise this new visualization design environment, and describes case studies that document successful application of the environment.

  3. Visualization and 3D Reconstruction of Flame Cells of Taenia solium (Cestoda)

    PubMed Central

    Valverde-Islas, Laura E.; Arrangoiz, Esteban; Vega, Elio; Robert, Lilia; Villanueva, Rafael; Reynoso-Ducoing, Olivia; Willms, Kaethe; Zepeda-Rodríguez, Armando; Fortoul, Teresa I.; Ambrosio, Javier R.

    2011-01-01

    Background Flame cells are the terminal cells of protonephridial systems, which are part of the excretory systems of invertebrates. Although the knowledge of their biological role is incomplete, there is a consensus that these cells perform excretion/secretion activities. It has been suggested that the flame cells participate in the maintenance of the osmotic environment that the cestodes require to live inside their hosts. In live Platyhelminthes observed by light microscopy, the cells appear to beat their flames rapidly and, at the ultrastructural level, the cells have a large body enclosing a tuft of cilia. Few studies have been performed to define the localization of the cytoskeletal proteins of these cells, and it is unclear how these proteins are involved in cell function. Methodology/Principal Findings Parasites of two different developmental stages of T. solium were used: cysticerci recovered from naturally infected pigs and intestinal adults obtained from immunosuppressed and experimentally infected golden hamsters. Hamsters were fed viable cysticerci to recover adult parasites after one month of infection. The present study focused on the flame cells of cysticerci tissues. Using several methods such as video, confocal, and electron microscopy, in addition to computational analysis for reconstruction and modeling, we have provided a 3D visual rendition of the cytoskeletal architecture of Taenia solium flame cells. Conclusions/Significance We consider that visual representations of cells open a new way for understanding the role of these cells in the excretory systems of Platyhelminthes. After reconstruction, the observation of high resolution 3D images allowed for virtual observation of the interior composition of cells. A combination of microscopic images, computational reconstructions, and 3D modeling of cells appears to be useful for inferring the cellular dynamics of the flame cell cytoskeleton. PMID:21412407

  4. Enhanced Rgb-D Mapping Method for Detailed 3d Modeling of Large Indoor Environments

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-06-01

    RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a measurement range with a limited distance (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences can be resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments, for which the experimental results demonstrate the feasibility and robustness of the proposed method.
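    A standard way to recover the rigid transformation between two corresponding point sets (here, the RGB image-based and depth-based models) is the SVD-based Kabsch/Umeyama solution. The sketch below illustrates that idea under the assumption of known correspondences; the paper's robust method additionally handles outliers, which is not shown.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t with dst = R @ src + t,
        via the Kabsch/Umeyama SVD construction. src, dst: (N, 3) arrays
        with row-wise correspondences."""
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 covariance
        U, _, Vt = np.linalg.svd(H)
        # reflection guard: force det(R) = +1
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_mean - R @ src_mean
        return R, t
    ```

    With noisy or partially wrong correspondences, this solver is typically wrapped in RANSAC or a robust loss rather than used directly.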

  5. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    SciTech Connect

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  6. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    SciTech Connect

    Kerr, J.; Jones, G.L.

    1996-12-31

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  7. Does visual fatigue from 3D displays affect autonomic regulation and heart rhythm?

    PubMed

    Park, S; Won, M J; Mun, S; Lee, E C; Whang, M

    2014-02-15

    Most investigations into the negative effects of viewing stereoscopic 3D content on human health have addressed 3D visual fatigue and visually induced motion sickness (VIMS). Very few, however, have looked into changes in autonomic balance and heart rhythm, which are homeostatic factors that ought to be taken into consideration when assessing the overall impact of 3D video viewing on human health. In this study, 30 participants were randomly assigned to two groups: one group watching a 2D video (2D-group) and the other watching a 3D video (3D-group). The subjects in the 3D-group showed significantly increased heart rates (HR), indicating arousal, and an increased VLF/HF (Very Low Frequency/High Frequency) ratio (a measure of autonomic balance), compared to those in the 2D-group, indicating that autonomic balance was not stable in the 3D-group. Additionally, a more disordered heart rhythm pattern and increasing heart rate (as determined by the R-peak to R-peak (RR) interval) was observed among subjects in the 3D-group compared to subjects in the 2D-group, further indicating that 3D viewing induces lasting activation of the sympathetic nervous system and interrupts autonomic balance.
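    The measures named above can be illustrated with a crude sketch: mean heart rate from RR intervals, and a VLF/HF power ratio computed by resampling the RR tachogram uniformly and integrating FFT power in each band. The band edges follow common HRV conventions; this is only an illustrative sketch, not the study's analysis pipeline.

    ```python
    import numpy as np

    VLF = (0.0033, 0.04)   # Hz, conventional very-low-frequency band
    HF = (0.15, 0.40)      # Hz, conventional high-frequency band

    def mean_heart_rate(rr_ms):
        """Mean heart rate (beats/min) from RR intervals in milliseconds."""
        return 60000.0 / np.mean(rr_ms)

    def vlf_hf_ratio(rr_ms, fs=4.0):
        """Crude VLF/HF ratio: resample the RR tachogram at fs Hz,
        then integrate FFT power within each band."""
        rr = np.asarray(rr_ms, dtype=float)
        t = np.cumsum(rr) / 1000.0                 # beat times, seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)    # uniform time grid
        tach = np.interp(grid, t, rr)
        tach -= tach.mean()                        # remove DC
        power = np.abs(np.fft.rfft(tach)) ** 2
        freqs = np.fft.rfftfreq(len(tach), d=1.0 / fs)

        def band_power(lo, hi):
            m = (freqs >= lo) & (freqs < hi)
            return power[m].sum()

        return band_power(*VLF) / band_power(*HF)
    ```

    A meaningful VLF estimate needs several minutes of recording, since the band's lower edge (0.0033 Hz) corresponds to oscillations on the order of five minutes.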

  8. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    NASA Astrophysics Data System (ADS)

    Babu, Sabarish; Liao, Pao-Chuan; Shin, Min C.; Tsap, Leonid V.

    2006-12-01

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
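    The histogram-based thresholding step can be illustrated with Otsu's method, a generic global threshold that maximizes between-class variance. This is a stand-in for the segmentation front end; the paper pairs its histogram analysis with a polyline splitting algorithm that is not shown here.

    ```python
    import numpy as np

    def otsu_threshold(img, bins=256):
        """Histogram-based global threshold (Otsu): pick the intensity
        that maximizes between-class variance of the two classes."""
        hist, edges = np.histogram(img, bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2.0
        w0 = np.cumsum(p)                 # class-0 probability up to each bin
        mu = np.cumsum(p * centers)       # cumulative first moment
        mu_t = mu[-1]                     # global mean
        w1 = 1.0 - w0
        valid = (w0 > 0) & (w1 > 0)
        between = np.zeros_like(w0)
        between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
        return centers[np.argmax(between)]
    ```

    Applied slice by slice, such a threshold separates chromosome material from background before contour extraction with active contours.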

  9. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    SciTech Connect

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  10. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    This paper presents dynamic visual image modeling for 3D synthetic scenes, using dynamic multichannel binocular visual images based on a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes have been widely used in many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, and to use the advantages of mobile networks in rural areas to further improve existing mobile network information services and provide personalized information services. The goal of the display is to provide a faithful representation of synthetic scenes. Using low-power dynamic visual monitors and temperature/humidity sensors or GPS installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, a 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D images based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can collect large amounts of video from locations that are unsuitable for human observation or otherwise unreachable, and accurately synthesize 3D scenes. This application will play a great role in promoting the use of such systems in agriculture.

  11. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands that seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  12. Intraoperative 3D stereo visualization for image-guided cardiac ablation

    NASA Astrophysics Data System (ADS)

    Azizian, Mahdi; Patel, Rajni

    2011-03-01

    There are commercial products which provide 3D rendered volumes, reconstructed from electro-anatomical mapping and/or pre-operative CT/MR images of a patient's heart with tools for highlighting target locations for cardiac ablation applications. However, it is not possible to update the three-dimensional (3D) volume intraoperatively to provide the interventional cardiologist with more up-to-date feedback at each instant of time. In this paper, we describe the system we have developed for real-time three-dimensional stereo visualization for cardiac ablation. A 4D ultrasound probe is used to acquire and update a 3D image volume. A magnetic tracking device is used to track the distal part of the ablation catheter in real time and a master-slave robot-assisted system is developed for actuation of a steerable catheter. Three-dimensional ultrasound image volumes go through some processing to make the heart tissue and the catheter more visible. The rendered volume is shown in a virtual environment. The catheter can also be added as a virtual tool to this environment to achieve a higher update rate on the catheter's position. The ultrasound probe is also equipped with an EM tracker which is used for online registration of the ultrasound images and the catheter tracking data. The whole augmented reality scene can be shown stereoscopically to enhance depth perception for the user. We have used transthoracic echocardiography (TTE) instead of the conventional transoesophageal (TEE) or intracardiac (ICE) echocardiogram. A beating heart model has been used to perform the experiments. This method can be used both for diagnostic and therapeutic applications as well as training interventional cardiologists.

  13. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations with heart rate, which indicated little difference between groups. In contrast, galvanic skin response and skin temperature showed weak positive correlations with average difference between viewing 2D and 3D. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  14. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively and to acquire anatomical or pathological images and visualize them for further investigation.

  15. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  16. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  17. MAT3D: a virtual reality modeling language environment for the teaching and learning of mathematics.

    PubMed

    Pasqualotti, Adriano; dal Sasso Freitas, Carla Maria

    2002-10-01

    Virtual Reality Modeling Language (VRML) is a platform-independent language that allows the creation of nonimmersive virtual environments (VEs) and their use through the Internet. In these VEs, the viewer may navigate and interact with virtual objects, moving around and visualizing them from different angles. Students can benefit from this technology because it gives them access to objects that illustrate the topics covered in their studies, in addition to oral and written information. In this work, we investigate the aspects involved in the use of VEs in teaching and learning and propose a conceptual model, called MAT3D, as a learning environment that can be used for the teaching and learning of mathematics. A case study is also presented, in which students use a virtual environment modeled in VRML. Data resulting from this study are analyzed statistically to evaluate the impact of this prototype when applied to the actual teaching and learning of mathematics.

  18. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

    system throughput, packet loss, and network congestion as a function of time. This not only gives a better understanding of the network, but it also...only runs on Microsoft Windows, which precludes portability to UNIX-based systems such as Linux or Apple OSX. 3ds Max allows for very extensive scene...simulation. It is often desirable to display this data visually, in order to capitalize on the unique capabilities of the human visual system which

  19. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns, together with referenced anatomical structures of a model organism in 3D, can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to provide an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e. the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  20. Comparing and visualizing titanium implant integration in rat bone using 2D and 3D techniques.

    PubMed

    Arvidsson, Anna; Sarve, Hamid; Johansson, Carina B

    2015-01-01

    The aim was to compare the osseointegration of grit-blasted implants with and without a hydrogen fluoride treatment in rat tibia and femur, and to visualize bone formation using state-of-the-art 3D visualization techniques. Grit-blasted implants were inserted in femur and tibia of 10 Sprague-Dawley rats (4 implants/rat). Four weeks after insertion, bone implant samples were retrieved. Selected samples were imaged in 3D using Synchrotron Radiation-based μCT (SRμCT). The 3D data was quantified and visualized using two novel visualization techniques, thread fly-through and 2D unfolding. All samples were processed to cut and ground sections and 2D histomorphometrical comparisons of bone implant contact (BIC), bone area (BA), and mirror image area (MI) were performed. BA values were statistically significantly higher for test implants than controls (p < 0.05), but BIC and MI data did not differ significantly. Thus, the results partly indicate improved bone formation at blasted and hydrogen fluoride treated implants, compared to blasted implants. The 3D analysis was a valuable complement to 2D analysis, facilitating improved visualization. However, further studies are required to evaluate aspects of 3D quantitative techniques, with relation to light microscopy that traditionally is used for osseointegration studies.

  1. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years for understanding a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of 3D geoscience data models on the Internet is a challenging task. In this paper, we show the results of creating anaglyph 3D stereo images of geoscience data that can be viewed in any web browser which supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. Anaglyph 3D stereo imaging is an important and easy way to understand the underground geologic system and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help characterise mineral potential areas and active tectonic anomalies. To conclude, anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that, with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
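    Generating a red-cyan anaglyph from a registered stereo pair is simple in principle: take the red channel from the left image and the green/blue channels from the right. The NumPy sketch below illustrates that channel mix (the web tool itself renders via WebGL):

    ```python
    import numpy as np

    def red_cyan_anaglyph(left, right):
        """Combine a stereo pair into a red-cyan anaglyph:
        red channel from the left eye image, green/blue from the right.
        Expects two (H, W, 3) uint8 RGB arrays of identical shape."""
        assert left.shape == right.shape, "stereo pair must be registered"
        out = right.copy()        # keep right image's green and blue
        out[..., 0] = left[..., 0]  # replace red with left image's red
        return out
    ```

    Viewed through red-cyan glasses, each eye sees only its own image, producing the stereo relief effect described above.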

  2. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2D and 3D animations may differ, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2D and 3D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3D video program, and then assessed once more for the same parameters. The results indicate that 3D animations produce visual fatigue characteristics similar in some specific aspects to those caused by 2D animations. Furthermore, 3D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and these differential effects were more evident under the high demands of near-vision work. The current results indicate that an appropriate set of fatigue indexes should be considered in the design of 3D displays and equipment.

  3. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
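The key idea above is a band-pass filter whose single parameter is the characteristic feature size: only structure correlated with that size survives. As a stand-in for the authors' wavelet (their exact kernel is not given here), a difference-of-Gaussians band-pass shows the behavior in 1D; the sigma choices are illustrative assumptions:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized discrete Gaussian kernel of half-width `radius`."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1D convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += signal[idx] * kv
        out.append(acc)
    return out

def dog_filter(signal, size):
    """Band-pass tuned around a characteristic feature `size`:
    difference of a narrow and a wide Gaussian smoothing."""
    radius = int(3 * size)
    narrow = convolve(signal, gaussian_kernel(size / 2.0, radius))
    wide = convolve(signal, gaussian_kernel(size, radius))
    return [a - b for a, b in zip(narrow, wide)]
```

A constant background produces zero response (both kernels are normalized), while features near the chosen size pass through, which is the denoising behavior the abstract describes; extending the same separable smoothing to 3D volumes is mechanical.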

  4. [Visualization of the lower cranial nerves by 3D-FIESTA].

    PubMed

    Okumura, Yusuke; Suzuki, Masayuki; Takemura, Akihiro; Tsujii, Hideo; Kawahara, Kazuhiro; Matsuura, Yukihiro; Takada, Tadanori

    2005-02-20

    MR cisternography has been introduced for use in neuroradiology. This method is capable of visualizing tiny structures such as blood vessels and cranial nerves in the cerebrospinal fluid (CSF) space because of its superior contrast resolution. The cranial nerves and small vessels are shown as structures of low intensity surrounded by marked hyperintensity of the CSF. In the present study, we evaluated visualization of the lower cranial nerves (glossopharyngeal, vagus, and accessory) by the three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) sequence and multiplanar reformation (MPR) technique. The subjects were 8 men and 3 women, ranging in age from 21 to 76 years (average, 54 years). We examined the visualization of a total of 66 nerves in 11 subjects by 3D-FIESTA. The results were classified into four categories ranging from good visualization to non-visualization. In all cases, all glossopharyngeal and vagus nerves were identified to some extent, while accessory nerves were visualized either partially or entirely in only 16 cases. The total visualization rate was about 91%. In conclusion, 3D-FIESTA may be a useful method for visualization of the lower cranial nerves.

  5. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and the use of augmented reality for informal learning in museum settings.

  6. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Advances in high-performance computing and image processing have greatly facilitated the application of three-dimensional visualization to biomedical computed tomographic (CT) images in biomedical engineering research. To keep pace with Internet-based technology, in which 3D data are typically stored and processed on powerful servers accessed via TCP/IP, isosurface results should be made generally applicable in medical visualization. Furthermore, this project is intended to become part of the PACS system our laboratory is developing. In this system we therefore use the VRML2.0 3D file format, which allows 3D models to be manipulated through a Web interface. We implemented the generation and modification of triangular isosurface meshes using the marching cubes algorithm, and used OpenGL and MFC techniques to render the isosurfaces and manipulate the voxel data. The software provides adequate visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations do not affect the applicability of the platform to the tasks needed in elementary laboratory experiments or for data preprocessing.
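The marching cubes step mentioned above places isosurface vertices where the scalar field crosses the iso-level along grid edges, by linear interpolation. A minimal sketch of just that vertex-placement step (full marching cubes also triangulates per-cell cases, which is omitted here):

```python
def iso_vertices(field, level):
    """Find isosurface crossing points along grid edges by linear
    interpolation -- the vertex-placement step of marching cubes.

    field: 3D nested list field[z][y][x] of scalar samples.
    Returns a list of (x, y, z) vertex positions.
    """
    nz, ny, nx = len(field), len(field[0]), len(field[0][0])
    verts = []

    def edge(p0, v0, p1, v1):
        if (v0 - level) * (v1 - level) < 0:       # values straddle the level
            t = (level - v0) / (v1 - v0)          # linear interpolation weight
            verts.append(tuple(a + t * (b - a) for a, b in zip(p0, p1)))

    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                v = field[z][y][x]
                if x + 1 < nx:
                    edge((x, y, z), v, (x + 1, y, z), field[z][y][x + 1])
                if y + 1 < ny:
                    edge((x, y, z), v, (x, y + 1, z), field[z][y + 1][x])
                if z + 1 < nz:
                    edge((x, y, z), v, (x, y, z + 1), field[z + 1][y][x])
    return verts
```

For a CT volume, `level` is the Hounsfield threshold of the tissue of interest; the resulting vertices (plus the per-cell triangulation table) form the mesh exported to VRML or rendered with OpenGL.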

  7. Neurally and ocularly informed graph-based models for searching 3D environments

    NASA Astrophysics Data System (ADS)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
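The semi-supervised propagation step above spreads the classifier's "interesting" labels through a similarity graph so that visually similar, unseen objects also receive high scores. A minimal sketch of such score diffusion (graph layout, weights, and the damping parameter are illustrative assumptions, not the paper's exact model):

```python
def propagate_labels(adj, seeds, alpha=0.85, iters=50):
    """Semi-supervised score propagation on an object-similarity graph.

    adj:   dict node -> list of (neighbor, weight) similarity edges.
    seeds: dict node -> 1.0 for objects the hBCI classifier marked
           as interesting.
    Scores diffuse along weighted edges while being pulled back toward
    the seed labels, so similar unseen objects end up with high scores.
    """
    nodes = list(adj)
    score = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            wsum = sum(w for _, w in adj[n]) or 1.0
            spread = sum(score[m] * w for m, w in adj[n]) / wsum
            new[n] = alpha * spread + (1 - alpha) * seeds.get(n, 0.0)
        score = new
    return score
```

Ranking objects by the converged scores, then routing through the top-ranked ones, mirrors the navigation step the abstract describes.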

  8. Cluster Analysis and Web-Based 3-D Visualization of Large-scale Geophysical Data

    NASA Astrophysics Data System (ADS)

    Kadlec, B. J.; Yuen, D. A.; Bollig, E. F.; Dzwinel, W.; da Silva, C. R.

    2004-05-01

    We present a problem-solving environment WEB-IS (Web-based Data Interrogative System), which we have developed for remote analysis and visualization of geophysical data [Garbow et al., 2003]. WEB-IS employs agglomerative clustering methods intended for feature extraction and studying the predictions of large magnitude earthquake events. Data-mining is accomplished using a mutual nearest neighbor (MNN) algorithm for extracting event clusters of different density and shapes based on a hierarchical proximity measure. Clustering schemes used in molecular dynamics [Da Silva et al., 2002] are also considered for increasing computational efficiency using a linked cell algorithm for creating a Verlet neighbor list (VNL) and extracting different cluster structures by applying a canonical backtracking search on the VNL. Space and time correlations between the events are visualized dynamically in 3-D through a filter by showing clusters at different timescales according to defined units of time ranging from days to years. This WEB-IS functionality was tested both on synthetic [Eneva and Ben-Zion, 1997] and actual earthquake catalogs of Japanese earthquakes and can be applied to the soft-computing data mining methods used in hydrology and geoinformatics. Da Silva, C.R.S., Justo, J.F., Fazzio, A., Phys. Rev. B, 65, 2002. Eneva, M., Ben-Zion, Y., J. Geophys. Res., 102, 17785-17795, 1997. Garbow, Z.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kadlec, B.J., Vis. Geosci., 2003.
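The mutual nearest neighbor idea above links two events only when each is among the other's k nearest neighbors, and takes connected components of those links as clusters; this naturally adapts to clusters of different density and shape. A minimal sketch (brute-force neighbor search, without the linked-cell/VNL acceleration the abstract mentions):

```python
def mutual_nn_clusters(points, k=2):
    """Cluster events by mutual k-nearest-neighbor links.

    Two points are linked when each appears in the other's k-nearest
    neighbor list; clusters are the connected components of the links.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(points)
    knn = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist2(points[i], points[j]))
        knn.append(set(order[:k]))

    # Union-find over mutual-kNN links.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in knn[i]:
            if i in knn[j]:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

With earthquake catalogs, `points` would be epicenter coordinates (optionally with time as an extra dimension); the linked-cell Verlet list replaces the O(n²) neighbor search for large catalogs.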

  9. 3D functional ultrasound imaging of the cerebral visual system in rodents.

    PubMed

    Gesnik, Marc; Blaize, Kevin; Deffieux, Thomas; Gennisson, Jean-Luc; Sahel, José-Alain; Fink, Mathias; Picaud, Serge; Tanter, Mickaël

    2017-02-03

    3D functional imaging of whole-brain activity during a visual task is challenging in rodents due to the complex three-dimensional shape of the involved brain regions and the fine spatial and temporal resolutions required to reveal the visual tract. By coupling functional ultrasound (fUS) imaging with a translational motorized stage and an episodic visual stimulation device, we managed to accurately map and recover the activity of the visual cortices, the Superior Colliculus (SC), and the Lateral Geniculate Nuclei (LGN) in 3D. Cerebral Blood Volume (CBV) responses during visual stimuli were highly correlated with the visual stimulus time profile in the visual cortices (r=0.6), SC (r=0.7) and LGN (r=0.7). These responses depended on flickering frequency and contrast, and optimal stimulus parameters for the largest CBV increases were obtained. In particular, increasing the flickering frequency above 7 Hz revealed a decrease in the visual cortices' response while the SC response was preserved. Finally, cross-correlation between CBV signals exhibited significant delays (d=0.35 s +/- 0.1 s) between the blood volume response in the SC and the visual cortices in response to our visual stimulus. These results emphasize the value of fUS imaging as a whole-brain neuroimaging modality for vision studies in rodent models.
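The delay estimate above comes from cross-correlating two hemodynamic time series and taking the lag that maximizes the correlation. A minimal sketch of that lag estimation (function name and sign convention are illustrative, not the authors' pipeline):

```python
def xcorr_lag(a, b, max_lag):
    """Estimate the delay of signal b relative to a as the lag that
    maximizes their cross-correlation.

    A positive return value means b trails a by that many samples
    (i.e. b looks like a shifted later in time).
    """
    best_lag, best = 0, float("-inf")
    n = len(a)
    for lag in range(-max_lag, max_lag + 1):
        c = sum(a[i] * b[i + lag] for i in range(n)
                if 0 <= i + lag < len(b))
        if c > best:
            best, best_lag = c, lag
    return best_lag
```

Multiplying the sample lag by the fUS frame period gives the delay in seconds (e.g. the ~0.35 s SC-to-cortex delay reported above).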

  10. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received little attention, even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been pursued rigorously. Wind turbine blade modeling studies mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, modeling studies of wind turbine blades accompanied by visualization experiments are needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed with twist and chord distributions following Schmitz's formula, with forward and backward sweep added to the rotating blades. The added sweep should enhance or diminish outward flow disturbance or stall-development propagation on the spanwise blade surfaces, yielding a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force on the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing a Prony braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying a tuft visualization technique to study the appearance of laminar, separated, and boundary layer flow patterns surrounding the 3-dimensional blade system.
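The twist and chord distributions mentioned above follow Schmitz's relations; one common textbook statement gives, at each radial station, an inflow angle phi1 = atan(1/lambda_r) with local speed ratio lambda_r, flow angle (2/3)*phi1, and chord c = 16*pi*r/(B*Cl) * sin^2(phi1/3). A sketch under those assumed relations (all parameter values are illustrative, not the study's actual design point):

```python
import math

def schmitz_blade(r_over_R, tsr, n_blades, cl_design, radius):
    """Twist and chord distributions from the Schmitz relations.

    r_over_R:  list of radial stations r/R along the span.
    tsr:       design tip speed ratio lambda.
    n_blades:  number of blades B.
    cl_design: design lift coefficient of the airfoil.
    radius:    rotor radius R in metres.
    Returns a list of (r, chord, flow_angle_deg) tuples; blade pitch
    is this flow angle minus the design angle of attack.
    """
    out = []
    for x in r_over_R:
        r = x * radius
        lam_r = tsr * x                       # local speed ratio
        phi1 = math.atan2(1.0, lam_r)         # inflow angle without induction
        chord = 16 * math.pi * r / (n_blades * cl_design) \
            * math.sin(phi1 / 3.0) ** 2
        flow_angle = (2.0 / 3.0) * phi1       # Schmitz optimum flow angle
        out.append((r, chord, math.degrees(flow_angle)))
    return out
```

The flow angle (and hence twist) decreases monotonically toward the tip, which is the characteristic washout of such designs.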

  11. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    NASA Astrophysics Data System (ADS)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected audiences, such as the blind and visually impaired.

  12. MRI depiction and 3D visualization of three anterior cruciate ligament bundles.

    PubMed

    Otsubo, H; Akatsuka, Y; Takashima, H; Suzuki, T; Suzuki, D; Kamiya, T; Ikeda, Y; Matsumura, T; Yamashita, T; Shino, K

    2017-03-01

    The anterior cruciate ligament (ACL) is divided into three fiber bundles (AM-M: anteromedial-medial, AM-L: anteromedial-lateral, PL: posterolateral). We attempted to depict the three bundles of the human ACL on MRI images and to obtain 3-dimensional visualization of them. Twenty-four knees of healthy volunteers (14 males, 10 females) were scanned by 3T-MRI using the fat suppression 3D coherent oscillatory state acquisition for the manipulation of imaging contrast (FS 3D-COSMIC). The scanned images were reconstructed after the isotropic voxel data, which allows the images to be reconstructed in any plane, was acquired. We conducted statistical examination on the identification rate of the three ACL bundles by 2D planes. Segmentation and 3D visualization of the fiber bundles using volume rendering were performed. The triple-bundle ACL was best depicted in the oblique axial plane. While the AM-M and AM-L bundles were clearly depicted in all cases, the PL bundle was not clearly visualized in two knees (8%). Therefore, the three ACL bundles were depicted in 22 knees (92%). The results of 3D visualization of the fiber arrangement agreed well with macroscopic findings of previous anatomical studies. 3T-MRI and the isotropic voxel data from FS 3D-COSMIC made it possible to demonstrate the identifiable depiction of three ACL bundles in nearly all cases. 3D visualization of the bundles could be a useful tool to understand the ACL fiber arrangement. Clin. Anat. 30:276-283, 2017.

  13. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web platform. To improve efficiency, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. This web-based design makes it feasible for users to access the 3D medical image visualization service wherever the Internet is available.

  14. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  15. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  16. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is operated by highly trained personnel using an abundance of software and tools that lack interoperability and friendly user interaction. Toward this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme upon a three-dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully accepted due to inconsistencies and complexity, a non-tangible GIS system in which 3D visualizations are projected calls for interactions based on three-dimensional, non-contact, gestural procedures. Toward these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust and real-time depth map generation, along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed in the 3-D user interface are the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. The first results shown concern a projected GIS representation where the user selects points

  17. Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering in a web-based virtual globe. Cloud computing is used to process large amounts of geospatial data and to provide 2D and 3D map data to a large number of (mobile) web clients. This paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
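Out-of-core globe rendering of the kind described above typically streams tiles from a quadtree, picking the level of detail whose geometric error, projected at the camera distance, falls below a screen-space error budget. A minimal sketch of that level selection (all parameter values and the function name are illustrative assumptions, not OpenWebGlobe's actual scheme):

```python
import math

def tile_level(distance, tile_size_m=40_000_000.0, screen_error_px=2.0,
               viewport_px=1024, fov_rad=1.0, max_level=20):
    """Pick the quadtree level whose geometric error, projected at
    `distance` metres, drops below a screen-space error budget.

    Level 0 covers the whole globe; each deeper level halves the tile
    (and hence the geometric error) size.
    """
    level = 0
    while level < max_level:
        geom_error = tile_size_m / (2 ** level)   # metres of error at this level
        # Project the error onto the screen: pixels per metre at `distance`.
        pixels = geom_error * viewport_px / (2 * distance * math.tan(fov_rad / 2))
        if pixels <= screen_error_px:
            return level
        level += 1
    return max_level
```

Closer cameras demand deeper levels, which is what drives tile requests (and cache churn) as the user zooms in.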

  18. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of protective measures. A novel and effective method based on 3D visualization technology, including large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method comprises three stages: pre-processing, 3D modeling, and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D model library is built using digital image processing and 3D modeling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital heritage projects and enriches the content of digital archaeology.

  19. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
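Line integral convolution, as used above, smears a noise texture along streamlines so that intensity becomes correlated along the flow and stays uncorrelated across it. A minimal 2D sketch of the core loop (fixed-step integration, nearest-neighbor sampling; the 3D volume version adds a z axis and the halo pass described in the abstract):

```python
import math

def lic(vx, vy, noise, length=8, step=0.5):
    """Line integral convolution on a 2D vector field.

    For each pixel, average the noise texture along the local streamline
    traced in both directions. vx, vy, noise are 2D nested lists of the
    same shape. (The seed pixel is sampled once per direction, a small
    double-count this sketch accepts for simplicity.)
    """
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # forward and backward along the flow
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i][j]
                    count += 1
                    mag = math.hypot(vx[i][j], vy[i][j])
                    if mag < 1e-9:            # critical point: stop tracing
                        break
                    px += sign * step * vx[i][j] / mag
                    py += sign * step * vy[i][j] / mag
            out[y][x] = total / max(count, 1)
    return out
```

The choice of input texture (sparse versus dense noise) is exactly the first design issue the paper discusses; the halos are then rendered by dimming samples near depth discontinuities in the projection.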

  20. Role of Interaction in Enhancing the Epistemic Utility of 3D Mathematical Visualizations

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2010-01-01

    Many epistemic activities, such as spatial reasoning, sense-making, problem solving, and learning, are information-based. In the context of epistemic activities involving mathematical information, learners often use interactive 3D mathematical visualizations (MVs). However, performing such activities is not always easy. Although it is generally…

  1. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  2. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  3. Effects of 3-D Visualization of Groundwater Modeling for Water Resource Decision Making

    NASA Astrophysics Data System (ADS)

    Block, J. L.; Arrowsmith, R.

    2006-12-01

    The rise of 3-D visualization hardware and software technology provides important opportunities to advance scientific and policy research. Although the petroleum industry has used immersive 3-D technology since the early 1990's for the visualization of geologic data among experts, there has been little use of this technology for decision making. The Decision Theater at ASU is a new facility using immersive visualization technology designed to combine scientific research at the university with policy decision making in the community. I document a case study in the use of 3-D immersive technology for water resource management in Arizona. Since the turn of the 20th century, natural hydrologic processes in the greater Phoenix region (Salt River Valley) have been shut down via the construction of dams, canals, wells, water treatment plants, and recharge facilities. Water from rivers that once naturally recharged the groundwater aquifer has thus been diverted, while continuing groundwater outflow from wells has drawn the aquifer down hundreds of feet. MODFLOW is used to simulate groundwater response to the different water management decisions which impact the artificial and natural inflow and outflow. The East Valley Water Forum, a partnership of water providers east of Phoenix, used the 3-D capabilities of the Decision Theater to build visualizations of the East Salt River Valley groundwater system based on MODFLOW outputs to aid the design of a regional groundwater management plan. The resulting visualizations are now being integrated into policy decisions about long term water management. I address challenges in visualizing scientific information for policy making and highlight the roles of policy actors, specifically geologists, computer scientists, and political decision makers, involved in designing the visualizations. The results show that policy actors respond differently to the 3-D visualization techniques based on their experience, background, and objectives.

  4. 3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution

    NASA Astrophysics Data System (ADS)

    Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David

    2013-01-01

    Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. In their first utilization we have applied these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.
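One of the per-cell parameters mentioned above, the apical cross-sectional area, reduces to a polygon-area computation once the apicolateral boundary of each cell has been traced. A minimal sketch using the shoelace formula (function name is illustrative):

```python
def apical_area(polygon):
    """Apical cross-sectional area of a cell from its boundary polygon.

    polygon: list of (x, y) vertices in order (either winding).
    Uses the shoelace formula; the absolute value makes the result
    independent of winding direction.
    """
    n = len(polygon)
    s = 0.0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0
```

Mapping each cell's area to a color then gives exactly the kind of position-dependent shape visualization the abstract describes.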

  5. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that characterizes all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to extend the Smart City concept to a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Services Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-City context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting Smart-City applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amount of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.
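Level-of-detail switching of the kind described above is commonly driven by viewer distance: fine geometry near the camera, coarse block models far away. A toy selector (the thresholds and LOD numbering are invented for illustration, not taken from the platform):

```python
def select_lod(distance_m, lod_ranges=(100.0, 500.0, 2000.0)):
    """Pick a level of detail for a city-model tile from viewer distance.

    Returns 3 (finest) for tiles close to the viewer, down to 0 (coarsest)
    beyond the last threshold. The thresholds are illustrative only.
    """
    for i, threshold in enumerate(lod_ranges):
        if distance_m < threshold:
            return len(lod_ranges) - i  # 3, 2, 1
    return 0

print(select_lod(50.0))    # 3 -> full detail near the viewer
print(select_lod(5000.0))  # 0 -> coarse block model far away
```

A real streaming viewer would additionally hold tiles in a cache and fetch finer levels asynchronously as the camera approaches.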

  6. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems handle only one time step of the solution at a time and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than that revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several million grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
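The distinction drawn above can be made concrete: an instantaneous streamline integrates through the velocity field frozen at one time step, while a streakline advects every particle released from a seed point through all time steps. A small sketch with an invented analytic unsteady field (forward Euler; a production tracer would use higher-order integration and interpolation on the CFD grid):

```python
import numpy as np

def velocity(p, t):
    """Illustrative unsteady 2D field: uniform flow whose direction rotates in time."""
    return np.array([np.cos(0.5 * t), np.sin(0.5 * t)])

def streakline(seed, t_end, dt=0.01):
    """Release a particle from `seed` every step and advect all released
    particles through the time-varying field."""
    particles = []
    t = 0.0
    while t < t_end:
        particles.append(np.array(seed, dtype=float))
        for p in particles:
            p += dt * velocity(p, t)
        t += dt
    return np.array(particles)

def streamline(seed, t_frozen, t_end, dt=0.01):
    """Integrate a single particle through the field frozen at `t_frozen`."""
    p = np.array(seed, dtype=float)
    pts = [p.copy()]
    s = 0.0
    while s < t_end:
        p += dt * velocity(p, t_frozen)
        pts.append(p.copy())
        s += dt
    return np.array(pts)

streak = streakline((0.0, 0.0), t_end=4.0)
stream = streamline((0.0, 0.0), t_frozen=0.0, t_end=4.0)
# The frozen-field streamline is a straight line; the streakline curves,
# showing information the instantaneous streamline cannot.
```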

  7. Interactive 3D Visualization of the Great Lakes of the World (GLOW) as a Tool to Facilitate Informal Science Education

    NASA Astrophysics Data System (ADS)

    Yikilmaz, M.; Harwood, C. L.; Hsi, S.; Kellogg, L. H.; Kreylos, O.; McDermott, J.; Pellett, B.; Schladow, G.; Segale, H. M.; Yalowitz, S.

    2013-12-01

    Three-dimensional (3D) visualization is a powerful research tool that has been used to investigate complex scientific problems in various fields. It allows researchers to explore and understand processes and features that are not directly observable, and helps with the building of new models. It has been shown that 3D visualization creates a more engaging environment for public audiences. Interactive 3D visualization allows individuals to explore scientific concepts on their own. We present an NSF-funded project developed in collaboration with UC Davis KeckCAVES, UC Davis Tahoe Environmental Research Center, ECHO Lake Aquarium & Science Center, and Lawrence Hall of Science. The Great Lakes of the World (GLOW) project aims to build interactive 3D visualizations of some of the major lakes and reservoirs of the world to enhance public awareness and increase understanding and stewardship of freshwater lake ecosystems, habitats, and earth science processes. The project includes a collection of publicly available satellite imagery and digital elevation models at various resolutions for the 20 major lakes of the world, as well as bathymetry data for 12 of the lakes. It also includes the vector-based 'Global Lakes and Wetlands Database (GLWD)' by the World Wildlife Foundation (WWF) and the Center for Environmental System Research, University of Kassel, Germany, and the CIA World DataBank II data sets to show wetlands and water reservoirs at global scale. We use a custom virtual globe (Crusta) developed at the UC Davis KeckCAVES. Crusta is designed specifically to allow for visualization and mapping of features in very high spatial resolution (<1 m) and large extent (1000s of km2) raster imagery and topographic data. In addition to imagery, a set of pins, labels and billboards are used to provide textual information about these lakes. Users can interactively learn about the lake and watershed processes as well as geologic processes (e.g. faulting, landslide, glacial, volcanic

  8. Depth cues in human visual perception and their realization in 3D displays

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert

    2010-04-01

    Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been demonstrated and refined, among them approaches of stereoscopic, multi-view, integral imaging, volumetric, or holographic type. Most of the current approaches utilize the conventional stereoscopic principle. But they all suffer from the inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception relevant to both the image quality and the visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare the visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.
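The vergence side of the conflict is simple geometry: for a fixation distance d and interpupillary distance a, the vergence angle is 2·atan(a / 2d). A sketch of the mismatch for a display at 60 cm showing a virtual point at 30 cm (the distances are illustrative, not from the paper):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Vergence angle (degrees) for a point fixated at `distance_m`,
    given an interpupillary distance `ipd_m` (~63 mm on average)."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

screen = 0.6          # stereoscopic display at 60 cm: the accommodation target
virtual_object = 0.3  # scene point feigned at 30 cm by disparity alone
print(vergence_angle_deg(screen))          # ~6.0 deg: where the eyes focus
print(vergence_angle_deg(virtual_object))  # ~12.0 deg: where they converge
# The ~6 deg gap between the two responses is the mismatch the abstract describes;
# a holographic display reconstructs the wavefront so both cues agree.
```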

  9. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks.

  10. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Data Analysis and Visualization and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
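The clustering step described above can be sketched with a minimal k-means, using synthetic vectors standing in for the per-cell expression measurements; the within-cluster sum of squares is one simple quantity to scan when evaluating the number of clusters k (the paper's own evaluation criteria and post-processing are not reproduced here):

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Minimal k-means (Lloyd's algorithm) with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centers = [data[rng.integers(len(data))]]
    while len(centers) < k:  # pick each new center far from the existing ones
        d2 = np.min([((data - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(data[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

# Synthetic stand-in for per-cell expression vectors: two separated groups.
rng = np.random.default_rng(1)
expr = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(1.0, 0.1, (50, 3))])
labels, centers = kmeans(expr, k=2)

# Within-cluster sum of squares, the quantity scanned over candidate k values.
wcss = sum(((expr[labels == j] - c) ** 2).sum() for j, c in enumerate(centers))
```

In the framework described above, the resulting cluster labels would then be fed back into the visualization, e.g. as colors painted onto the 3D embryo model.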

  11. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. 
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on

  12. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization.

    PubMed

    Sato, Y; Nakamoto, M; Tamaki, Y; Sasama, T; Sakita, I; Nakajima, Y; Monden, M; Tamura, S

    1998-10-01

    This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. Using an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.
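The geometric core of such an overlay is projecting the reconstructed 3-D tumor points, via the tracked camera pose, into the live video frame. A pinhole-camera sketch (the pose, intrinsics, and test point are toy values, not from the system):

```python
import numpy as np

def project_points(points_world, R, t, fx, fy, cx, cy):
    """Pinhole projection of (N, 3) world points into pixel coordinates,
    given camera rotation R (3x3), translation t (3,), and intrinsics."""
    cam = points_world @ R.T + t          # world frame -> camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx   # perspective divide + principal point
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Toy setup: camera at the origin looking down +Z, a "tumor" point 0.5 m ahead.
R, t = np.eye(3), np.zeros(3)
pix = project_points(np.array([[0.0, 0.0, 0.5]]), R, t,
                     fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(pix)  # [[320. 240.]] -- on the optical axis, so at the principal point
```

In the actual system, R and t come from the optical position sensor tracking the camera, so the tumor model stays registered to the breast as the camera moves.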

  13. GPU-accelerated 3D mipmap for real-time visualization of ultrasound volume data.

    PubMed

    Kwon, Koojoo; Lee, Eun-Seok; Shin, Byeong-Seok

    2013-10-01

    Ultrasound volume rendering is an efficient method for visualizing the shape of fetuses in obstetrics and gynecology. However, in order to obtain high-quality ultrasound volume rendering, noise removal and coordinates conversion are essential prerequisites. Ultrasound data needs to undergo a noise filtering process; otherwise, artifacts and speckle noise cause quality degradation in the final images. Several two-dimensional (2D) noise filtering methods have been used to reduce this noise. However, these 2D filtering methods ignore relevant information in-between adjacent 2D-scanned images. Although three-dimensional (3D) noise filtering methods are used, they require more processing time than 2D-based methods. In addition, the sampling position in the ultrasonic volume rendering process has to be transformed between conical ultrasound coordinates and Cartesian coordinates. We propose a 3D-mipmap-based noise reduction method that uses graphics hardware, as a typical 3D mipmap requires less time to be generated and less storage capacity. In our method, we compare the density values of the corresponding points on consecutive mipmap levels and find the noise area using the difference in the density values. We also provide a noise detector for adaptively selecting the mipmap level using the difference of two mipmap levels. Our method can visualize 3D ultrasound data in real time with 3D noise filtering.
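The level-difference idea can be illustrated on the CPU with NumPy: average 2×2×2 blocks to form the next mipmap level, then flag voxels whose density deviates strongly from the coarser level, as isolated speckle does (the threshold and downsampling here are illustrative; the paper's method runs on graphics hardware and selects mipmap levels adaptively):

```python
import numpy as np

def build_mipmap(volume, levels=3):
    """Successively average 2x2x2 blocks to build a 3D mipmap pyramid."""
    pyramid = [volume]
    for _ in range(levels - 1):
        v = pyramid[-1]
        d, h, w = (s // 2 for s in v.shape)
        v = v[:2*d, :2*h, :2*w].reshape(d, 2, h, 2, w, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid

def noise_mask(volume, threshold=0.5):
    """Flag voxels that differ strongly from the next-coarser mipmap level:
    isolated speckle survives at level 0 but averages away at level 1."""
    lvl0, lvl1 = build_mipmap(volume, levels=2)
    up = np.repeat(np.repeat(np.repeat(lvl1, 2, 0), 2, 1), 2, 2)
    return np.abs(volume[:up.shape[0], :up.shape[1], :up.shape[2]] - up) > threshold

vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 1.0     # a single speckle voxel in an empty volume
mask = noise_mask(vol)
print(mask.sum())      # 1 -- only the speckle voxel is flagged
```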

  14. A 3D contact analysis approach for the visualization of the electrical contact asperities

    PubMed Central

    Swingler, Jonathan

    2017-01-01

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used in order to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) belonging to the two conductors which make up the contact system. Studying the contact asperities requires the discretization of the 3D microstructures of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which enables the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data from a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation. PMID:28105383

  15. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), has developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for an extension to a full 3D tool.
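The Lagrangian pathlet idea reduces to integrating seed points through a time-dependent velocity field. A 2D forward-Euler sketch with an invented pulsing-gyre field (STRING's actual FPM-based seeding and removal logic is not reproduced):

```python
import numpy as np

def velocity(p, t):
    """Illustrative transient 2D field: a gyre whose strength pulses in time."""
    x, y = p[..., 0], p[..., 1]
    s = 1.0 + 0.5 * np.sin(t)
    return s * np.stack([-y, x], axis=-1)

def advect_pathlets(seeds, t0=0.0, dt=0.01, n_steps=100):
    """Trace pathlines: step all seed points through the time-varying field."""
    p = np.array(seeds, dtype=float)
    trail = [p.copy()]
    t = t0
    for _ in range(n_steps):
        p = p + dt * velocity(p, t)   # forward Euler along the pathline
        t += dt
        trail.append(p.copy())
    return np.array(trail)            # shape (n_steps + 1, n_seeds, 2)

trail = advect_pathlets([[1.0, 0.0], [0.0, 0.5]])
# Each column of `trail` is one pathlet swirling around the gyre center;
# drawing only a short trailing window of it yields the moving-pathlet effect.
```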

  16. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons 'seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real-time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real-time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time, and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers and near-real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth
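A glyph of the kind described can be sketched as a polyline whose points are the station location offset by the three-component motion vectors, with a parallel time-to-color ramp (all names and values here are illustrative, not the authors' implementation):

```python
import numpy as np

def seismoglyph(east, north, vert, station_xyz, scale=1.0):
    """Build a 3D polyline glyph from three-component ground motion.

    Each sample becomes a point offset from the station location by its
    (E, N, Z) motion vector; a parallel array maps time onto a color ramp."""
    motion = np.stack([east, north, vert], axis=1) * scale
    points = np.asarray(station_xyz) + motion
    t = np.linspace(0.0, 1.0, len(east))        # 0 -> start, 1 -> end of window
    colors = np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)  # blue -> red
    return points, colors

# Synthetic record: a wave linearly polarized along the east axis.
t = np.linspace(0.0, 2.0 * np.pi, 200)
e, n, z = np.sin(t), 0.2 * np.sin(t), np.zeros_like(t)
pts, cols = seismoglyph(e, n, z, station_xyz=(10.0, 20.0, 0.0))
# The glyph's elongation (here along E) shows the wave's polarization,
# exactly the cue the abstract describes.
```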

  17. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data

  18. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  19. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple-source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to look not only at the time and frequency characteristics of an audio signal but also at the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations.
    Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and
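The decomposition itself can be sketched as a least-squares fit of the microphone pressures against a spherical harmonic basis. A real-valued, order-1 toy version (an actual array processor would use complex harmonics to a higher order and account for scattering off the rigid sphere):

```python
import numpy as np

def real_sh_basis(az, col):
    """Real spherical harmonics up to order 1, evaluated at directions given by
    azimuth `az` and colatitude `col`; rows = mics, columns = (n, m) terms."""
    x = np.sin(col) * np.cos(az)
    y = np.sin(col) * np.sin(az)
    z = np.cos(col)
    c0 = np.sqrt(1.0 / (4.0 * np.pi))
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([c0 * np.ones_like(az),     # (0, 0)
                     c1 * y, c1 * z, c1 * x],   # (1,-1), (1, 0), (1, 1)
                    axis=1)

def shd(pressure, az, col):
    """Least-squares spherical harmonic decomposition (order 1) of the
    pressure samples measured at the microphone directions."""
    Y = real_sh_basis(az, col)
    coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
    return coeffs

# Toy array: 32 pseudo-random directions on the sphere.
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2.0 * np.pi, 32)
col = np.arccos(rng.uniform(-1.0, 1.0, 32))
# Pressure field that is exactly the (1, 0) "figure-eight along z" mode.
p = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(col)
c = shd(p, az, col)
# c is ~[0, 0, 1, 0]: all energy lands in the (n=1, m=0) coefficient,
# i.e. the transform has localized the signal's spatial structure.
```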

  20. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  1. The Monitoring of Urban Environments and Built-Up Structures in a Seismic Area: Web-Based GIS Mapping and 3D Visualization Tools for the Assessment of the Urban Resources

    NASA Astrophysics Data System (ADS)

    Montuori, Antonio; Costanzo, Antonio; Gaudiosi, Iolanda; Vecchio, Antonio; Pannaccione Apa, Maria Ilaria; Gervasi, Anna; Falcone, Sergio; La Piana, Carmelo; Minasi, Mario; Stramondo, Salvatore; Buongiorno, Maria Fabrizia; Doumaz, Fawzi; Musacchio, Massimo; Casula, Giuseppe; Caserta, Arrigo; Speranza, Fabio; Bianchi, Maria Giovanna; Guerra, Ignazio; Porco, Giacinto; Compagnone, Letizia; Cuomo, Massimo; De Marco, Michele

    2016-08-01

    In this paper, a non-invasive infrastructural system called MASSIMO is presented for the monitoring and the seismic vulnerability mitigation of cultural heritage. It integrates ground-based, airborne and space-borne remote sensing tools with geophysical and in situ surveys to provide multi-spatial (regional, urban and building scales) and multi-temporal (long-term, short-term and near-real-time scales) monitoring of test areas and buildings. The measurements are integrated through web-based Geographic Information System (GIS) and 3-dimensional visual platforms to support decision-making stakeholders involved in urban and structural requalification planning. An application of this system is presented for the Calabria region, for the town of Cosenza and a test historical complex.

  2. User Control and Task Authenticity for Spatial Learning in 3D Environments

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Harper, Barry

    2004-01-01

    This paper describes two empirical studies which investigated the importance for spatial learning of view control and object manipulation within 3D environments. A 3D virtual chemistry laboratory was used as the research instrument. Subjects, who were university undergraduate students (34 in the first study and 80 in the second study), undertook…

  3. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  4. 3D Visualization of "Frozen" Dynamic Magma Chambers in the Duluth Complex, Northeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Peterson, D. M.; Hauck, S. A.

    2005-12-01

The Mesoproterozoic Duluth Complex and associated intrusions of the Midcontinent Rift in northeastern Minnesota constitute one of the largest semi-continuous mafic intrusive complexes in the world, second only to the Bushveld Complex of South Africa. These rocks cover an arcuate area of over 5,000 square kilometers and give rise to two strong gravity anomalies (+50 and +70 mGal) that imply intrusive roots to more than 13 km depth. The geometry of three large mafic intrusions within the Duluth Complex has been modeled by integrating field mapping and drill hole data with maps of gravity and magnetic anomalies. The igneous bodies include the South Kawishiwi, Partridge River, and Bald Eagle intrusions, which collectively outcrop over an area of > 800 square kilometers. The South Kawishiwi and Partridge River intrusions host several billion tons of low-grade Cu-Ni-PGE mineralization near their base, while the geophysical expression of the Bald Eagle intrusion has the same shape and dimensions as the "bull's-eye" pattern of low-velocity seismic reflection anomalies along the East Pacific Rise. These anomalies are interpreted to define regions of melt concentration, i.e., active magma chambers. This suggests that the funnel-shaped Bald Eagle intrusion could be an example of a "frozen" dynamic magma chamber. In support of this analogy we note that the magmatic systems of intracontinental rifts, mid-ocean ridges, extensional regimes in back-arc environments, and ophiolites share a common characteristic: the emplacement of magma in extensional environments, with the common products in all four being varieties of layered intrusions, dikes and sills, and overlying volcanic rocks. 3D visualization of these intrusions is integral to understanding the Duluth Complex magmatic system and associated mineralization, and can be used as a proxy for the study of similar systems worldwide, such as the Antarctic Ferrar dolerites.

  5. Visualization of hepatic arteries with 3D ultrasound during intra-arterial therapies

    NASA Astrophysics Data System (ADS)

Gérard, Maxime; Tang, An; Badoual, Anaïs; Michaud, François; Bigot, Alexandre; Soulez, Gilles; Kadoury, Samuel

    2016-03-01

Liver cancer represents the second most common cause of cancer-related mortality worldwide. The prognosis is poor, with an overall mortality of 95%. Moreover, most hepatic tumors are unresectable due to their advanced stage at discovery or poor underlying liver function. Tumor embolization by intra-arterial approaches is the current standard of care for advanced cases of hepatocellular carcinoma. These therapies rely on the fact that the blood supply of primary hepatic tumors is predominantly arterial. Feedback on blood flow velocities in the hepatic arteries is crucial to ensure maximal treatment efficacy on the targeted masses. Based on these velocities, the intra-arterial injection rate is modulated for optimal infusion of the chemotherapeutic drugs into the tumorous tissue. While Doppler ultrasound is a well-documented technique for the assessment of blood flow, 3D visualization of vascular anatomy with ultrasound remains challenging. In this paper we present an image-guidance pipeline that enables the localization of the hepatic arterial branches within a 3D ultrasound image of the liver. A diagnostic magnetic resonance angiography (MRA) scan is first processed to automatically segment the hepatic arteries. A non-rigid registration method is then applied to align the portal phase of the MRA volume with a 3D ultrasound volume, enabling visualization of the 3D mesh of the hepatic arteries in the Doppler images. To evaluate the performance of the proposed workflow, we present initial results from porcine models and patient images.

  6. Smartphone as a Remote Touchpad to Facilitate Visualization of 3D Cerebral Angiograms during Aneurysm Surgery.

    PubMed

    Eftekhar, Behzad

    2017-03-01

Background During aneurysm surgery, neurosurgeons may need to look at the cerebral angiograms again to better orient themselves to the aneurysm and the surrounding vascular anatomy. Simplifying the intraoperative imaging review and reducing the time interval between the view under the microscope and the angiogram review can theoretically improve orientation. Objective To describe the use of a smartphone as a remote touchpad to simplify intraoperative visualization of three-dimensional (3D) cerebral angiograms and reduce the time interval between the view under the microscope and the angiogram review. Methods Anonymized 3D angiograms of the patients in Virtual Reality Modelling Language format are securely uploaded to sketchfab.com, accessible through smartphone Web browsers. Simple software has been developed and made available to facilitate the workflow. The smartphone is connected wirelessly to an external monitor using a Chromecast device and is used intraoperatively as a remote touchpad to view, rotate, and zoom the 3D aneurysm angiograms on the external monitor. Results Implementation of the method is practical and helpful for the surgeon in certain cases. It also helps the operating staff, registrars, and students orient themselves to the surgical anatomy. I present 10 of the uploaded angiograms published online. Conclusion The concept and method of using the smartphone as a remote touchpad to improve intraoperative visualization of 3D cerebral angiograms are described. The implementation is practical in most neurosurgical centers worldwide, using easily available hardware and software. The method and concept have potential for further development.

  7. System for the Analysis and Visualization of Large 3D Anatomical Trees

    PubMed Central

    Yu, Kun-Chang; Ritman, Erik L.; Higgins, William E.

    2007-01-01

    Modern micro-CT and multi-detector helical CT scanners can produce high-resolution 3D digital images of various anatomical trees. The large size and complexity of these trees make it essentially impossible to define them interactively. Automatic approaches have been proposed for a few specific problems, but none of these approaches guarantee extracting geometrically accurate multi-generational tree structures. This paper proposes an interactive system for defining and visualizing large anatomical trees and for subsequent quantitative data mining. The system consists of a large number of tools for automatic image analysis, semi-automatic and interactive tree editing, and an assortment of visualization tools. Results are presented for a variety of 3D high-resolution images. PMID:17669390

  8. Suitability of online 3D visualization technique in oil palm plantation management

    NASA Astrophysics Data System (ADS)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Therefore, many studies focus on how to help this industry increase its productivity. To increase productivity, the management of oil palm plantations needs to be improved and strengthened. One way to help oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. The potential of this application is that it can help with fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations from the perspective of oil palm managers through interviews. The results of these interviews help identify which issues could be highlighted when implementing an online 3D visualization technique for oil palm plantation management.

  9. Estimating 3D gaze in physical environment: a geometric approach on consumer-level remote eye tracker

    NASA Astrophysics Data System (ADS)

    Wibirama, Sunu; Mahesa, Rizki R.; Nugroho, Hanung A.; Hamamoto, Kazuhiko

    2017-02-01

Remote eye trackers at consumer prices have been used for various applications on flat computer screens. Meanwhile, 3D gaze tracking in physical environments has been useful for visualizing gaze behavior, controlling robots, and assistive technology. Rather than affordable remote eye trackers, however, 3D gaze tracking in physical environments has typically been performed with corporate-level head-mounted eye trackers, limiting its practical usage to niche users. In this research, we propose a novel method to estimate 3D gaze using a consumer-level remote eye tracker. We implement a geometric approach to obtain the 3D point of gaze from the binocular lines of sight. Experimental results show that the proposed method yielded low errors of 3.47+/-3.02 cm, 3.02+/-1.34 cm, and 2.57+/-1.85 cm in the X, Y, and Z dimensions, respectively. The proposed approach may be used as a starting point for designing interaction methods in 3D physical environments.
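A common geometric approach to the problem the abstract describes is the midpoint method: the 3D point of gaze is taken as the midpoint of the shortest segment joining the two (possibly skew) binocular lines of sight. The sketch below assumes eye positions and gaze direction vectors are already available from the tracker; the paper's exact formulation may differ.

```python
def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gaze_point_3d(eye_l, dir_l, eye_r, dir_r):
    """Midpoint of the shortest segment between the two lines of sight.

    eye_l/eye_r: 3D eye positions; dir_l/dir_r: gaze direction vectors.
    """
    w0 = [p - q for p, q in zip(eye_l, eye_r)]
    a, b, c = _dot(dir_l, dir_l), _dot(dir_l, dir_r), _dot(dir_r, dir_r)
    d, e = _dot(dir_l, w0), _dot(dir_r, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # near-parallel lines of sight
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p = [x + s * u for x, u in zip(eye_l, dir_l)]   # closest point, left line
    q = [x + t * u for x, u in zip(eye_r, dir_r)]   # closest point, right line
    return [(x + y) / 2.0 for x, y in zip(p, q)]
```

For lines of sight that actually intersect, the midpoint coincides with the intersection; for skew lines it is a robust compromise estimate.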

  10. The Effect of 3D Visual Simulator on Children’s Visual Acuity - A Pilot Study Comparing Two Different Modalities

    PubMed Central

    Ide, Takeshi; Ishikawa, Mariko; Tsubota, Kazuo; Miyao, Masaru

    2013-01-01

Purpose: To evaluate the efficacy of two non-surgical interventions for vision improvement in children. Methods: A prospective, randomized pilot study comparing the fogging method and the use of a head-mounted 3D display. Subjects were children between 5 and 15 years old with normal best corrected visual acuity (BCVA) and up to -3D myopia. Subjects played a video game as near-point work and then received one of the two treatments. Measurements of uncorrected far visual acuity (UCVA), refraction with an autorefractometer, and subjective accommodative amplitude were taken 3 times: at baseline, after the near work, and after the treatment. Results: Both methods, applied after near work, improved UCVA. The head-mounted 3D display group showed significant improvement in UCVA, resulting in better UCVA than at baseline. The fogging group showed improvement in subjective accommodative amplitude. While the 3D display group did not show a change in refraction, the fogging group's myopic refraction increased significantly, indicating a myopic change of the eyes after near work and treatment. Discussion: Despite our lack of clear knowledge of the mechanisms, both methods improved UCVA after the treatments. The improvement in UCVA was not correlated with the measured refraction values. Conclusion: UCVA after near work can be improved by alternating near and distant accommodation through fogging and 3D image viewing, although to different degrees. Further investigation of the mechanisms of improvement and their clinical significance is warranted. PMID:24222810

  11. Analysis and 3D visualization of structures of animal brains obtained from histological sections

    NASA Astrophysics Data System (ADS)

    Forero-Vargas, Manuel G.; Fuentes, Veronica; Lopez, D.; Moscoso, A.; Merchan, Miguel A.

    2002-11-01

This paper presents a new application for the analysis of histological sections and their 3D visualization. The process is performed in a few steps. First, a manual process is necessary to determine the regions of interest, including image digitization, drawing of borders, and alignment of all images. Then, a reconstruction process is performed. After sampling the contour, the structure of interest is displayed. The application is experimentally validated, and some results on histological sections of rodent brains (hamster and rat) are shown.

  12. Optimization of site characterization and remediation methods using 3-D geoscience modeling and visualization techniques

    SciTech Connect

    Hedegaard, R.F.; Ho, J.; Eisert, J.

    1996-12-31

Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated-zone spill sites at two Superfund (CERCLA) sites (military installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS, Wavefront's Data Visualizer, and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health-risk thresholds. The developed methodology allows rapid data inspection for decisions, so that the characterization process and remedial action design are optimized. By using 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based either on 'linked' parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited and excavation costs are estimated using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained and the enhanced visualization capability, which is a desirable medium for communicating with both technical staff and nontechnical audiences.
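The volume-above-threshold calculation mentioned in the abstract reduces, on a kriged voxel grid, to counting cells whose estimated concentration meets the health-risk threshold and multiplying by the voxel volume. This is a minimal sketch with a hypothetical nested-list grid layout; the actual LYNX GMS workflow is more elaborate.

```python
def plume_volume(grid, threshold, voxel_volume_m3):
    """Volume of material whose kriged concentration meets or exceeds
    a health-risk threshold.

    grid: nested lists [nx][ny][nz] of concentrations (e.g. mg/kg).
    voxel_volume_m3: volume represented by one grid cell, in cubic metres.
    """
    n_hot = sum(
        1
        for plane in grid
        for row in plane
        for value in row
        if value >= threshold
    )
    return n_hot * voxel_volume_m3

# Toy 2x2x2 grid: three voxels at or above 10 mg/kg, 5 m^3 per voxel.
grid = [[[2.0, 12.0], [9.9, 10.0]], [[0.0, 0.0], [25.0, 3.0]]]
```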

  13. Hierarchical storage and visualization of real-time 3D data

    NASA Astrophysics Data System (ADS)

    Parry, Mitchell; Hannigan, Brendan; Ribarsky, William; Shaw, Christopher D.; Faust, Nickolas L.

    2001-08-01

In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large-scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. Results are presented for the specific application of 3D Doppler radar data displayed over photo-textured terrain height fields.
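Deriving levels of detail within a hierarchy, as the abstract describes, is commonly done by recursively averaging blocks of the finer level. The sketch below is a simplified 2D, power-of-two version (the paper's structure is a global geospatial hierarchy, which this does not attempt to reproduce).

```python
def coarsen(field):
    """One level of detail coarser: average non-overlapping 2x2 blocks."""
    n = len(field)
    return [
        [
            (field[2 * i][2 * j] + field[2 * i][2 * j + 1]
             + field[2 * i + 1][2 * j] + field[2 * i + 1][2 * j + 1]) / 4.0
            for j in range(n // 2)
        ]
        for i in range(n // 2)
    ]

def build_pyramid(field):
    """Full level-of-detail pyramid, finest level first, down to one cell.

    Assumes a square field whose side is a power of two.
    """
    levels = [field]
    while len(levels[-1]) > 1:
        levels.append(coarsen(levels[-1]))
    return levels
```

A renderer can then pick a level per region based on screen-space error or a per-frame time budget, which is the role the time budgets play in the paper's data-management components.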

  14. Image processing and 3D visualization in the interpretation of patterned injury of the skin

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1995-09-01

The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations into data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.

  15. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text contributes to the learning process of 13- and 14-year-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  16. Web-Based 3D Technology for Scenario Authoring and Visualization: The Savage Project

    DTIC Science & Technology

    2001-01-01

Prescod, 2001). XML provides numerous benefits for extensibility and componentization. It is also important to note that XML forms the infrastructure...generating battlespace terrain, to include representation of built-up areas and vegetation cover, for use in the Web3D environment. § Automating

  17. UCVM: An Open Source Software Package for Querying and Visualizing 3D Velocity Models

    NASA Astrophysics Data System (ADS)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2015-12-01

Three-dimensional (3D) seismic velocity models provide foundational data for ground motion simulations that calculate the propagation of earthquake waves through the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) package for both Linux and OS X. This unique framework provides a cohesive way of querying and visualizing 3D models. UCVM v14.3.0 supports many Southern California velocity models, including CVM-S4, CVM-H 11.9.1, and CVM-S4.26. The last model was derived from 26 full-3D tomographic iterations on CVM-S4. Recently, UCVM has been used to deliver a prototype of a new 3D model of central California (CCA), also based on full-3D tomographic inversions. UCVM was used to provide initial plots of this model and will be used to deliver CCA to users when the model is publicly released. Visualizing models is also possible with UCVM. Integrated within the platform are plotting utilities that can generate 2D cross-sections, horizontal slices, and basin depth maps. UCVM can also export models in NetCDF format for easy import into IDV and ParaView. UCVM has also been prototyped to export models that are compatible with IRIS' new Earth Model Collaboration (EMC) visualization utility. This capability allows user-specified horizontal slices and cross-sections to be plotted in the same 3D Earth space. UCVM was designed to help a wide variety of researchers. It is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. It is also used to provide the initial input to SCEC's CyberShake platform. For those interested in specific data points, the software framework makes it easy to extract P and S wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth.
Also included in the last release was the ability to add small
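The "common query interface" idea (ask any model for material properties at a location and depth) can be illustrated with a toy gridded model. Everything here is a hypothetical stand-in: the real UCVM queries heterogeneous community models through a C library, not a Python class, and uses interpolation rather than nearest-node lookup.

```python
class GriddedVelocityModel:
    """Toy cubic-grid model returning (vp, vs, density) by nearest node."""

    def __init__(self, origin, spacing, vp, vs, density):
        self.origin = origin            # (x0, y0, z0) in metres
        self.spacing = spacing          # grid step in metres
        self.vp, self.vs, self.density = vp, vs, density  # [ix][iy][iz]
        self.n = len(vp)                # cubic n x n x n grid assumed

    def _index(self, value, axis):
        # Nearest grid node along one axis, clamped to the model extent.
        i = round((value - self.origin[axis]) / self.spacing)
        return max(0, min(self.n - 1, i))

    def query(self, x, y, z):
        """Return (vp, vs, density) at the node nearest to (x, y, z)."""
        ix, iy, iz = self._index(x, 0), self._index(y, 1), self._index(z, 2)
        return (self.vp[ix][iy][iz], self.vs[ix][iy][iz],
                self.density[ix][iy][iz])

# Toy 2x2x2 model with 1 km node spacing; velocities increase with depth.
vp = [[[5000.0 + 100.0 * iz for iz in range(2)] for _ in range(2)] for _ in range(2)]
vs = [[[3000.0 + 100.0 * iz for iz in range(2)] for _ in range(2)] for _ in range(2)]
rho = [[[2700.0 for _ in range(2)] for _ in range(2)] for _ in range(2)]
model = GriddedVelocityModel((0.0, 0.0, 0.0), 1000.0, vp, vs, rho)
```

A mesh generator for a wave propagation code would simply call `query` once per mesh node.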

  18. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

3D imaging has a significant impact on many challenges in the life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e., they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. 
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image
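One simple way to find colocalized m/z values, as mentioned in the pipeline above, is to correlate ion images pixel-by-pixel and keep the m/z channels whose image correlates strongly with a reference channel. The Pearson correlation used here, and the `ion_images` layout, are illustrative assumptions; the pipeline's actual colocalization measure is not specified in the abstract.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equally sized intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def colocalized_mz(ion_images, reference_mz, min_r=0.9):
    """Return m/z values whose flattened ion image correlates with the
    image of `reference_mz` at or above `min_r`.

    ion_images: dict mapping m/z -> flattened intensity list.
    """
    ref = ion_images[reference_mz]
    return sorted(
        mz for mz, img in ion_images.items()
        if mz != reference_mz and pearson(ref, img) >= min_r
    )
```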

  19. Interactive Visualization of 3-D Mantle Convection Extended Through AJAX Applications

    NASA Astrophysics Data System (ADS)

    McLane, J. C.; Czech, W.; Yuen, D.; Greensky, J.; Knox, M. R.

    2008-12-01

We have designed a new software system for real-time interactive visualization of results taken directly from large-scale simulations of 3-D mantle convection and other large-scale simulations. This approach allows for intense visualization sessions lasting a couple of hours, as opposed to storing massive amounts of data in a storage system. Our data sets consist of 3-D data for volume rendering with over 10 million unknowns at each timestep. Large-scale visualization on a display wall holding around 13 million pixels has already been accomplished, with extension to hand-held devices such as the OQO, the Nokia N800, and recently the iPhone. We are developing web-based software in Java to extend the use of this system across long distances. The software is aimed at creating an interactive and functional application capable of running on multiple browsers by taking advantage of two AJAX-enabled web frameworks: Echo2 and Google Web Toolkit. The software runs in two modes, allowing a user either to control an interactive session or to observe a session controlled by another user. The modular build of the system allows components to be swapped out for new ones, so that other forms of visualization can be accommodated, such as molecular dynamics in mineral physics or 2-D data sets from lithospheric regional models.

  20. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software are usually related to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of the current image fusion visualization found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire that included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, integrating an RGB or RB color-depth encoding in the image fusion improves both perception and intuitiveness.
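An RB color-depth encoding of the kind evaluated above can be sketched as a linear ramp from red (near) to blue (far) over the depth range of the fused 3D structure. This is a minimal illustrative scheme; the paper's actual RGB and RB encodings are not specified in the abstract.

```python
def depth_to_rb(depth, near, far):
    """Encode a depth value as an (R, G, B) tuple: pure red at `near`,
    pure blue at `far`, linearly blended in between and clamped outside.
    """
    t = (depth - near) / (far - near)
    t = max(0.0, min(1.0, t))                    # clamp to [0, 1]
    return (round(255 * (1.0 - t)), 0, round(255 * t))
```

Applying this per-voxel (or per-vessel-centerline point) before overlaying the 3D model on the X-ray gives the viewer an immediate depth cue without occluding the 2D image.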

  1. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  2. Open source 3D visualization and interaction dedicated to hydrological models

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2014-05-01

Climate change and surface urbanization strongly modify the hydrological cycle in urban areas, increasing the consequences of extreme events such as floods or droughts. These issues led to the development of the Multi-Hydro model at the Ecole des Ponts ParisTech (A. Giangola-Murzyn et al., 2012). This fully distributed model computes the hydrological response of urban and peri-urban areas. Unfortunately, such models are seldom user-friendly. Generating the inputs before launching a new simulation is usually a tricky task, and understanding and interpreting the outputs remain specialist tasks not accessible to the wider public. The MH-AssimTool was developed to overcome these issues. To enable an easier and improved understanding of the model outputs, we decided to convert the raw output data (grid files in ASCII format) to a 3D display. Some commercial models provide 3D visualization, but because of the cost of their licenses, tools of this kind may not be accessible to the most concerned stakeholders. We are therefore developing a new tool based on C++ for the computation, Qt for the graphical user interface, QGIS for the geographical side, and OpenGL for the 3D display. All these languages and libraries are open source and multi-platform. We will discuss some preprocessing issues in the data conversion from 2.5D to 3D. The GIS data are 2.5D (i.e., a 2D polygon plus one height), and their transformation to a 3D display involves a number of algorithms. For example, to visualize one building in 3D, each point requires coordinates and an elevation consistent with the topography; furthermore, new points have to be created to represent the walls. Finally, the interactions between the model and stakeholders through this new interface, and how this helps convert a research tool into an efficient operational decision tool, will be discussed.
This ongoing research on the improvement of the visualization methods is supported by the
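The 2.5D-to-3D building conversion described above (a footprint polygon plus one height, with new points created for the walls) can be sketched as a simple extrusion. The vertex/quad layout here is a hypothetical choice for illustration; a real pipeline would also triangulate the roof and sample the terrain per vertex.

```python
def extrude_building(footprint, base_z, height):
    """Turn a 2.5D building (2D footprint polygon + one height) into a
    list of 3D wall quads suitable for OpenGL-style display.

    footprint: list of (x, y) vertices, assumed counter-clockwise.
    base_z: ground elevation from the topography; height: building height.
    Returns one quad (four (x, y, z) points) per footprint edge.
    """
    n = len(footprint)
    walls = []
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([
            (x0, y0, base_z),              # bottom edge, on the terrain
            (x1, y1, base_z),
            (x1, y1, base_z + height),     # duplicated points at roof level
            (x0, y0, base_z + height),
        ])
    return walls
```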

  3. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for the 3D visualization of spatial data and interaction with them via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we carried out a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed and used, based on JavaScript, PHP, and X3DOM together with the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  4. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

The interpretation of radiological images is routine, but it remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into the 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT scans or MRI. This software provides real-time 3D surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step in the future development of augmented reality and surgical simulation systems.

  5. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing.

    PubMed

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.

  6. Multispectral photon counting integral imaging system for color visualization of photon limited 3D scenes

    NASA Astrophysics Data System (ADS)

    Moon, Inkyu

    2014-06-01

    This paper provides an overview of a color photon-counting integral imaging system that uses Bayer elemental images for 3D visualization of photon-limited scenes. A color image sensor with a Bayer color filter array, i.e., a red, a green, or a blue filter in a repeating pattern, captures an elemental image set of a photon-limited three-dimensional (3D) scene. It is assumed that the observed photon count in each channel (red, green, or blue) follows Poisson statistics. The 3D scene is reconstructed in Bayer format by applying a computational geometrical ray back-propagation algorithm and a parametric maximum-likelihood estimator to the photon-limited Bayer elemental images. Finally, several standard demosaicing algorithms are applied to convert the 3D reconstruction from the Bayer format into an RGB-per-pixel format. Experimental results demonstrate that the gradient-corrected linear interpolation technique achieves better performance in terms of acceptable PSNR and lower computational complexity.
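As a rough illustration of the photon-counting model described above, the sketch below simulates Poisson-distributed counts for a toy scene and forms the parametric maximum-likelihood estimate, which for i.i.d. Poisson observations of the same scene point reduces to the per-pixel sample mean. The scene, photon budget, and image sizes are invented for the example, and the geometrical ray back-propagation step is omitted.

```python
import numpy as np

def photon_limited_capture(irradiance, n_photons, rng):
    # Observed counts are Poisson with mean proportional to the scene
    # irradiance, normalized so the expected total equals the photon budget.
    rate = n_photons * irradiance / irradiance.sum()
    return rng.poisson(rate)

def ml_reconstruct(elemental_images):
    # For i.i.d. Poisson observations, the maximum-likelihood estimate
    # of the rate is the per-pixel sample mean across elemental images.
    return np.mean(elemental_images, axis=0)

rng = np.random.default_rng(0)
scene = np.abs(rng.normal(1.0, 0.2, (32, 32)))       # toy irradiance map
shots = np.stack([photon_limited_capture(scene, 50_000, rng)
                  for _ in range(25)])               # 25 elemental images
estimate = ml_reconstruct(shots)
```

Averaging the photon-limited shots recovers the underlying rate map far more faithfully than any single shot, which is the statistical basis for reconstructing photon-starved 3D scenes.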

  7. Steady-State VEP-Based Brain-Computer Interface Control in an Immersive 3D Gaming Environment

    NASA Astrophysics Data System (ADS)

    Lalor, E. C.; Kelly, S. P.; Finucane, C.; Burke, R.; Smith, R.; Reilly, R. B.; McDarby, G.

    2005-12-01

    This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phase-reversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed.
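A minimal sketch of SSVEP-style binary feature extraction: compare spectral power at the two checkerboard reversal frequencies and pick the larger. The paper's exact spectral estimators and stimulus frequencies are not given in this abstract; the sampling rate and the 17/20 Hz targets below are assumptions for illustration.

```python
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed)

def band_power(x, freq, fs=FS):
    # Periodogram power at the FFT bin nearest `freq`.
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

def classify_ssvep(eeg, f_left, f_right):
    # Binary decision: which stimulus frequency dominates the spectrum.
    return 'left' if band_power(eeg, f_left) > band_power(eeg, f_right) else 'right'

# toy 2 s trial: the subject attends the 17 Hz target, plus broadband noise
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
trial = np.sin(2 * np.pi * 17 * t) + 0.5 * rng.normal(size=t.size)
decision = classify_ssvep(trial, f_left=17, f_right=20)
```

Even with substantial noise, the power at the attended frequency dominates, which is why SSVEP control remains robust amid distracting game visuals.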

  8. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used for examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  9. Autostereoscopic displays for visualization of urban environments

    NASA Astrophysics Data System (ADS)

    Markov, Vladimir B.; Kupiec, Stephen A.; Zakhor, Avideh; Hooper, Darrel; Saini, Gurdial S.

    2006-10-01

    Two approaches to designing autostereoscopic displays capable of providing collaborative viewing of real-time 3D scenery are presented and discussed. Both techniques provide multiscopic "look around" capabilities and are applicable to situation rooms or mobile command centers. In particular, we discuss the prospective use of these displays for interactive visualization of detailed three-dimensional models of urban areas, and the specific demands associated with managing and rendering large volumes of highly detailed information. The latest advances in scanning, survey, and registration of urban areas have provided a wealth of detailed three-dimensional data and imagery. Recent events have shown a severe need for systems capable of high-level 3D visualization to address the homeland security challenges posed by terrorist actions and natural disasters within urban areas, as well as to support military operations in urban terrain (MOUT). The capacity to visualize sightlines, airflow, flooding, and traffic in real-time 3D within dense urban environments is increasingly critical for military and civilian authorities, as well as urban planners and city managers. Development of high-quality 3D imaging systems is also critical for areas such as medical data imaging, the gaming industry, mechanical design, and rapid prototyping.

  10. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments, and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  11. 3-D prestack Kirchhoff depth migration: From prototype to production in a massively parallel processor environment

    SciTech Connect

    Chang, H.; Solano, M.; VanDyke, J.P.; McMechan, G.A.; Epili, D.

    1998-03-01

    Portable, production-scale 3-D prestack Kirchhoff depth migration software capable of full-volume imaging has been successfully implemented and applied to a six-million trace (46.9 Gbyte) marine data set from a salt/subsalt play in the Gulf of Mexico. Velocity model building and updates use an image-driven strategy and were performed in a Sun Sparc environment. Images obtained by 3-D prestack migration after three velocity iterations are substantially better focused and reveal drilling targets that were not visible in images obtained from conventional 3-D poststack time migration. Amplitudes are well preserved, so anomalies associated with known reservoirs conform to the petrophysical predictions. Prototype development was on an 8-node Intel iPSC860 computer; the production version was run on an 1824-node Intel Paragon computer. The code has been successfully ported to CRAY (T3D) and Unix workstation (PVM) environments.

  12. PointCloudXplore: a visualization tool for 3D gene expression data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V. E.; Fowlkes, Charles C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of the abstract data views currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
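The brushing-and-linking idea described above can be sketched with plain arrays: a brush is a boolean mask defined in one view, and linking applies that same mask to the data shown in every other view. The gene names and expression values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy expression matrix: rows = embryo cells, columns = per-gene levels
expr = rng.random((500, 4))
genes = ['eve', 'ftz', 'hb', 'kr']          # hypothetical gene columns

# brushing: select cells with high 'eve' expression in one view
brush = expr[:, genes.index('eve')] > 0.8

# linking: the same cell subset is highlighted in every other view,
# e.g. its 'ftz' values in an abstract gene-space display
linked_ftz = expr[brush, genes.index('ftz')]
```

Because the mask indexes cells rather than pixels, the selection carries over identically between physical (spatial) views and abstract displays such as parallel coordinates.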

  13. A simple, fast, and repeatable survey method for underwater visual 3D benthic mapping and monitoring.

    PubMed

    Pizarro, Oscar; Friedman, Ariell; Bryson, Mitch; Williams, Stefan B; Madin, Joshua

    2017-03-01

    Visual 3D reconstruction techniques provide rich ecological and habitat structural information from underwater imagery. However, an unaided swimmer or diver struggles to navigate precisely over larger extents with consistent image overlap needed for visual reconstruction. While underwater robots have demonstrated systematic coverage of areas much larger than the footprint of a single image, access to suitable robotic systems is limited and requires specialized operators. Furthermore, robots are poor at navigating hydrodynamic habitats such as shallow coral reefs. We present a simple approach that constrains the motion of a swimmer using a line unwinding from a fixed central drum. The resulting motion is the involute of a circle, a spiral-like path with constant spacing between revolutions. We test this survey method at a broad range of habitats and hydrodynamic conditions encircling Lizard Island in the Great Barrier Reef, Australia. The approach generates fast, structured, repeatable, and large-extent surveys (~110 m² in 15 min) that can be performed with two people and are superior to the commonly used "mow the lawn" method. The amount of image overlap is a design parameter, allowing for surveys that can then be reliably used in an automated processing pipeline to generate 3D reconstructions, orthographically projected mosaics, and structural complexity indices. The individual images or full mosaics can also be labeled for benthic diversity and cover estimates. The survey method we present can serve as a standard approach to repeatedly collecting underwater imagery for high-resolution 2D mosaics and 3D reconstructions covering spatial extents much larger than a single image footprint without requiring sophisticated robotic systems or lengthy deployment of visual guides. As such, it opens up cost-effective novel observations to inform studies relating habitat structure to ecological processes and biodiversity at scales and spatial resolutions not readily
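The geometry behind the method is easy to check numerically: the involute of a circle of radius a is x(t) = a(cos t + t sin t), y(t) = a(sin t − t cos t), and the spacing between successive revolutions approaches the drum circumference 2πa, which is what keeps image overlap nearly constant. A small sketch (the drum radius is invented):

```python
import math

def involute_point(a, t):
    # Point on the involute of a circle of radius `a` at unwinding angle t:
    # the path traced by the end of a taut line unwinding from the drum.
    x = a * (math.cos(t) + t * math.sin(t))
    y = a * (math.sin(t) - t * math.cos(t))
    return x, y

def radial_distance(a, t):
    x, y = involute_point(a, t)
    return math.hypot(x, y)

# survey-design check: spacing between successive revolutions approaches
# the drum circumference 2*pi*a, giving near-constant track spacing
a = 0.05                # 5 cm drum radius (hypothetical)
t = 40 * math.pi        # after 20 revolutions
spacing = radial_distance(a, t + 2 * math.pi) - radial_distance(a, t)
```

Choosing the drum radius therefore directly sets the track spacing, and hence the image overlap, of the whole survey.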

  14. 3D Sound Interactive Environments for Blind Children Problem Solving Skills

    ERIC Educational Resources Information Center

    Sanchez, Jaime; Saenz, Mauricio

    2006-01-01

    Audio-based virtual environments have been increasingly used to foster cognitive and learning skills. A number of studies have also highlighted that the use of technology can help learners to develop effective skills such as motivation and self-esteem. This study presents the design and usability of 3D interactive environments for children with…

  15. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…

  16. The Cognitive Apprenticeship Theory for the Teaching of Mathematics in an Online 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Paraskeva, Fotini

    2013-01-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective.…

  17. TractRender: a new generalized 3D medical image visualization and output platform

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Tsao, Sinchai; Gajawelli, Niharika; Law, Meng; Lepore, Natasha

    2015-01-01

    Diffusion MRI provides not only voxelized diffusion characteristics but also the potential to delineate neuronal fiber paths through tractography. There is a dearth of flexible open-source tractography software for visualizing these complicated 3D structures. Moreover, rendering these structures with different shading, lighting, and representations results in vastly different graphical feels. In addition, the ability to output these objects in various formats increases the utility of such a platform. We have created TractRender, which leverages OpenGL features through Matlab, allowing for maximum ease of use while still maintaining the flexibility of custom scene rendering.

  18. Real-time 3D reconstruction for collision avoidance in interventional environments.

    PubMed

    Ladikos, Alexander; Benhimane, Selim; Navab, Nassir

    2008-01-01

    With the increased presence of automated devices such as C-arms and medical robots and the introduction of a multitude of surgical tools, navigation systems, and patient monitoring devices, collision avoidance has become an issue of practical value in interventional environments. In this paper, we present a real-time 3D reconstruction system for interventional environments which aims at predicting collisions by building a 3D representation of all the objects in the room. The 3D reconstruction is used to determine whether other objects are in the working volume of the device and to alert the medical staff before a collision occurs. In the case of C-arms, this allows faster rotational and angular movement, which could, for instance, be used in 3D angiography to obtain a better reconstruction of contrasted vessels. The system also prevents staff from unknowingly entering the working volume of a device. This is of relevance in complex environments with many devices. The recovered 3D representation also opens the path to many new applications utilizing this data, such as workflow analysis, 3D video generation, or interventional room planning. To validate our claims, we performed several experiments with a real C-arm that show the validity of the approach. This system is currently being transferred to an interventional room in our university hospital.
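The core alert logic, testing whether any reconstructed point lies inside a device's working volume, can be sketched with an axis-aligned bounding-box check. The volume bounds and points below are hypothetical; the actual system reconstructs full 3D occupancy rather than a sparse point list.

```python
import numpy as np

def in_working_volume(points, vol_min, vol_max):
    # Flag reconstructed 3D points (N x 3) that fall inside the device's
    # axis-aligned working volume; any hit should trigger an alert.
    inside = np.all((points >= vol_min) & (points <= vol_max), axis=1)
    return inside.any(), inside

# hypothetical C-arm working volume in room coordinates (metres)
vol_min = np.array([-0.5, -0.5, 0.0])
vol_max = np.array([0.5, 0.5, 1.8])

cloud = np.array([[2.0, 0.1, 1.0],     # staff member, clear of the device
                  [0.2, -0.1, 1.2]])   # object inside the sweep volume
alert, mask = in_working_volume(cloud, vol_min, vol_max)
```

In practice the check would run per frame against the device's swept volume for its planned motion, not just its current pose.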

  19. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
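A toy version of the first compression scheme, assuming 4-bit color reduction followed by zlib over the concatenated color and depth planes. The frame sizes and the smooth synthetic gradient frame are invented for the example; real tele-immersive frames compress far less than this.

```python
import zlib
import numpy as np

def compress_frame(color, depth, bits=4):
    # Scheme-1 sketch: reduce each 8-bit color channel to `bits` bits,
    # then zlib-compress the reduced color and raw depth together.
    reduced = (color >> (8 - bits)).astype(np.uint8)
    return zlib.compress(reduced.tobytes() + depth.tobytes())

def decompress_frame(blob, shape, bits=4):
    raw = zlib.decompress(blob)
    n = shape[0] * shape[1] * 3
    color = (np.frombuffer(raw[:n], np.uint8) << (8 - bits)).reshape(*shape, 3)
    depth = np.frombuffer(raw[n:], np.uint16).reshape(shape)
    return color, depth

# smooth synthetic frame: a horizontal gradient (highly compressible)
grad = np.linspace(0, 255, 160).astype(np.uint8)
color = np.stack([np.tile(grad, (120, 1))] * 3, axis=-1)   # 120x160 RGB
depth = np.tile(grad.astype(np.uint16) * 16, (120, 1))     # 120x160 depth
blob = compress_frame(color, depth)
color2, depth2 = decompress_frame(blob, (120, 160))
```

Note the asymmetry the abstract implies: depth survives losslessly, while color pays a bounded quantization error in exchange for a much more compressible byte stream.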

  20. Visualization of high-density 3D graphs using nonlinear visual space transformations

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Garg, Pankaj; Machiraju, Vijay

    2002-03-01

    The real-world data distribution is seldom uniform. Clutter and sparsity commonly occur in visualization. Often, clutter results in overplotting, in which certain data items are not visible because other data items occlude them. Sparsity results in the inefficient use of the available display space. Common mechanisms to overcome this include reducing the amount of information displayed or using multiple representations with a varying amount of detail. This paper describes our experiments with Non-Linear Visual Space Transformations (NLVST). NLVST encompasses several innovative techniques: (1) employing a histogram for calculating the density of data distribution; (2) mapping the raw data values to a non-linear scale for stretching a high-density area; (3) tightening the sparse area to save the display space; (4) employing different color ranges of values on a non-linear scale according to the local density. We have applied NLVST to several web applications: market basket analysis, transactions observation, and IT search behavior analysis.
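Steps (1)-(3) amount to remapping raw values through a histogram-based empirical CDF, so dense value ranges are stretched across more display space and sparse ranges are tightened. A minimal sketch (bin count and data are invented, not taken from NLVST):

```python
import numpy as np

def density_stretch(values, bins=64):
    # Map raw values through the empirical CDF built from a histogram:
    # dense value ranges receive more display space, sparse ranges less.
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    cdf = np.concatenate([[0.0], cdf])
    return np.interp(values, edges, cdf)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 0.05, 1000),   # dense cluster
                       rng.uniform(5, 10, 20)])     # sparse tail
mapped = density_stretch(data)
```

The mapping is monotone, so ordering is preserved, while the dense cluster, which would occupy a sliver of a linear axis, expands to fill most of the [0, 1] display range.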

  1. Fast 3D visualization of endogenous brain signals with high-sensitivity laser scanning photothermal microscopy

    PubMed Central

    Miyazaki, Jun; Iida, Tadatsune; Tanaka, Shinji; Hayashi-Takagi, Akiko; Kasai, Haruo; Okabe, Shigeo; Kobayashi, Takayoshi

    2016-01-01

    A fast, high-sensitivity photothermal microscope was developed by implementing a spatially segmented balanced detection scheme into a laser scanning microscope. We confirmed a 4.9 times improvement in signal-to-noise ratio in the spatially segmented balanced detection compared with that of conventional detection. The system demonstrated simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 μs. The fluorescence image visualized neurons expressing yellow fluorescence proteins, while the photothermal signal detected endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of various features such as blood cells and fine structures probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. PMID:27231615

  2. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Hongen Liao; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without using special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate IP/IV elemental images. The images can be viewed from each viewpoint within a referential viewing area, and the elemental images are reconstructed from rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen that is placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear three-dimensional even when viewed from a distance.

  3. In vivo 3D visualization of peripheral circulatory system using linear optoacoustic array

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Brecht, Hans-Peter; Fronheiser, Matthew P.; Nadvoretsky, Vyacheslav; Su, Richard; Conjusteau, Andre; Oraevsky, Alexander A.

    2010-02-01

    In this work we modified the light illumination of the laser optoacoustic (OA) imaging system to improve the 3D visualization of human forearm vasculature. The computer modeling demonstrated that the new illumination design, which features laser beams converging on the surface of the skin in the imaging plane of the probe, provides superior OA images in comparison to the images generated by illumination with parallel laser beams. We also developed a procedure for vein/artery differentiation based on OA imaging with 690 nm and 1080 nm laser wavelengths. The procedure includes statistical analysis of the intensities of OA images of the neighboring blood vessels. Analysis of the OA images generated by computer simulation of a human forearm illuminated at 690 nm and 1080 nm resulted in successful differentiation of veins and arteries. In vivo scanning of a human forearm provided a high-contrast 3D OA image of the forearm skin and a superficial blood vessel. The blood vessel image contrast was further enhanced after it was automatically traced using the developed software. The software also allowed evaluation of the effective blood vessel diameter at each step of the scan. We propose that the developed 3D OA imaging system can be used during preoperative mapping of forearm vessels that is essential for hemodialysis treatment.

  4. Scientific rotoscoping: a morphology-based method of 3-D motion analysis and visualization.

    PubMed

    Gatesy, Stephen M; Baier, David B; Jenkins, Farish A; Dial, Kenneth P

    2010-06-01

    Three-dimensional skeletal movement is often impossible to accurately quantify from external markers. X-ray imaging more directly visualizes moving bones, but extracting 3-D kinematic data is notoriously difficult from a single perspective. Stereophotogrammetry is extremely powerful if bi-planar fluoroscopy is available, yet implantation of three radio-opaque markers in each segment of interest may be impractical. Herein we introduce scientific rotoscoping (SR), a new method of motion analysis that uses articulated bone models to simultaneously animate and quantify moving skeletons without markers. The three-step process is described using examples from our work on pigeon flight and alligator walking. First, the experimental scene is reconstructed in 3-D using commercial animation software so that frames of undistorted fluoroscopic and standard video can be viewed in their correct spatial context through calibrated virtual cameras. Second, polygonal models of relevant bones are created from CT or laser scans and rearticulated into a hierarchical marionette controlled by virtual joints. Third, the marionette is registered to video images by adjusting each of its degrees of freedom over a sequence of frames. SR outputs high-resolution 3-D kinematic data for multiple, unmarked bones and anatomically accurate animations that can be rendered from any perspective. Rather than generating moving stick figures abstracted from the coordinates of independent surface points, SR is a morphology-based method of motion analysis deeply rooted in osteological and arthrological data.

  5. On 3D radar data visualization and merging with camera images

    NASA Astrophysics Data System (ADS)

    Kjellgren, J.

    2008-10-01

    The possibilities of visually supporting the interpretation of spatial 3D radar data, both with and without camera images, are studied. Radar measurements and camera pictures of a person are analyzed. First, the received signal amplitudes, distributed in three dimensions (spherical range and two angles), are fed to a selection procedure based on amplitude and the scene volume of interest. A number of resolution cells then form images based on a volume representation that depends on amplitude and location. The total image is then formed by projecting the images of all the cells onto an imaging plane. Different images of a radar data set are formed for different projection planes. The images were studied to find aspect angles that efficiently convey the target information of most interest; such a search may be performed by rotating the target data around a suitable axis. In addition, a visualization method for presenting radar data merged with a camera picture has been developed. An aim in this part of the work has been to preserve the high information content of the camera image in the merged image. From the 3D radar measurements, the radar data may be projected onto the imaging plane of a camera with an arbitrary viewing center. This possibility is demonstrated in examples with one camera looking at the target scene from the radar location and another camera looking from an aspect angle differing by 45° from that of the radar.
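The merging step, projecting a radar resolution cell given in spherical coordinates onto a camera's imaging plane, can be sketched with a pinhole model. The camera intrinsics and the co-located-camera geometry below are assumptions for illustration, not values from the study.

```python
import numpy as np

def spherical_to_cartesian(rng_m, az, el):
    # Convert a radar cell (range [m], azimuth and elevation [rad])
    # to Cartesian coordinates with z along the radar boresight.
    return np.array([rng_m * np.cos(el) * np.sin(az),
                     rng_m * np.sin(el),
                     rng_m * np.cos(el) * np.cos(az)])

def project(point, f=800.0, cx=320.0, cy=240.0):
    # Pinhole projection onto the camera imaging plane (assumed intrinsics).
    x, y, z = point
    return f * x / z + cx, f * y / z + cy

# a cell 50 m ahead, slightly off boresight; camera co-located with the radar
p = spherical_to_cartesian(50.0, np.deg2rad(2.0), np.deg2rad(1.0))
u, v = project(p)
```

For a camera at a different viewpoint (e.g. the 45° offset case), the point would first be rotated and translated into that camera's frame before the same projection is applied.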

  6. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  7. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    SciTech Connect

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-03-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  8. 3D pattern of brain atrophy in HIV/AIDS visualized using tensor-based morphometry

    PubMed Central

    Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2011-01-01

    35% of HIV-infected patients have cognitive impairment, but the profile of HIV-induced brain damage is still not well understood. Here we used tensor-based morphometry (TBM) to visualize brain deficits and clinical/anatomical correlations in HIV/AIDS. To perform TBM, we developed a new MRI-based analysis technique that uses fluid image warping, and a new α-entropy-based information-theoretic measure of image correspondence, called the Jensen–Rényi divergence (JRD). Methods: 3D T1-weighted brain MRIs of 26 AIDS patients (CDC stage C and/or 3 without HIV-associated dementia; 47.2 ± 9.8 years; 25M/1F; CD4+ T-cell count: 299.5 ± 175.7/µl; log10 plasma viral load: 2.57 ± 1.28 RNA copies/ml) and 14 HIV-seronegative controls (37.6 ± 12.2 years; 8M/6F) were fluidly registered by applying forces throughout each deforming image to maximize the JRD between it and a target image (from a control subject). The 3D fluid registration was regularized using the linearized Cauchy–Navier operator. Fine-scale volumetric differences between diagnostic groups were mapped. Regions were identified where brain atrophy correlated with clinical measures. Results: Severe atrophy (~15–20% deficit) was detected bilaterally in the primary and association sensorimotor areas. Atrophy of these regions, particularly in the white matter, correlated with cognitive impairment (P=0.033) and CD4+ T-lymphocyte depletion (P=0.005). Conclusion: TBM facilitates 3D visualization of AIDS neuropathology in living patients scanned with MRI. Severe atrophy in frontoparietal and striatal areas may underlie early cognitive dysfunction in AIDS patients, and may signal the imminent onset of AIDS dementia complex. PMID:17035049
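    As a rough illustration of the JRD measure named above: the Jensen–Rényi divergence of two discrete distributions is the Rényi entropy of their mixture minus the mixture of their Rényi entropies; for α in (0, 1) the Rényi entropy is concave, so the divergence is non-negative and vanishes only for identical distributions. This is a generic sketch on intensity histograms, not the paper's full fluid-registration functional, and the α value and test distributions are illustrative choices:

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Rényi entropy H_a(p) = log(sum p_i^a) / (1 - a) of a discrete distribution."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(p, q, w=0.5, alpha=0.5):
    """JRD: entropy of the mixture minus the mixture of entropies."""
    m = w * p + (1 - w) * q
    return renyi_entropy(m, alpha) - (w * renyi_entropy(p, alpha)
                                      + (1 - w) * renyi_entropy(q, alpha))

p = np.array([0.25, 0.25, 0.25, 0.25])   # uniform histogram
q = np.array([0.7, 0.1, 0.1, 0.1])       # peaked histogram
```

    In the registration setting, forces deform one image so as to drive a divergence of this family toward its extremum against the target image.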

  9. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Notably, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  10. Visualizing Earthquakes in '3D' using the IRIS Earthquake Browser (IEB) Website

    NASA Astrophysics Data System (ADS)

    Welti, R.; McQuillan, P. J.; Weertman, B. R.

    2012-12-01

    The distribution of earthquakes is often easier to interpret in 3D, but most 3D visualization tools require the installation of specialized software and some practice in their use. To reduce this barrier for students and the general public, a pseudo-3D seismicity viewer has been developed which runs in a web browser as part of the IRIS Earthquake Browser (IEB). IEB is an interactive map for viewing earthquake epicenters all over the world, and is composed of a Google map, HTML, JavaScript and a fast earthquake hypocenter web service. The web service accesses seismic data at IRIS from the early 1960s until present. Users can change the region, the number of events, and the depth and magnitude ranges to display. Earthquakes may also be viewed as a table, or exported to various formats. Predefined regions can be selected and zoomed to, and bookmarks generally preserve whatever region and settings are in effect when bookmarked, allowing the easy sharing of particular "scenarios" with other users. Plate boundaries can be added to the display. The 3DV viewer displays events for the currently-selected IEB region in a separate window. They can be rotated and zoomed, with a fast response for plots of up to several thousand events. Rotation can be done manually by dragging or automatically at a set rate, and tectonic plate boundaries turned on or off. 3DV uses a geographical projection algorithm provided by Gary Pavils and collaborators. It is written in HTML5, and is based on CanvasMol by Branislav Ulicny.; A region SE of Fiji, selected in IRIS Earthquake Browser. ; The same region as viewed in 3D Viewer.
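    The abstract does not detail the projection, but the basic step of turning hypocenters (latitude, longitude, depth) into plottable 3-D coordinates can be sketched with a simple spherical-Earth conversion. This is illustrative only, not the actual 3DV algorithm; the radius constant and the test event are assumptions:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius, spherical approximation

def hypocenter_to_xyz(lat_deg, lon_deg, depth_km):
    """Project an earthquake hypocenter onto Earth-centered Cartesian axes."""
    r = R_EARTH_KM - depth_km              # hypocenters lie below the surface
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

# A 10 km-deep event at the equator/prime meridian sits on the x axis.
x, y, z = hypocenter_to_xyz(0.0, 0.0, 10.0)
```

    Once projected, the point cloud can be rotated and zoomed entirely client-side, which is what gives the browser viewer its fast response for a few thousand events.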

  11. Modeling and 3-D Simulation of Biofilm Dynamics in Aqueous Environment

    NASA Astrophysics Data System (ADS)

    Wang, Qi

    2011-11-01

    We present a complex fluid model for biofilms growing in an aqueous environment. The modeling approach represents a new paradigm to develop models for biofilm-environment interaction that can be used to systematically incorporate refined chemical and physiological mechanisms. Special solutions of the model are presented and analyzed. 3-D numerical simulations in aqueous environment with emphasis on biofilm-ambient fluid interaction will be discussed in detail.

  12. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization packages ParaView and Vapor, which allow us to read and handle data of this size. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  13. Remote web-based 3D visualization of hydrological forecasting datasets.

    NASA Astrophysics Data System (ADS)

    van Meersbergen, Maarten; Drost, Niels; Blower, Jon; Griffiths, Guy; Hut, Rolf; van de Giesen, Nick

    2015-04-01

    As the possibilities for larger and more detailed simulations of geoscientific data expand, the need for smart solutions in data visualization grows as well. Large volumes of data should be quickly accessible from anywhere in the world without the need for transferring the simulation results. We aim to provide tools for both the processing and the handling of these large datasets. As an example, the eWaterCycle project (www.ewatercycle.org) aims to provide a running 14-day ensemble forecast to predict water-related stress around the globe. The large volumes of simulation results with uncertainty data that are generated through ensemble hydrological predictions provide a challenge for existing visualization solutions. One possible solution for this challenge lies in the use of web-enabled technology for visualization and analysis of these datasets. Web-based visualization provides an additional benefit in that it eliminates the need for any software installation and configuration and allows for the easy communication of research results between collaborating research parties. Providing interactive tools for the exploration of these datasets will not only help researchers analyze the data, but can also aid in the dissemination of the research results to the general public. In Vienna, we will present a working open source solution for remote visualization of large volumes of global geospatial data based on the proven open-source 3D web visualization software package Cesium (cesiumjs.org), the ncWMS software package provided by the Reading e-Science Centre and the WebGL and NetCDF standards.

  14. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    ERIC Educational Resources Information Center

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including redesign things, creativity thinking and the ability to…

  15. Robot navigation in cluttered 3-D environments using preference-based fuzzy behaviors.

    PubMed

    Shi, Dongqing; Collins, Emmanuel G; Dunlap, Damion

    2007-12-01

    Autonomous navigation systems for mobile robots have been successfully deployed for a wide range of planar ground-based tasks. However, very few counterparts of previous planar navigation systems were developed for 3-D motion, which is needed for both unmanned aerial and underwater vehicles. A novel fuzzy behavioral scheme for navigating an unmanned helicopter in cluttered 3-D spaces is developed. The 3-D navigation problem is decomposed into several identical 2-D navigation subproblems, each of which is solved by using preference-based fuzzy behaviors. Due to the shortcomings of vector summation during the fusion of the 2-D subproblems, instead of directly outputting steering subdirections by their own defuzzification processes, the intermediate preferences of the subproblems are fused to create a 3-D solution region, representing degrees of preference for the robot movement. A new defuzzification algorithm that steers the robot by finding the centroid of a 3-D convex region of maximum volume in the 3-D solution region is developed. A fuzzy speed-control system is also developed to ensure efficient and safe navigation. Substantial simulations have been carried out to demonstrate that the proposed algorithm can smoothly and effectively guide an unmanned helicopter through unknown and cluttered urban and forest environments.
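    The fusion idea described above (combine the subproblems' intermediate preferences, then defuzzify once, instead of summing each subproblem's separately defuzzified steering vector) might be sketched as follows. The preference shapes, angle grid, and threshold are invented for illustration and are not from the paper; fuzzy AND is taken as the pointwise minimum:

```python
import numpy as np

# Hypothetical preference curves over candidate steering angles (degrees)
# for two 2-D subproblems (e.g., a horizontal and a vertical plane).
angles = np.linspace(-90, 90, 37)
pref_h = np.exp(-((angles - 20) / 30.0) ** 2)   # prefers a right turn
pref_v = np.exp(-((angles + 10) / 40.0) ** 2)   # prefers a slight descent

# Fuse the intermediate preferences (min = fuzzy AND) into one joint
# preference surface rather than defuzzifying each subproblem on its own.
solution = np.minimum.outer(pref_h, pref_v)

# Defuzzify: centroid of the region whose preference exceeds a threshold.
mask = solution >= 0.5 * solution.max()
hh, vv = np.meshgrid(angles, angles, indexing='ij')
w = solution[mask]
heading_h = float(np.sum(hh[mask] * w) / np.sum(w))
heading_v = float(np.sum(vv[mask] * w) / np.sum(w))
```

    Fusing before defuzzification avoids the vector-summation artifact mentioned in the abstract, where two individually reasonable 2-D headings can sum to a direction neither subproblem prefers.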

  16. 3D Visualization of Hydrological Model Outputs For a Better Understanding of Multi-Scale Phenomena

    NASA Astrophysics Data System (ADS)

    Richard, J.; Schertzer, D. J. M.; Tchiguirinskaia, I.

    2014-12-01

    During the last decades, many hydrological models have been created to simulate extreme events or scenarios on catchments. The classical outputs of these models are 2D maps, time series or graphs, which are easily understood by scientists, but not so much by many stakeholders, e.g. mayors or local authorities, and the general public. One goal of the Blue Green Dream project is to create outputs that are adequate for them. To reach this goal, we decided to convert most of the model outputs into a unique 3D visualization interface that combines all of them. This conversion has to be performed with hydrological thinking to keep the information consistent with the context and the raw outputs. We focus our work on the conversion of the outputs of the Multi-Hydro (MH) model, which is physically based, fully distributed and has a GIS data interface. MH splits the urban water cycle into 4 components: rainfall, surface runoff, infiltration and drainage. To each of them corresponds a modeling module with specific inputs and outputs. The superimposition of all this information will highlight the model outputs and help to verify the quality of the raw input data. For example, the spatial and temporal variability of the rain generated by the rainfall module will be directly visible in 4D (3D + time) before running a full simulation. It is the same with the runoff module: because the result quality depends on the resolution of the rasterized land use, this will confirm (or not) the choice of cell size. As most of the inputs and outputs are GIS files, two main conversions will be applied to display the results in 3D. First, a conversion from vector files to 3D objects. For example, buildings are defined in 2D inside a GIS vector file. Each polygon can be extruded with a height to create volumes. The principle is the same for the roads, but an intrusion, instead of an extrusion, is done inside the topography file. The second main conversion is the raster
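    The vector-to-3D conversion described for buildings (extruding 2-D footprints by a height attribute) can be sketched with a hypothetical helper; the footprint coordinates and height below are made up for illustration:

```python
def extrude_footprint(footprint, height):
    """Extrude a 2-D building footprint (list of (x, y) vertices,
    counter-clockwise, unclosed) into a 3-D solid: one wall quad per
    edge plus a flat roof polygon at the given height."""
    n = len(footprint)
    walls = []
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    roof = [(x, y, height) for x, y in footprint]
    return walls, roof

# A 10 m x 6 m rectangular footprint extruded to 15 m.
walls, roof = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 15.0)
```

    Roads would go the other way: the same footprint geometry is intruded (carved) into the topography raster rather than extruded above it.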

  17. Going Virtual… or Not: Development and Testing of a 3D Virtual Astronomy Environment

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, L.; Speck, A.; Ding, N.; Baldridge, S.; Witzig, S.; Laffey, J.

    2013-04-01

    We present our preliminary results of a pilot study of students' knowledge transfer of an astronomy concept into a new environment. We also share our discoveries on what aspects of a 3D environment students consider motivational or discouraging for their learning. This study was conducted among 64 non-science major students enrolled in an astronomy laboratory course. During the course, students learned the concept and applications of Kepler's laws using a 2D interactive environment. Later in the semester, the students were placed in a 3D environment in which they were asked to conduct observations and to answer a set of questions pertaining to Kepler's laws of planetary motion. In this study, we were interested in observing, scrutinizing, and assessing students' behavior: from choices that they made while creating their avatars (virtual representations), to tools they chose to use, to their navigational patterns, to their levels of discourse in the environment. These helped us to identify what features of the 3D environment our participants found to be helpful and interesting and what tools created unnecessary clutter and distraction. The students' social behavior patterns in the virtual environment, together with their answers to the questions, helped us to determine how well they understood Kepler's laws, how well they could transfer the concepts to a new situation, and at what point a motivational tool such as a 3D environment becomes a disruption to constructive learning. Our findings confirmed that students construct deeper knowledge of a concept when they are fully immersed in the environment.

  18. Learning Patterns as Criterion for Forming Work Groups in 3D Simulation Learning Environments

    ERIC Educational Resources Information Center

    Maria Cela-Ranilla, Jose; Molías, Luis Marqués; Cervera, Mercè Gisbert

    2016-01-01

    This study analyzes the relationship between the use of learning patterns as a grouping criterion to develop learning activities in the 3D simulation environment at University. Participants included 72 Spanish students from the Education and Marketing disciplines. Descriptive statistics and non-parametric tests were conducted. The process was…

  19. Learning to Collaborate: Designing Collaboration in a 3-D Game Environment

    ERIC Educational Resources Information Center

    Hamalainen, Raija; Manninen, Tony; Jarvela, Sanna; Hakkinen, Paivi

    2006-01-01

    To respond to learning needs, Computer-Supported Collaborative Learning (CSCL) must provide instructional support. The particular focus of this paper is on designing collaboration in a 3-D virtual game environment intended to make learning more effective by promoting student opportunities for interaction. The empirical experiment eScape, which…

  20. Best Practices for Designing Online Learning Environments for 3D Modeling Curricula: A Delphi Study

    ERIC Educational Resources Information Center

    Mapson, Kathleen Harrell

    2011-01-01

    The purpose of this study was to develop an inventory of best practices for designing online learning environments for 3D modeling curricula. Due to the instructional complexity of three-dimensional modeling, few have sought to develop this type of course for online teaching and learning. Considering this, the study aimed to collectively aggregate…

  1. Physical Environment as a 3-D Textbook: Design and Development of a Prototype

    ERIC Educational Resources Information Center

    Kong, Seng Yeap; Yaacob, Naziaty Mohd; Ariffin, Ati Rosemary Mohd

    2015-01-01

    The use of the physical environment as a three-dimensional (3-D) textbook is not a common practice in educational facilities design. Previous researches documented that little progress has been made to incorporate environmental education (EE) into architecture, especially among the conventional designers who are often constrained by the budget and…

  2. Generation of a tumor spheroid in a microgravity environment as a 3D model of melanoma.

    PubMed

    Marrero, Bernadette; Messina, Jane L; Heller, Richard

    2009-10-01

    An in vitro 3D model was developed utilizing a synthetic microgravity environment to facilitate the study of cell interactions. 2D monolayer cell culture models have been successfully used to understand various cellular reactions that occur in vivo. There are some limitations to the 2D model that are apparent when compared to cells grown in a 3D matrix. For example, some proteins that are not expressed in a 2D model are found up-regulated in the 3D matrix. In this paper, we discuss techniques used to develop the first known large, free-floating 3D tissue model used to establish tumor spheroids. The bioreactor system known as the High Aspect Ratio Vessel (HARVs) was used to provide a microgravity environment. The HARVs promoted aggregation of keratinocytes (HaCaT) that formed a construct serving as scaffolding for the growth of mouse melanoma. Although there is an emphasis on building 3D models with the proper extracellular matrix and stroma, we were able to develop a model that excluded the use of Matrigel. Immunohistochemistry and apoptosis assays provided evidence that this 3D model supports B16.F10 cell growth, proliferation, and synthesis of extracellular matrix. Immunofluorescence showed that melanoma cells interact with one another, displaying observable cellular morphological changes. The goal of engineering a 3D tissue model is to collect new information about cancer development and develop new potential treatment regimens that can be translated to in vivo models while reducing the use of laboratory animals.

  3. 3D Online Visualization and Synergy of NASA A-Train Data using Google Earth

    NASA Astrophysics Data System (ADS)

    Chen, A.; Kempler, S. J.; Leptoukh, G. G.; Smith, P. M.

    2010-12-01

    Google Earth provides a convenient virtual 3D platform for organizing, visualizing, publishing, and synergizing Earth science data. This kind of platform is increasingly playing an important role in scientific research that involves geospatial data. NASA Goddard Earth Science (GES) Data and Information Service Center (DISC) has had a dedicated Google Earth-based scientific research program for several years. We have implemented numerous tools for a) visualizing two-, three- and four-dimensional Earth science data on Google Earth; b) visualizing and synergizing analyzed results derived from GES DISC’s online analysis system; and c) visualizing results derived from other standard web services (e.g. OGC WMS). All these implementations produce KMZ files that can be opened via the Google Earth client so that Earth science data are visualized on Google Earth. Google Earth can be used as both a client and a web browser plug-in. Currently, the Google Earth browser plug-in is integrated with GES DISC’s online analysis system as a virtual three-dimensional platform to facilitate three-dimensional online interactive data analysis and results visualization. Multiple Google Earth windows are available in one browser window for users to visualize, compare, and synergize online Earth science data. By utilizing the available GES DISC online system, users can interactively select and refine their data products of interest and then generate downloadable KMZ files. These KMZ files are automatically opened in the user’s Google Earth client. Google Earth is used to overlay and manipulate the contained data layers, thus providing the ability of data synergy and the inter-comparison and analysis of a wide variety of online scientific measurements. We illustrate our system design and implementation and demonstrate our operational system here. The work at GES DISC allows greater integration between online scientific data analysis systems and three-dimensional visualization, and
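    A KMZ file of the kind described here is just a zip archive whose main entry is a KML document. A minimal sketch using only the Python standard library (the file name, placemark name, and coordinates are placeholders, not GES DISC output):

```python
import zipfile

def write_kmz(path, name, lon, lat, alt_m=0.0):
    """Write a minimal KMZ: a zip archive whose doc.kml holds one placemark.
    KML orders coordinates as longitude,latitude,altitude."""
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>{lon},{lat},{alt_m}</coordinates>
    </Point>
  </Placemark>
</kml>
"""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.writestr("doc.kml", kml)

write_kmz("sample.kmz", "Sample data point", -76.84, 38.99, 500.0)
```

    Opening the resulting file in the Google Earth client (or loading it through the browser plug-in) renders the placemark directly; real data layers would add ground overlays or styled geometry in the same KML document.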

  4. Visual learning in multisensory environments.

    PubMed

    Jacobs, Robert A; Shams, Ladan

    2010-04-01

    We study the claim that multisensory environments are useful for visual learning because nonvisual percepts can be processed to produce error signals that people can use to adapt their visual systems. This hypothesis is motivated by a Bayesian network framework. The framework is useful because it ties together three observations that have appeared in the literature: (a) signals from nonvisual modalities can "teach" the visual system; (b) signals from nonvisual modalities can facilitate learning in the visual system; and (c) visual signals can become associated with (or be predicted by) signals from nonvisual modalities. Experimental data consistent with each of these observations are reviewed.

  5. Integration of camera and range sensors for 3D pose estimation in robot visual servoing

    NASA Astrophysics Data System (ADS)

    Hulls, Carol C. W.; Wilson, William J.

    1998-10-01

    Range-vision sensor systems can incorporate range images or single point measurements. Research incorporating point range measurements has focused on the area of map generation for mobile robots. These systems can utilize the fact that the objects sensed tend to be large and planar. The approach presented in this paper fuses information obtained from a point range measurement with visual information to produce estimates of the relative 3D position and orientation of a small, non-planar object with respect to a robot end-effector. The paper describes a real-time sensor fusion system for performing dynamic visual servoing using a camera and a point laser range sensor. The system is based upon the object model reference approach. This approach, which can be used to develop multi-sensor fusion systems that fuse dynamic sensor data from diverse sensors in real-time, uses a description of the object to be sensed in order to develop a combined observation-dependency sensor model. The range-vision sensor system is evaluated in terms of accuracy and robustness. The results show that the use of a range sensor significantly improves the system performance when there is poor or insufficient camera information. The system developed is suitable for visual servoing applications, particularly robot assembly operations.

  6. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  7. Characteristics of tumor and host cells in 3-D simulated microgravity environment

    NASA Astrophysics Data System (ADS)

    Chopra, V.; Dinh, T.; Wood, T.; Pellis, N.; Hannigan, E.

    Co-cultures of three-dimensional (3-D) constructs of one cell type with dispersed cells of a second cell type, grown in low-shear rotating suspension cultures in a simulated microgravity environment, have been used to investigate invasive properties of normal and malignant cell types. We have shown that epithelial and endothelial cells undergo a switch in characteristics when grown in an in vitro 3-D environment that mimics the in vivo host environment, as compared with conventional two-dimensional (2-D) monolayer cultures. Histological preparations and immunohistochemical staining of cocultured harvests demonstrated various markers of interest, such as collagen, vimentin, mucin, elastin, fibrin, fibrinogen, cytokeratin, adhesion molecules and various angiogenic factors, expressed by tumor cells from gynecological cancer patients along with fibroblasts, endothelial cells and patient-derived mononuclear cells (n=8). The growth rate was enhanced 10-15 fold in 3-D cocultures of patient-derived cells as compared with 2-D monolayer cultures and 3-D monocultures. The production of interleukin-2, interleukin-6, interleukin-8, vascular endothelial cell growth factor, basic fibroblast growth factor, and angiogenin was studied using ELISA and RT-PCR. Human umbilical vein-derived endothelial cells (HUVECs) were used to study the mitogenic response of conditioned medium collected from 3-D monocultures and cocultures during proliferation and migration assays. Compared with 3-D monocultures of normal epithelial cells, conditioned medium collected from 3-D cocultures of cancer cells 1) increased the expression of message levels of vascular endothelial growth factor and its receptors Flt-1 and KDR in HUVECs, and 2) increased the expression of intracellular and vascular cell adhesion molecules on the surface of HUVECs, as measured using live-cell ELISA assays and immunofluorescent staining. There was an increase in production of 1) enzymatic activity that

  8. 3D-VAS--initial results from computerized visualization of dynamic occlusion.

    PubMed

    Ruge, S; Kordass, B

    2008-01-01

    Visualization of dynamic occlusion is one of the central tasks in both clinical dentistry and dental engineering. Many aspects of dynamic occlusion, such as the interocclusal function in the posterior region, cannot be seen directly in the clinic and at best can be recorded with contact paper. Therefore, analyses of dynamic occlusion using mounted models in the articulator are unavoidable in many cases for the reproduction of dynamic occlusion. However, the reproduction of dynamic occlusion in the mechanical articulator has clear restrictions, inherent to the process but also caused by biological variability. Virtual articulators can expediently supplement mechanical articulators, since with them it is possible to display time-resolved, unusual and extraordinary perspectives, such as sectional images and flowing, sliding contact points. One of the latest developments in the field of virtual articulation is the 3D virtual articulation system module from the Zebris company (Isny, Germany). By means of a specially developed coupling tray, 3D-scanned rows of teeth can be matched with computerized motion recordings of mandibular function. The software displays the movements of the 3D-scanned rows of teeth not only during jaw motion but also during chewing motion--that is, movements under chewing pressure--in real time, and facilitates special analytical methods transcending mechanical occlusion analysis in conventional articulators. These include displays of the strength of the contact points and surfaces, the occurrence of the contact points in relation to time, sectional images of the dentition, analyses of the interocclusal gap in the occlusal region, etc. This software and its possibilities are described and explained by reference to individual cases.

  9. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background: The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective crossover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after the viewing of 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) made up 54.8% of the total sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness, and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions: Seeing 3D movies can increase ratings on the nausea, oculomotor, and disorientation symptom scales, especially in women with a susceptible visual-vestibular system. Confirmatory studies that include examination of clinical signs in viewers are needed to establish conclusive evidence on the effects of 3D viewing on spectators. PMID:23418530

  10. Reconstruction and Visualization of Coordinated 3D Cell Migration Based on Optical Flow.

    PubMed

    Kappe, Christopher P; Schütz, Lucas; Gunther, Stefan; Hufnagel, Lars; Lemke, Steffen; Leitte, Heike

    2016-01-01

    Animal development is marked by the repeated reorganization of cells and cell populations, which ultimately determine form and shape of the growing organism. One of the central questions in developmental biology is to understand precisely how cells reorganize, as well as how and to what extent this reorganization is coordinated. While modern microscopes can record video data for every cell during animal development in 3D+t, analyzing these videos remains a major challenge: reconstruction of comprehensive cell tracks turned out to be very demanding especially with decreasing data quality and increasing cell densities. In this paper, we present an analysis pipeline for coordinated cellular motions in developing embryos based on the optical flow of a series of 3D images. We use numerical integration to reconstruct cellular long-term motions in the optical flow of the video, we take care of data validation, and we derive a LIC-based, dense flow visualization for the resulting pathlines. This approach allows us to handle low video quality such as noisy data or poorly separated cells, and it allows the biologists to get a comprehensive understanding of their data by capturing dynamic growth processes in stills. We validate our methods using three videos of growing fruit fly embryos.
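
    The core of the pipeline, numerically integrating long-term cell trajectories through a time series of flow fields, can be sketched as follows. This is a minimal numpy-only illustration using forward Euler steps and nearest-neighbor sampling; the paper's actual optical-flow computation and LIC-based rendering are not reproduced here, and all names are illustrative.

```python
import numpy as np

def integrate_pathline(flow, seed, dt=1.0):
    """Trace one cell trajectory through a time series of 3D flow fields.

    flow : array of shape (T, Z, Y, X, 3) -- displacement vectors per frame
    seed : (z, y, x) starting position at t = 0
    Returns an array of T+1 positions (forward Euler, nearest-neighbor sampling).
    """
    T, Z, Y, X, _ = flow.shape
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    for t in range(T):
        # clamp to the grid and sample the local flow vector
        idx = np.clip(np.round(pos).astype(int), 0, [Z - 1, Y - 1, X - 1])
        v = flow[t, idx[0], idx[1], idx[2]]
        pos = pos + dt * v          # forward Euler step
        path.append(pos.copy())
    return np.array(path)

# toy field: uniform drift of one voxel per frame along x
flow = np.zeros((3, 4, 4, 4, 3))
flow[..., 2] = 1.0
print(integrate_pathline(flow, (1, 1, 0))[-1])   # -> [1. 1. 3.]
```

    A production pipeline would replace the nearest-neighbor lookup with trilinear interpolation and a higher-order integrator (e.g. RK4) for accuracy on noisy data.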

  11. Monitoring the solid-liquid interface in tanks using profiling sonar and 3D visualization techniques

    NASA Astrophysics Data System (ADS)

    Sood, Nitin; Zhang, Jinsong; Roelant, David; Srivastava, Rajiv

    2005-03-01

    Visualization of the interface between settled solids and the optically opaque liquid above is necessary to facilitate efficient retrieval of high-level radioactive waste (HLW) from underground storage tanks. A profiling sonar was used to generate 2-D slices across the settled solids at the bottom of the tank. By incrementally rotating the sonar about its centerline, slices of the solid-liquid interface can be imaged and a 3-D image of the settled-solids interface generated. To demonstrate the efficacy of the sonar for real-time solid-liquid interface monitoring inside HLW tanks, two sets of experiments were performed. First, various solid objects and kaolin clay (10 μm dia) were successfully imaged while agitating the liquid with 30% solids (by weight) entrained. Second, a solid with a density similar to that of the surrounding fluid was successfully imaged. Two-dimensional (2-D) sonar images and the accuracy and limitations of the in-tank imaging are presented for these two experiments. In addition, a brief review of how to use a 2-D sonar image to generate a 3-D surface of the settled layer within a tank is discussed.
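
    The slice-to-surface idea can be illustrated with a small geometric sketch. The angle conventions and function name below are illustrative assumptions, not the authors' processing code: each sonar return at range r and beam angle theta inside a slice plane rotated by phi about the sonar centerline maps to tank coordinates as follows.

```python
import math

def slice_point_to_xyz(r, theta, phi):
    """Map a profiling-sonar return to tank coordinates.

    r     : range to the solid-liquid interface (m)
    theta : beam angle within the 2-D slice, measured from straight down (rad)
    phi   : rotation of the slice plane about the sonar centerline (rad)
    Returns (x, y, z) with z measured downward from the sonar head.
    """
    horiz = r * math.sin(theta)          # horizontal offset inside the slice
    return (horiz * math.cos(phi),       # x after rotating the slice
            horiz * math.sin(phi),       # y after rotating the slice
            r * math.cos(theta))         # depth below the sonar

# a return 2 m straight down maps to the same point for any slice rotation:
print(slice_point_to_xyz(2.0, 0.0, 1.0))   # -> (0.0, 0.0, 2.0)
```

    Sweeping phi over the incremental rotation steps and gridding the resulting point cloud yields the 3-D surface of the settled layer.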

  12. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has proven to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
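
    The brushing concept, per-view cell selections stored centrally and combined with logical operations, can be mimicked with plain set algebra. This is a hypothetical sketch of the idea; the selection names below are illustrative and not the PCX2 API.

```python
# Cell selections stored centrally and combined with logical operations,
# in the spirit of PCX2 brushing (names here are illustrative examples).
selections = {
    "high_expression": {1, 2, 3, 5, 8},   # cells brushed in one view
    "anterior_region": {2, 3, 4, 5},      # cells brushed in another view
}
all_cells = set(range(10))

both     = selections["high_expression"] & selections["anterior_region"]  # AND
either   = selections["high_expression"] | selections["anterior_region"]  # OR
excluded = all_cells - selections["high_expression"]                      # NOT

print(sorted(both))      # -> [2, 3, 5]
print(sorted(either))    # -> [1, 2, 3, 4, 5, 8]
print(sorted(excluded))  # -> [0, 4, 6, 7, 9]
```

    Because every selection is just a set of cell identifiers, any combined query can again be highlighted in all linked views.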

  13. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions, such as nucleic acids, water, ions, and drug molecules, which can therefore also be described in the PDB format and deposited in the PDB database. A PDB file is machine-generated and not in a human-readable format; computational tools are needed to interpret it. The objective of our present study is to develop free online software for retrieval, visualization, and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in human-readable form, i.e., the information in the PDB file is converted into readable sentences. It displays all possible information from a PDB file, including the 3D structure of that file. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax, and HTML have been used for the development of PDB Explorer. PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no log-in required.
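
    The column layout of ATOM records is fixed by the PDB format specification, so the kind of parsing PDB Explorer performs can be sketched in a few lines. This is illustrative only; `parse_atom_line` is not part of PDB Explorer (which is written in Perl and JavaScript).

```python
def parse_atom_line(line):
    """Parse one fixed-column ATOM/HETATM record of a PDB file
    into a dict (column ranges per the PDB format specification)."""
    return {
        "serial":  int(line[6:11]),       # atom serial number, cols 7-11
        "name":    line[12:16].strip(),   # atom name, cols 13-16
        "resName": line[17:20].strip(),   # residue name, cols 18-20
        "chain":   line[21],              # chain identifier, col 22
        "resSeq":  int(line[22:26]),      # residue sequence number, cols 23-26
        "x":       float(line[30:38]),    # orthogonal coordinates, cols 31-54
        "y":       float(line[38:46]),
        "z":       float(line[46:54]),
    }

atom = parse_atom_line(
    "ATOM      1  N   MET A   1      38.428  13.104  23.175  1.00 20.00           N"
)
print(atom["resName"], atom["x"])   # -> MET 38.428
```

    Turning such a dict into a readable sentence ("Atom 1 is the backbone nitrogen of MET 1 in chain A at ...") is then straightforward string formatting.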

  14. CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    PubMed Central

    2012-01-01

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and then arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, like structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which aids the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis. PMID:22424447

  15. Interactive toothbrushing education by a smart toothbrush system via 3D visualization.

    PubMed

    Kim, Kyeong-Seop; Yoon, Tae-Ho; Lee, Jeong-Whan; Kim, Dong-Jun

    2009-11-01

    The very first step for keeping good dental hygiene is to employ the correct toothbrushing style. Due to the possible occurrence of periodontal disease at an early age, it is critical to begin correct toothbrushing patterns as early as possible. With this aim, we proposed a novel toothbrush monitoring and training system to interactively educate on toothbrushing behavior in terms of the correct brushing motion and grip axis orientation. Our intelligent toothbrush monitoring system first senses a user's brushing pattern by analyzing the waveforms acquired from a built-in accelerometer and magnetic sensor. To discern the inappropriate toothbrushing style, a real-time interactive three dimensional display system, based on an OpenGL 3D surface rendering scheme, is applied to visualize a subject's brushing patterns and subsequently advise on the correct brushing method.

  16. The research of 3D visualization techniques for the test of laser energy distribution

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Wang, Bo

    2013-07-01

    During laser transmission in the atmosphere, the complexity and instability of the atmospheric composition seriously interfere with, and can even change, the performance of the laser beam. The image of the laser energy distribution can be captured and analyzed with an infrared CCD and digital image processing technology. The basic features of the laser energy density distribution, such as the location and power of the peak point and other basic parameters, can be acquired; the laser energy density distribution can be displayed in real time over continuous multiple frames; and a pseudo-color 3D visualization of the laser energy density distribution can be rendered, reflecting the relative size and position of the energy distribution in the different regions of the laser spot, using VC++, Windows APIs, and OpenGL programming. The laser energy density distribution can thus be observed from all angles.
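
    Extracting the basic features named above, e.g., the location and power of the peak point of the energy density distribution, reduces to a peak search on the captured CCD frame. The sketch below uses a synthetic Gaussian spot; it illustrates the analysis step only, not the authors' VC++/OpenGL rendering.

```python
import numpy as np

def peak_of_energy_map(img):
    """Locate the peak of a laser energy-density image.

    img : 2-D array of pixel intensities from the infrared CCD
    Returns ((row, col), value) for the brightest point of the spot.
    """
    r, c = np.unravel_index(np.argmax(img), img.shape)
    return (int(r), int(c)), float(img[r, c])

# synthetic Gaussian spot centred at row 12, column 20
y, x = np.mgrid[0:32, 0:32]
img = np.exp(-((y - 12) ** 2 + (x - 20) ** 2) / 18.0)
print(peak_of_energy_map(img))   # -> ((12, 20), 1.0)
```

    The same array, fed to a height-mapped surface plot, gives the pseudo-color 3D view of the spot described in the abstract.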

  17. Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.

    PubMed

    Cao, Xiang; Zhu, Daqi; Yang, Simon X

    2016-11-01

    Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicle (multi-AUV) exploration. This paper focuses on an effective strategy for multi-AUV target search in 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract environment information from the sonar data to build a grid map of the underwater environment. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search paths to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static target search, dynamic target search, and cases in which one or several AUVs break down in the 3-D underwater environment with obstacles. The simulation results show that the proposed algorithm is capable of guiding multiple AUVs to accomplish search tasks for multiple targets with higher efficiency and adaptability than other algorithms.
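
    The interplay described above, a target that attracts through a neural activity landscape, obstacles that inhibit, and a steepest-gradient rule for path planning, can be imitated with a highly simplified toy model. This is a stand-in for, not an implementation of, the paper's shunting neurodynamics equations, and it uses a 2-D grid for readability where the paper works in 3-D.

```python
import numpy as np

def neighbor_max(act):
    """Max over the four grid neighbours (no wrap-around at the borders)."""
    m = np.full(act.shape, -np.inf)
    m[1:, :]  = np.maximum(m[1:, :],  act[:-1, :])
    m[:-1, :] = np.maximum(m[:-1, :], act[1:, :])
    m[:, 1:]  = np.maximum(m[:, 1:],  act[:, :-1])
    m[:, :-1] = np.maximum(m[:, :-1], act[:, 1:])
    return m

def activity_landscape(shape, target, obstacles, decay=0.8, iters=60):
    """Activity spreads from the target with geometric decay; obstacles
    are clamped inhibitory so activity cannot leak through them."""
    act = np.zeros(shape)
    for _ in range(iters):
        act = decay * neighbor_max(act)
        act[target] = 1.0        # excitatory target
        for ob in obstacles:
            act[ob] = -1.0       # inhibitory obstacles
    return act

def steepest_ascent(act, start, max_steps=50):
    """Follow the activity gradient from start until a maximum (the target)."""
    pos, path = start, [start]
    for _ in range(max_steps):
        y, x = pos
        nbrs = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < act.shape[0] and 0 <= x + dx < act.shape[1]]
        best = max(nbrs, key=lambda p: act[p])
        if act[best] <= act[pos]:
            break                # local maximum reached
        pos = best
        path.append(pos)
    return path

# a wall across row 3 with a single gap at column 0
act = activity_landscape((7, 7), target=(6, 6),
                         obstacles=[(3, c) for c in range(1, 7)])
path = steepest_ascent(act, (0, 0))
print(path[-1])   # -> (6, 6)
```

    Because activity decays monotonically with obstacle-avoiding distance from the target, gradient ascent is collision-free and always terminates at the target.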

  18. Thoracic cavity definition for 3D PET/CT analysis and visualization.

    PubMed

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W; Higgins, William E

    2015-07-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage=99.2% and leakage=0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment.

  19. An AR system with intuitive user interface for manipulation and visualization of 3D medical data.

    PubMed

    Vogt, Sebastian; Khamene, Ali; Niemann, Heinrich; Sauer, Frank

    2004-01-01

    We report on a stereoscopic video-see-through augmented reality system which we developed for medical applications. Our system allows interactive in-situ visualization of 3D medical imaging data. For high-quality rendering of the augmented scene we utilize the capabilities of the latest graphics card generations. Fast high-precision MPR generation ("multiplanar reconstruction") and volume rendering is realized with OpenGL 3D textures. We provide a tracked hand-held tool to interact with the medical imaging data in its actual location. This tool is represented as a virtual tool in the space of the medical data. The user can assign different functionality to it: select arbitrary MPR cross-sections, guide a local volume rendered cube through the medical data, change the transfer function, etc. Tracking works in conjunction with retroreflective markers, which frame the workspace for head tracking respectively are attached to instruments for tool tracking. We use a single head-mounted tracking camera, which is rigidly fixed to the stereo pair of cameras that provide the live video view of the real scene. The user's spatial perception is based on stereo depth cues as well as on the kinetic depth cues that he receives with the viewpoint variations and the interactive data visualization. The AR system has a compelling real-time performance with 30 stereo-frames/second and exhibits no time lag between the video images and the augmenting graphics. Thus, the physician can interactively explore the medical imaging information in-situ.

  1. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, operating system and processing power limitations prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, a web-user-interface layer, a server communication layer, and a wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web user interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies.

  2. Webs on the Web (WOW): 3D visualization of ecological networks on the WWW for collaborative research and education

    NASA Astrophysics Data System (ADS)

    Yoon, Ilmi; Williams, Rich; Levine, Eli; Yoon, Sanghyuk; Dunne, Jennifer; Martinez, Neo

    2004-06-01

    This paper describes information technology being developed to improve the quality, sophistication, accessibility, and pedagogical simplicity of ecological network data, analysis, and visualization. We present designs for a WWW demonstration/prototype web site that provides database, analysis, and visualization tools for research and education related to food web research. Our early experience with a prototype 3D ecological network visualization guides our design of a more flexible architecture. 3D visualization algorithms include variable node and link sizes, placement according to node connectivity and trophic levels, and visualization of other node and link properties in food web data. The flexible architecture includes an XML application design, FoodWebML, and pipelining of computational components. Based on users' choices of data and visualization options, the WWW prototype site will connect to an XML database (Xindice) and return the visualization in VRML format for browsing and further interaction.

  3. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444

  5. Using Computer-Aided Design Software and 3D Printers to Improve Spatial Visualization

    ERIC Educational Resources Information Center

    Katsio-Loudis, Petros; Jones, Millie

    2015-01-01

    Many articles have been published on the use of 3D printing technology. From prefabricated homes and outdoor structures to human organs, 3D printing technology has found a niche in many fields, but especially education. With the introduction of AutoCAD technical drawing programs and now 3D printing, learners can use 3D printed models to develop…

  6. Photographing Internal Fractures of the Archaeological Statues with 3D Visualization of Ground Penetrating Radar Data

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.; Kadioglu, Y. K.

    2009-04-01

    The aim of the study is to illustrate a new approach to imaging the discontinuities in archaeological statues before restoration, using the ground penetrating radar (GPR) method. The method was successfully applied to detect and map the fractures and cavities of the two monument groups and the lion statues at Mustafa Kemal ATATURK's tomb (ANITKABIR) in Ankara, Turkey. The tomb, begun in 1944 and completed in 1953, represents the Turkish people and Ataturk, founder of the Republic of Turkey, and is therefore of great importance to the Turkish people. The monument groups and lion statues were built from travertine rocks. These travertines have vesicular textures at a level of 12 percent and are mainly composed of calcite and aragonite with rare amounts of plant relicts and clay minerals. The concentrations of Fe, Mg, Cl and Mn may account for their colours, which range from white through pale green to beige. The atmospheric contamination of Ankara has covered some parts of the surface of these travertines with a thin, blackish film of Pb. Micro-fractures have been observed, especially at the rims of the vesicles of the rocks, with the polarizing microscope. Parallel two-dimensional (2D) GPR profiles with 10 cm profile spacing were acquired with a RAMAC CU II system and a 1600 MHz shielded antenna on the monument groups (three women, three men and 24 lion statues), and a three-dimensional (3D) data volume was then built from the parallel 2D GPR data. Air-filled fractures and cavities in the
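
    Building a 3D data volume from parallel 2D GPR profiles amounts to stacking the profile images along the acquisition axis. The numpy sketch below is schematic, with synthetic amplitudes; a real workflow also involves trace alignment, gain correction, and time-depth conversion.

```python
import numpy as np

# Parallel 2-D GPR profiles (depth samples x trace positions) acquired
# 10 cm apart can be stacked into a 3-D data volume.
n_profiles, n_depth, n_trace = 8, 64, 128
profiles = [np.random.rand(n_depth, n_trace) for _ in range(n_profiles)]

volume = np.stack(profiles, axis=0)        # shape: (profile, depth, trace)
depth_slice = volume[:, 10, :]             # horizontal slice at one depth sample
print(volume.shape, depth_slice.shape)     # -> (8, 64, 128) (8, 128)
```

    Horizontal depth slices of such a volume are exactly the views in which air-filled fractures and cavities show up as continuous high-amplitude anomalies.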

  7. A Bio-Inspired Approach to Task Assignment of Swarm Robots in 3-D Dynamic Environments.

    PubMed

    Yi, Xin; Zhu, Anmin; Yang, Simon X; Luo, Chaomin

    2016-03-15

    Intending to mimic the operating mechanism of biological neural systems, a self-organizing map-based approach to task assignment for a swarm of robots in 3-D dynamic environments is proposed in this paper. This approach integrates the advantages and characteristics of biological neural systems. It is capable of dynamically planning the paths of a swarm of robots in 3-D environments under uncertain situations, such as when robots are added or break down, or when more than one robot is needed at some special task locations. A Bezier path optimizing algorithm and a parameter adjusting algorithm are also integrated, reducing the complexity of the robot navigation control and limiting the number of convergence iterations. The simulation results with different environments demonstrate the effectiveness of the proposed approach.
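
    A Bezier path of the kind used for path optimization is cheap to evaluate with De Casteljau's algorithm. This is a generic sketch of Bezier evaluation, not the authors' optimizer; the control points are illustrative waypoints.

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm.

    ctrl : list of (x, y, z) control points (4 points -> cubic curve)
    """
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        # repeatedly interpolate between consecutive points
        pts = [[a + t * (b - a) for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# smooth 3-D path between two waypoints with two interior control points
ctrl = [(0, 0, 0), (1, 2, 0), (3, 2, 1), (4, 0, 2)]
print(bezier_point(ctrl, 0.0))   # -> (0.0, 0.0, 0.0)
print(bezier_point(ctrl, 1.0))   # -> (4.0, 0.0, 2.0)
```

    Optimizing the interior control points (e.g. for clearance or curvature limits) reshapes the path while the endpoints stay fixed, which is what makes Bezier curves convenient for navigation control.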

  8. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, several factors prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though a spherical coordinate grid seems ideal for interaction with a 3D dome display, other non-spherical grids can be used as well.

  9. 3D Visualization of Sheath Folds in Roman Marble from Ephesus, Turkey

    NASA Astrophysics Data System (ADS)

    Wex, Sebastian; Passchier, Cornelis W.; de Kemp, Eric A.; Ilhan, Sinan

    2013-04-01

    Excavation of a palatial 2nd century AD house (Terrace House Two) in the ancient city of Ephesus, Turkey in the 1970s produced 10,313 pieces of colored, folded marble belonging to 54 marble plates of 1.6 cm thickness that originally covered the walls of the banquet hall of the house. The marble plates were completely reassembled and restored by a team of workers over the last 6 years. The plates were recognized as having been sawn from two separate large blocks of "Cipollino verde", a green mylonitized marble from Karystos on the island of Euboea, Greece. After restoration, it became clear that all slabs had been placed on the wall in approximately the sequence in which they had been cut off by a Roman stone saw. As a result, the marble plates give full 3D insight into the folded internal structure of a 1 m3 block of mylonite. The restoration of the slabs was recognized as a first, unique opportunity for detailed reconstruction of the 3D geometry of m-scale folds in mylonitized marble. Photographs were taken of each slab and used to reconstruct their exact arrangement within the originally quarried blocks. Outlines of layers were digitized and a full 3D reconstruction of the internal structure of the block was created using ArcMap and GOCAD. Fold structures in the block include curtain folds and multilayered sheath folds. Several different layers showing these structures were digitized on the photographs of the slab surfaces and virtually mounted back together within the model of the marble block. Due to the serial sectioning into slabs with cm-scale spacing, the visualization of the 3D geometry of sheath folds was accomplished with a resolution better than 4 cm. The final assembled 3D images reveal how sheath folds emerge from continuous layers and show their overall consistency as well as a constant hinge-line orientation of the fold structures. Observations suggest that a single deformation phase was responsible for the evolution of the "Cipollino verde" structures.

  10. In situ visualization of magma deformation at high temperature using time-lapse 3D tomography

    NASA Astrophysics Data System (ADS)

    Godinho, Jose; Lee, Peter; Lavallee, Yan; Kendrick, Jackie; von Aulock, Felix

    2016-04-01

    We use synchrotron-based X-ray computed micro-tomography (sCT) to visualize, in situ, the microstructural evolution of magma samples of 3 mm diameter with a resolution of 3 μm during heating and uniaxial compression at temperatures up to 1040 °C. The interaction between crystals, melt and gas bubbles is analysed in 4D (3D + time) during sample deformation. The ability to observe the changes of the microstructure as a function of time allows us to: a) study the effect of temperature on the ability of magma to fracture or deform; b) quantify bubble nucleation and growth rates during heating; c) study the relation between crystal displacement and volatile exsolution. We will show unique videos of how bubbles grow and coalesce, and how samples and the crystals within them fracture, heal and deform. Our study establishes in situ sCT as a powerful tool to quantify and visualize, with micro-scale resolution, fast processes taking place in magma that are essential to understanding ascent in a volcanic conduit and to validating existing models for determining the explosivity of volcanic eruptions. Tracking the temporal and spatial changes of magma microstructures simultaneously is shown to be essential for studying disequilibrium processes between crystals, melt and gas phases.
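
    Quantifying bubble growth rates from such 4D data amounts to tracking labelled bubble volumes across timesteps. A toy sketch under assumed conventions (segmented volumes as nested lists with integer bubble labels, 0 for melt/crystal, voxel size matching the stated 3 μm resolution):

```python
# Quantify bubble growth from time-lapse labelled volumes: volume per
# bubble is labelled-voxel count times voxel volume; growth rate is
# the volume change over elapsed time.
from collections import Counter

VOXEL_VOL_UM3 = 3.0 ** 3  # (3 um)^3 voxels, matching the stated resolution

def bubble_volumes(volume):
    """Count voxels per bubble label in a nested-list volume (z, y, x)."""
    counts = Counter()
    for plane in volume:
        for row in plane:
            for label in row:
                if label != 0:
                    counts[label] += 1
    return {lab: n * VOXEL_VOL_UM3 for lab, n in counts.items()}

def growth_rate(vol_t0, vol_t1, dt_s, label):
    """Volume change per second for one bubble between two timesteps."""
    v0 = bubble_volumes(vol_t0).get(label, 0.0)
    v1 = bubble_volumes(vol_t1).get(label, 0.0)
    return (v1 - v0) / dt_s
```

A real pipeline would first register the timesteps and match labels between them; this sketch assumes labels are already consistent over time.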

  11. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprint, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multi-source geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform providing data analysis, hybrid visualization and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  12. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes.

  13. An Examination of the Effects of Collaborative Scientific Visualization via Model-based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning Within an Immersive 3D World

    NASA Astrophysics Data System (ADS)

    Soleimani, Ali

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits associated with the use of scientific visualization tools involving model-based reasoning (MBR). Little is known, however, about collaborative use of scientific visualization, via MBR, within an immersive 3D-world learning environment for helping to improve perceived value of STEM learning and knowledge acquisition in a targeted domain such as geothermal energy. Geothermal energy was selected as the study's STEM focus because understanding in the domain is highly dependent on successfully integrating science and mathematics concepts. This study used a 2x2 mixed ANOVA design with repeated measures to analyze collaborative usage of a geothermal energy MBR model and its effects on learning within an immersive 3D world. The immersive 3D world used for the study is supported by the Open Simulator platform. Findings from this study can suggest ways to improve STEM learning and inform the design of MBR activities when conducted within an immersive 3D world.

  14. Recording, Visualization and Documentation of 3D Spatial Data for Monitoring Topography in Areas of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Maravelakis, Emmanouel; Konstantaras, Antonios; Axaridou, Anastasia; Chrysakis, Ioannis; Xinogalos, Michalis

    2014-05-01

    ... allowing them to exchange their knowledge, findings and observations at different time frames. Results outline the successful application of the above systems in certain Greek areas of important cultural heritage [3,11] where significant efforts are being made for their preservation through time. Acknowledgement: The authors wish to thank the General Secretariat for Research and Technology of the Ministry of Education and Religious Affairs, Culture and Sports in Greece for their financial support via the program Cooperation: Partnership of Production and Research Institutions in Small and Medium Scale Projects, Project Title: "3D-SYSTEK - Development of a novel system for 3D Documentation, Promotion and Exploitation of Cultural Heritage Monuments via 3D data acquisition, 3D modeling and metadata recording". Keywords: spatial data, land degradation monitoring, 3D modeling and visualization, terrestrial laser scanning, documentation and metadata repository, protection of cultural heritage. References: [1] Shalaby, A., and Tateishi, R.: Remote sensing and GIS for mapping and monitoring land cover and land-use changes in the northwestern coastal zone of Egypt. Applied Geography, 27(1), 28-41 (2007). [2] Poesen, J. W. A., and Hooke, J. M.: Erosion, flooding and channel management in Mediterranean environments of southern Europe. Progress in Physical Geography, 21(2), 157-199 (1997). [3] Maravelakis, E., Bilalis, N., Mantzorou, I., Konstantaras, A., Antoniadis, A.: 3D modeling of the oldest olive tree of the world. IJCER 2(2), 340-347 (2012). [4] Manferdini, A.M., Remondino, F.: Reality-Based 3D Modeling, Segmentation and Web-Based Visualization. In: Ioannides, M., Fellner, D., Georgopoulos, A., Hadjimitsis, D.G. (eds.) EuroMed 2010. LNCS, vol. 6436, pp. 110-124. Springer, Heidelberg (2010). [5] Tapete, D., Casagli, N., Luzi, G., Fanti, R., Gigli, G., Leva, D.: Integrating radar and laser-based remote sensing techniques for monitoring structural deformation of archaeological monuments

  15. Micro-CT images reconstruction and 3D visualization for small animal studying

    NASA Astrophysics Data System (ADS)

    Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng

    2005-01-01

    A small-animal X-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field of view of 25x50 mm2, a microfocus X-ray source, and a rotational subject holder. For accurate localization of the rotation center, coincidence between the axis of rotation and the centre of the image was studied by calibration with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction, on account of the effective cone-beam angle of 5.67° of the micro-CT system. A 200x1024x1024 matrix of micro-CT data is obtained with a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor and the sample rotation step can be modified to balance computational efficiency against the reconstruction region. The reconstructed image matrix data is processed and visualized with the Visualization Toolkit (VTK). Surface rendering of the reconstructed data is data-parallelized in VTK in order to improve computing speed. The computing time for a 512x512x512 matrix dataset is about 1/20 that of the serial program when 30 CPUs are used. The voxel size is 54x54x108 μm3. Reconstruction and 3D visualization images of a laboratory rat ear are presented.
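
    The rotation-center calibration mentioned above is commonly done by comparing opposed projections: for an ideal system the 180° projection of a symmetric phantom mirrors the 0° one, so any offset of the symmetry point from the detector centre is the axis misalignment. A minimal sketch of that generic method (an assumption for illustration, not necessarily the authors' exact procedure):

```python
# Locate the rotation axis from two opposed 1D projection profiles of
# a cylindrical phantom: mirror the 180-degree profile onto the
# 0-degree frame and average the intensity centroids.

def centroid(profile):
    """Intensity-weighted centre of a 1D projection profile."""
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

def rotation_center(p0, p180):
    """Estimate the rotation-axis pixel from 0- and 180-degree profiles."""
    n = len(p0)
    mirrored = (n - 1) - centroid(p180)   # mirror onto the 0-degree frame
    return 0.5 * (centroid(p0) + mirrored)
```
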

  16. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the Virtual Windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  17. Remote Visualization and Navigation of 3d Models of Archeological Sites

    NASA Astrophysics Data System (ADS)

    Callieri, M.; Dellepiane, M.; Scopigno, R.

    2015-02-01

    The remote visualization and navigation of 3D data directly inside the web browser is becoming a viable option, due to the recent efforts in standardizing the components for 3D rendering on the web platform. Nevertheless, handling complex models may be a challenge, especially when a more generic solution is needed to handle different cases. In particular, archeological and architectural models are usually hard to handle, since their navigation can be managed in several ways, and a completely free navigation may be misleading and not realistic. In this paper we present a solution for the remote navigation of these datasets in a WebGL component. The navigation has two possible modes: the "bird's eye" mode, where the user is able to see the model from above, and the "first person" mode, where the user can move inside the structure. The two modalities are linked by a point of interest, which helps the user control the navigation in an intuitive fashion. Since the terrain may not be flat, and the architecture may be complex, it is necessary to handle these issues, ideally without implementing complex mesh-based collision mechanisms. Hence, a complete navigation is obtained by storing the height and collision information in an image, which provides a very simple source of data. Moreover, the same image-based approach can be used to store additional information that could enhance the navigation experience. The method has been tested in two complex test cases, showing that a simple yet powerful interaction can be obtained with limited pre-processing of data.
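
    The image-based idea reduces to a lookup per movement step: sample the stored height at the candidate position, and refuse the move if the pixel is marked as a collision. A toy sketch under assumed conventions (a grid standing in for the image, `None` as the collision sentinel, and a hypothetical eye height; a real viewer would sample the image with bilinear interpolation):

```python
# First-person navigation against an image-encoded height/collision
# map: each cell stores terrain height, with a sentinel marking
# non-walkable (wall/column) cells.

COLLIDE = None     # sentinel for blocked cells
EYE_HEIGHT = 1.7   # assumed first-person camera height above ground

def try_move(height_map, x, y):
    """Return the camera z for grid position (x, y), or None if the
    move is blocked (outside the model or a collision cell)."""
    if not (0 <= y < len(height_map) and 0 <= x < len(height_map[0])):
        return None                # outside the model: block
    h = height_map[y][x]
    if h is COLLIDE:
        return None                # wall or column: block
    return h + EYE_HEIGHT          # stand on the terrain
```
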

  18. 3D modeling of environments contaminated with chemical, biological, radiological and nuclear (CBRN) agents

    NASA Astrophysics Data System (ADS)

    Jasiobedzki, Piotr; Ng, Ho-Kong; Bondy, Michel; McDiarmid, Carl H.

    2008-04-01

    CBRN Crime Scene Modeler (C2SM) is a prototype 3D modeling system for first responders investigating environments contaminated with Chemical, Biological, Radiological and Nuclear agents. The prototype operates on board a small robotic platform or a hand-held device. The sensor suite includes stereo and high resolution cameras, a long-wave infrared camera, a chemical detector, and two gamma detectors (directional and non-directional). C2SM has been recently tested in field trials where it was teleoperated within an indoor environment with gamma radiation sources present. The system successfully created multi-modal 3D models (geometry, colour, IR and gamma radiation), correctly identified the locations of radiation sources and provided high resolution images of these sources.

  19. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  20. Cooperative Wall-climbing Robots in 3D Environments for Surveillance and Target Tracking

    DTIC Science & Technology

    2009-02-08

    distribution of impeller vanes, volume of the chamber, and sealing effect, etc. Fig. 5 and 6 show some exemplary simulation results. In paper [11], we... multiple nonholonomic mobile robots using Cartesian coordinates. Based on the special feature... gamma-ray or X-ray cargo inspection system. Three-dimensional (3D) measurements of the objects inside a cargo can be obtained by effectively

  1. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid-1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.
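
    The core of such planning tools is evaluating a dose field along a simulated worker trajectory. A deliberately simplified sketch of that idea, using a bare inverse-square point-kernel model with no shielding or build-up (all names and units here are illustrative assumptions; the real tools use far more complete dosimetry):

```python
# Toy dose estimate along a simulated worker path: sum inverse-square
# contributions from point sources, then integrate over the trajectory.

def dose_rate(position, sources):
    """sources: list of ((x, y, z), strength), strength being the dose
    rate at 1 m (assumed units, e.g. uSv/h)."""
    total = 0.0
    for (sx, sy, sz), strength in sources:
        r2 = (position[0]-sx)**2 + (position[1]-sy)**2 + (position[2]-sz)**2
        total += strength / max(r2, 1e-6)   # clamp to avoid the singularity
    return total

def path_dose(path, sources, seconds_per_step):
    """Accumulated dose over a sampled trajectory (dose-rate units * h)."""
    return sum(dose_rate(p, sources) * seconds_per_step / 3600.0
               for p in path)
```
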

  2. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
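
    The 3D point reconstruction step in outside-in stereo tracking is classically a two-ray triangulation: back-project the target's image position through each camera and take the midpoint of the shortest segment between the two rays. A generic sketch of that midpoint method (the paper's contribution is the robust identification pipeline around this step, not this textbook geometry):

```python
# Midpoint triangulation: given each camera's centre c and the unit
# direction d of the back-projected viewing ray, solve for the ray
# parameters minimising the inter-ray distance and average the two
# closest points.
import math

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def triangulate(c1, d1, c2, d2):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = tuple(a - b for a, b in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # ~0 only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple(0.5 * (u + v) for u, v in zip(p1, p2))
```
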

  3. The cognitive apprenticeship theory for the teaching of mathematics in an online 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Bouta, Hara; Paraskeva, Fotini

    2013-03-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective. To this end, we propose a pedagogical framework based on the cognitive apprenticeship for deriving principles and guidelines to inform the design, development and use of a 3D virtual environment. This study examines how the use of a 3D virtual world facilitates the teaching of mathematics in primary education by combining design principles and guidelines based on the Cognitive Apprenticeship Theory and the teaching methods that this theory introduces. We focus specifically on 5th and 6th grade students' engagement (behavioral, affective and cognitive) while learning fractional concepts over a period of two class sessions. Quantitative and qualitative analyses indicate considerable improvement in the engagement of the students who participated in the experiment. This paper presents the findings regarding students' cognitive engagement in the process of comprehending basic fractional concepts - notoriously hard for students to master. The findings are encouraging and suggestions are made for further research.

  4. A package for 3-D unstructured grid generation, finite-element flow solution and flow field visualization

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald

    1990-01-01

    A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method. In this method, point and element generation are accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step, Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprising the Space Shuttle orbiter, the solid rocket boosters, and the external tank. Flow solutions were obtained on various configurations in subsonic, transonic, and supersonic flow regimes.

  5. Generation and visualization of four-dimensional MR angiography data using an undersampled 3-D projection trajectory.

    PubMed

    Liu, Jing; Redmond, Michael J; Brodsky, Ethan K; Alexander, Andrew L; Lu, Aiming; Thornton, Francis J; Schulte, Michael J; Grist, Thomas M; Pipe, James G; Block, Walter F

    2006-02-01

    Time-resolved contrast-enhanced magnetic resonance (MR) angiography (CE-MRA) has gained in popularity relative to X-ray Digital Subtraction Angiography because it provides three-dimensional (3-D) spatial resolution and is less invasive. We have previously presented methods that improve temporal resolution in CE-MRA while providing high spatial resolution by employing an undersampled 3-D projection (3D PR) trajectory. The increased coverage and isotropic resolution of the 3D PR acquisition simplify visualization of the vasculature from any perspective. We present a new algorithm to develop a set of time-resolved 3-D image volumes by preferentially weighting the 3D PR data according to its acquisition time. An iterative algorithm computes a series of density compensation functions for a regridding reconstruction, one for each time frame, that exploit the variable sampling density in 3D PR. The iterative weighting procedure simplifies the calculation of appropriate density compensation for arbitrary sampling patterns, which improves sampling efficiency and, thus, signal-to-noise ratio and contrast-to-noise ratio, since it does not require a closed-form calculation based on geometry. Current medical workstations can display these large four-dimensional studies; however, interactive cine animation of the data is only possible at significantly degraded resolution. Therefore, we also present a method for interactive visualization using powerful graphics cards and distributed processing. Results from volunteer and patient studies demonstrate the advantages of dynamic imaging with high spatial resolution.
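
    Iterative density compensation of this kind is usually built around one fixed point: divide the sample weights by their convolution with the gridding kernel until the apparent (kernel-convolved) density is flat over the sampled locations. A 1D pure-Python sketch of that generic scheme (an illustration of the idea, not the authors' 3D PR implementation):

```python
# Iterative density compensation: repeat w <- w / (w convolved with
# the gridding kernel at the sample locations). At the fixed point the
# apparent density equals 1 at every sample.

def conv_at_samples(samples, weights, kernel, width):
    """Apparent density at each sample: kernel-weighted sum of the
    weights of all samples within the kernel support."""
    out = []
    for si in samples:
        acc = 0.0
        for sj, wj in zip(samples, weights):
            dist = abs(si - sj)
            if dist < width:
                acc += wj * kernel(dist)
        out.append(acc)
    return out

def iterative_dcf(samples, kernel, width, n_iter=20):
    w = [1.0] * len(samples)
    for _ in range(n_iter):
        dens = conv_at_samples(samples, w, kernel, width)
        w = [wi / di for wi, di in zip(w, dens)]
    return w
```

Clustered samples (dense centre of a radial acquisition) end up down-weighted, isolated ones keep weight 1, with no closed-form geometric density needed.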

  6. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger data volumes, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component, not only with standard 3D display functions but also with multimodal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, the radiologists and the clinicians can manipulate 3D images easily, and use the advanced visualization tools to facilitate their work with a PACS display workstation at any time.

  7. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a 3-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs for use in geology classes, but as yet very little work has been done involving the GeoWall in astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and others were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture-tutorial-style lab to go along with a GeoWall 3D visual of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre- and post-tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  8. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
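
    The event-generation idea can be shown in miniature: compare two registered depth maps of the object surface and fire an event when the change inside the object mask exceeds a threshold. A toy sketch under stated assumptions (nested-list depth maps in millimetres and a hypothetical threshold value; the paper's pipeline operates on full RGBD point clouds):

```python
# Depth-difference deformation detection: a deformation event fires
# when the mean absolute depth change inside the object mask exceeds
# an assumed threshold.

DEFORM_THRESHOLD_MM = 2.0  # hypothetical value for illustration

def deformation_event(depth_before, depth_after, mask):
    """Return an event dict, or None if the mask selects no pixels."""
    diffs = [abs(a - b)
             for row_b, row_a, row_m in zip(depth_before, depth_after, mask)
             for b, a, m in zip(row_b, row_a, row_m) if m]
    if not diffs:
        return None
    mean_change = sum(diffs) / len(diffs)
    return {"deformed": mean_change > DEFORM_THRESHOLD_MM,
            "mean_change_mm": mean_change}
```
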

  9. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  10. Visualization and dissemination of 3D geological property models of the Netherlands

    NASA Astrophysics Data System (ADS)

    Stafleu, Jan; Sobisch, Hans-Georg; Maljers, Denise; Hummelman, Jan; Dambrink, Roula M.; Gunnink, Jan L.

    2013-04-01

    The Geological Survey of the Netherlands (GSN) systematically produces 3D geological models of the Netherlands. To date, we build and maintain two different types of nation-wide models: (1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and (2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties. Our models are disseminated free of charge through the DINO-portal (www.dinoloket.nl) in a number of ways, including an on-line map viewer with the option of creating vertical cross-sections through the models, and a series of downloadable GIS products. A recent addition to the portal is the freely downloadable SubsurfaceViewer software (developed by INSIGHT GmbH), allowing users to download and visualize both the layer-based models and the voxel models on their desktop computers. The SubsurfaceViewer allows visualization and analysis of geological layer-based and voxel models of different data structures and origin and includes a selection of the data used to construct the respective model (maps, cross-sections, borehole data, etc.). The user is presented both a classical map view and an interactive 3D view. In addition, the SubsurfaceViewer offers a one-dimensional vertical view as a synthetic borehole as well as a vertical cross-section view. The data structure is based on XML and linked ASCII files and allows the hybrid usage of layers (TIN and 2D raster) and voxels (3D raster). A recent development in the SubsurfaceViewer is the introduction of a data structure supporting irregular voxels. We have chosen a simple data structure consisting of a plain ASCII file containing the x,y,z-coordinates of the lower-left and upper-right corners of each voxel, followed by a list of property values (e.g. the geological unit the voxel belongs to, the lithological composition and the hydraulic conductivity). Irregular voxels are used to
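
    The irregular-voxel record described above (two corner coordinates followed by property values) is straightforward to parse. A minimal sketch assuming an illustrative field order and unit names (the abstract does not specify the exact column layout):

```python
# Parse one line of a plain-ASCII irregular-voxel file: six corner
# coordinates of the voxel box, then property values. The property
# order (geological unit, lithology, hydraulic conductivity) is an
# assumption for illustration.

def parse_voxel_line(line):
    parts = line.split()
    x1, y1, z1, x2, y2, z2 = map(float, parts[:6])
    unit, lithology, k = parts[6], parts[7], float(parts[8])
    return {
        "bbox": ((x1, y1, z1), (x2, y2, z2)),
        "volume": (x2 - x1) * (y2 - y1) * (z2 - z1),
        "geo_unit": unit,
        "lithology": lithology,
        "hydraulic_conductivity": k,
    }
```
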

  11. 3D visualization of sheath folds in Ancient Roman marble wall coverings from Ephesos, Turkey

    NASA Astrophysics Data System (ADS)

    Wex, Sebastian; Passchier, Cees W.; de Kemp, Eric A.; İlhan, Sinan

    2014-10-01

Archaeological excavations and restoration of a palatial Roman housing complex in Ephesos, Turkey, yielded 40 wall-decorating plates of folded mylonitic marble (Cipollino verde), derived from the internal Hellenides near Karystos, Greece. Cipollino verde was commonly used for decorative purposes in Roman buildings. The plates were serial-sectioned from a single quarried block of 1.25 m3 and provided a research opportunity for detailed reconstruction of the 3D geometry of meter-scale folds in mylonitized marble. A GOCAD model is used to visualize the internal fold structures of the marble, comprising curtain folds and multilayered sheath folds. The sheath folds are unusual in that they have their intermediate axis normal to the parent layering. This agrees with regional tectonic studies, which suggest that Cipollino verde structures formed by local constrictional non-coaxial flow. Sheath fold cross-section geometry, exposed on the surface of a plate or outcrop, is found to be independent of the intersection angle of the fold structure with the studied plane. Consequently, a single surface cannot be used as an indicator of the three-dimensional geometry of transected sheath folds.

  12. 3D Visualization of Monte-Carlo Simulations of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm) denoted as the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.
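The conversion from deposited track-structure energy to voxel dose underlying such "voxelized" maps is a simple mass normalization, dose (Gy) = energy (J) / mass (kg). A hedged sketch, where the voxel size and single 1 keV deposition are illustrative assumptions rather than RITRACKS parameters:

```python
import math

# Dose (Gy = J/kg) in a cubic water voxel: total energy deposited by
# track-structure events inside the voxel, divided by the voxel's mass.
# Voxel size, density, and deposition values are illustrative assumptions.

EV_TO_J = 1.602176634e-19  # exact CODATA conversion factor

def voxel_dose_gray(energy_depositions_ev, voxel_side_m, density_kg_m3=1000.0):
    """Sum energy depositions (eV) in one voxel and normalize by its mass."""
    energy_j = sum(energy_depositions_ev) * EV_TO_J
    mass_kg = density_kg_m3 * voxel_side_m ** 3
    return energy_j / mass_kg

# A 20 nm water voxel receiving 1 keV in total (a dense core-adjacent case):
dose = voxel_dose_gray([1000.0], voxel_side_m=20e-9)
print(dose)  # ~2e4 Gy -- well above the 1000 Gy level noted for such voxels
```

The tiny voxel mass (8e-21 kg here) is why nanometer-scale voxels near the core reach doses of thousands of gray from single-keV depositions.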

  13. Optoacoustic 3D visualization of changes in physiological properties of mouse tissues from live to postmortem

    NASA Astrophysics Data System (ADS)

    Su, Richard; Ermiliov, Sergey A.; Liopo, Anton V.; Oraevsky, Alexander A.

    2012-02-01

    Using the method of 3D optoacoustic tomography, we studied changes in tissues of the whole body of nude mice as the changes manifested themselves from live to postmortem. The studies provided the necessary baseline for optoacoustic imaging of necrotizing tissue, acute and chronic hypoxia, and reperfusion. They also established a new optoacoustic model of early postmortem conditions of the whole mouse body. Animals were scanned in a 37°C water bath using a three-dimensional optoacoustic tomography system previously shown to provide high-contrast maps of vasculature and organs based on changes in optical absorbance. The scans were performed immediately before and 5 minutes, 2 hours, and 1 day after a lethal injection of KCl. A near-infrared laser wavelength of 765 nm was used to evaluate physiological features of postmortem changes. Our data showed that optoacoustic imaging is well suited for visualization of both live and postmortem tissues. The images revealed changes in the optical properties of mouse organs and tissues. Specifically, we observed improvements in the contrast of the vascular network and organs after the death of the animal. We associated these with reduced optical scattering, loss of motion artifacts, and blood coagulation.

  14. WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays, in littoral environments

    NASA Astrophysics Data System (ADS)

    Reilly, Sean M.

    This study defines a new 3D Gaussian ray-bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computational burden of computing accurate environmental effects in sonar training applications by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. The approach also improves model accuracy by incorporating real-world 3D effects, like horizontal refraction, into the model. The study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real-world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements in an environment where strong horizontal refraction effects have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy.
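As a point of reference for the quantity such models compute, transmission loss is often baselined as spherical spreading plus frequency-dependent absorption. The sketch below uses Thorp's classical empirical absorption formula (dB/km, frequency in kHz), not the Gaussian ray-bundle method of the study:

```python
import math

# Baseline transmission loss: spherical spreading plus absorption via
# Thorp's empirical formula. This is a textbook reference curve, not the
# WaveQ3D Gaussian ray-bundle model described in the record.

def thorp_absorption_db_per_km(f_khz):
    """Thorp's attenuation coefficient in dB/km for frequency in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def transmission_loss_db(range_m, f_khz):
    spreading = 20 * math.log10(range_m)  # spherical spreading
    absorption = thorp_absorption_db_per_km(f_khz) * range_m / 1000.0
    return spreading + absorption

tl = transmission_loss_db(10_000.0, 3.0)
print(tl)  # ~80 dB spreading plus ~2 dB absorption at 10 km, 3 kHz
```

Range-dependent bathymetry, refraction, and multipath are exactly what full ray models add beyond this one-line baseline.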

  15. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    NASA Astrophysics Data System (ADS)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient is accepted in the emergency room suspected of stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision anywhere and anytime. We present a small pilot project to evaluate the use of mobile technologies, using devices such as iPhones, in evaluating stroke patients. The results of the evaluation as well as any challenges in setting up the system are also discussed.

  16. Versatile, Immersive, Creative and Dynamic Virtual 3-D Healthcare Learning Environments: A Review of the Literature

    PubMed Central

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and “serious gaming” that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debated, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers’ Diffusion of Innovations Theory and Siemens’ Connectivism Theory for today’s learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  17. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm, and vascular tree configuration. Although the spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS, and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative
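The Dice coefficient used above to score the 0.71 overlap between ultrasound and angiography aneurysm volumes is defined as 2|A ∩ B| / (|A| + |B|) on binary masks. A minimal sketch, where the toy 1D masks stand in for the segmented 3D voxel volumes:

```python
# Dice similarity coefficient between two binary segmentations:
# 2*|A intersect B| / (|A| + |B|). Toy 1D masks stand in for 3D volumes.

def dice_coefficient(mask_a, mask_b):
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / (size_a + size_b)

a = [1, 1, 1, 0, 0]
b = [1, 1, 0, 1, 0]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

A Dice value of 1.0 means identical segmentations; 0.71, as reported, indicates substantial but imperfect volume overlap.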

  18. Exploring the Impact of Visual Complexity Levels in 3D City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they 'travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  19. [Free hand acquisition, reconstruction and visualization of 3D and 4D ultrasound].

    PubMed

    Sakas, G; Walter, S; Grimm, M; Richtscheid, M

    2000-03-01

    3D ultrasound will gain wide popularity among medical imaging applications in the coming years. The method extends well-known sonography into the third dimension, making it possible to generate spatial 3D views of internal organs. It can display static organs (3D) as well as dynamic ones (4D, e.g. the pulsating heart). The clarity of the three-dimensional presentation very effectively supports navigation. In this article we review the upgrading of conventional ultrasound devices with 3D and 4D capabilities, as well as the display of the datasets by corresponding visualization and filtering approaches.

  20. Using virtual 3D audio in multispeech channel and multimedia environments

    NASA Astrophysics Data System (ADS)

    Orosz, Michael D.; Karplus, Walter J.; Balakrishnan, Jerry D.

    2000-08-01

    The advantages and disadvantages of using virtual 3-D audio in mission-critical, multimedia display interfaces were evaluated. The 3D audio platform seems to be an especially promising candidate for aircraft cockpits, flight control rooms, and other command and control environments in which operators must make mission-critical decisions while handling demanding and routine tasks. Virtual audio signal processing creates the illusion for a listener wearing conventional earphones that each of a multiplicity of simultaneous speech or audio channels is originating from a different, program-specified location in virtual space. To explore the possible uses of this new, readily available technology, a test bed simulating some of the conditions experienced by the chief flight test coordinator at NASA's Dryden Flight Research Center was designed and implemented. Thirty test subjects simultaneously performed routine tasks requiring constant hand-eye coordination, while monitoring four speech channels, each generating continuous speech signals, for the occurrence of pre-specified keywords. Performance measures included accuracy in identifying the keywords, accuracy in identifying the speaker of the keyword, and response time. We found substantial improvements on all of these measures when comparing virtual audio with conventional, monaural transmissions. We also explored the effect on operator performance of different spatial configurations of the audio sources in 3-D space, simulated movement (dither) in the source locations, and of providing graphical redundancy. Some of these manipulations were less effective and may even decrease performance efficiency, even though they improve some aspects of the virtual space simulation.

  1. 3D-modeling of Callisto's sputtered surface-exosphere environment

    NASA Astrophysics Data System (ADS)

    Lammer, Helmut; Pfleger, Martin; Lindqvist, Jesper; Lichtenegger, Herbert; Holmström, Mats; Vorburger, Audrey; Wurz, Peter; Barabash, Stas

    2016-04-01

    We study the stoichiometric release of various surface elements caused by plasma sputtering from an assumed icy and non-icy (i.e., chondritic) surface into the exosphere of the Jovian satellite Callisto. We apply a 3D plasma planetary interaction hybrid model for the evaluation of precipitation maps of magnetospheric H+, O+ and S+ sputter agents onto Callisto's surface. The obtained precipitation maps are then applied to the assumed surface compositions, where the related sputter yields are calculated by means of the 2013 SRIM code and are coupled with a 3D exosphere model. Sputtered surface particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction or return to the surface. We also study the effect of collisions between sputtered species and ambient O2 molecules, which form a tiny atmosphere near the satellite's surface, and compare the exosphere densities obtained from the 3D model with and without a background gaseous envelope with recent 1D model results. Finally, we discuss whether the Neutral gas and Ion Mass (NIM) spectrometer, part of the Particle Environment Package (PEP) on board the JUICE mission, will be able to detect sputtered particles from Callisto's icy and non-icy surface.
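In the collisionless, ballistic limit, the escape-or-return fate of a sputtered particle reduces to comparing its launch speed with the local escape speed. A hedged sketch; Callisto's mass and radius below are approximate literature values, and real exosphere models also track direction, gravity along the trajectory, and collisions:

```python
import math

# Does a sputtered particle launched from the surface escape Callisto's
# gravity (ballistic, collisionless limit)? Mass and radius approximate.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_CALLISTO = 1.076e23    # kg (approximate)
R_CALLISTO = 2.410e6     # m  (approximate)

def escape_speed(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

def escapes(launch_speed_m_s):
    return launch_speed_m_s >= escape_speed(M_CALLISTO, R_CALLISTO)

v_esc = escape_speed(M_CALLISTO, R_CALLISTO)
print(round(v_esc))     # roughly 2.4 km/s
print(escapes(1000.0))  # low-energy sputter ejecta mostly fall back: False
```

Most sputtered atoms leave with speeds well below a few km/s, which is why the bulk of the ejecta returns to the surface and only the energetic tail escapes.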

  2. Fast 3D modeling in complex environments using a single Kinect sensor

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Liu, Jingmeng

    2014-02-01

    Three-dimensional (3D) modeling technology has been widely used in reverse engineering, urban planning, robot navigation, and many other applications. How to build a dense model of the environment with limited processing resources is still a challenging topic. A fast 3D modeling algorithm that only uses a single Kinect sensor is proposed in this paper. For every color image captured by the Kinect, corner feature extraction is carried out first. Then a spiral search strategy is utilized to select a region of interest (ROI) that contains enough feature corners. Next, the iterative closest point (ICP) method is applied to the points in the ROI to align consecutive data frames. Finally, an analysis of which areas can be walked through by human beings is presented. Comparative experiments with the well-known KinectFusion algorithm have been done, and the results demonstrate that the accuracy of the proposed algorithm matches that of KinectFusion while the computing speed is nearly twice that of KinectFusion. 3D modeling of two public-garden scenes and traversable-area analysis in these regions further verified the feasibility of our algorithm.
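The alignment step inside such ICP pipelines solves, per iteration, a rigid registration between matched point sets. A minimal SVD-based (Kabsch) sketch of that building block, independent of the Kinect pipeline or the authors' implementation:

```python
import numpy as np

# One ICP building block: given matched 3D point pairs, find the rigid
# rotation R and translation t minimizing ||R @ src + t - dst|| (Kabsch).

def rigid_align(src, dst):
    """src, dst: (N, 3) arrays of corresponding points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known 90-degree rotation about z plus a translation:
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
dst = src @ Rz.T + np.array([2.0, 3.0, 4.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, Rz), np.allclose(t, [2, 3, 4]))  # True True
```

Full ICP wraps this closed-form solve in a loop that re-estimates point correspondences (e.g. by nearest neighbor) until the alignment converges.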

  3. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
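The RRT at the core of the planner can be sketched in its plain form: sample, extend from the nearest node, reject collisions, stop near the goal. The sketch below uses straight-line steering with a goal bias in a toy box world; the paper's bounded-curvature needle kinematics, duty-cycling, and reachability-guided sampling are not reproduced:

```python
import math
import random

# Minimal RRT in a 3D box with one spherical obstacle. Straight-line
# steering only; environment, step size, and obstacle are illustrative.

random.seed(7)
OBSTACLES = [((5.0, 5.0, 5.0), 2.0)]          # (center, radius) spheres
START, GOAL = (1.0, 1.0, 1.0), (9.0, 9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.6

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def collision_free(p):
    return all(dist(p, c) > r for c, r in OBSTACLES)

def steer(src, dst):
    d = dist(src, dst)
    if d <= STEP:
        return dst
    return tuple(s + (t - s) * STEP / d for s, t in zip(src, dst))

parent = {START: None}
for _ in range(2000):
    # 10% goal bias speeds convergence toward the target region.
    sample = GOAL if random.random() < 0.1 else \
        tuple(random.uniform(0.0, 10.0) for _ in range(3))
    nearest = min(parent, key=lambda n: dist(n, sample))
    new = steer(nearest, sample)
    if collision_free(new):
        parent[new] = nearest
        if dist(new, GOAL) < GOAL_TOL:
            break

# Extract the path by walking parents back from the node nearest the goal.
node = min(parent, key=lambda n: dist(n, GOAL))
path = [node]
while parent[node] is not None:
    node = parent[node]
    path.append(node)
path.reverse()
print(path[0] == START, all(collision_free(p) for p in path[1:]))
```

Replacing the straight-line `steer` with constant-curvature arcs, and the nearest-node rule with a reachability-aware metric, is precisely where the steerable-needle planner departs from this vanilla RRT.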

  4. Autostereoscopic Displays for Visualization of Urban Environments

    DTIC Science & Technology

    2006-09-01

    Markov, S. Kupiec and A. Travis, Two approaches in the development of autostereoscopic 3D display systems, 7th International Symposium on Display… providing collaborative viewing of real-time 3D scenery will be presented and discussed. Both techniques provide multiscopic “look around” capabilities and… and imagery. Recent events have shown a severe need and demand for systems capable of high-level 3D visualization upon homeland security posed by…

  5. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
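The kernel-density idea behind a 3D home range can be sketched with a plain fixed-bandwidth product-Gaussian kernel over x, y, z telemetry fixes. This is the generic KDE, not the movement-based estimator of the paper; the fixes and bandwidth below are illustrative:

```python
import math

# Plain 3D Gaussian kernel density estimate over telemetry fixes (x, y, z).
# Generic fixed-bandwidth KDE for illustration only; the paper's estimator
# additionally conditions on movement between consecutive fixes.

def kde3(point, fixes, bandwidth):
    """Density at `point` from a list of (x, y, z) telemetry fixes."""
    h, n = bandwidth, len(fixes)
    norm = n * (h * math.sqrt(2 * math.pi)) ** 3
    total = 0.0
    for fx in fixes:
        q = sum((p - f) ** 2 for p, f in zip(point, fx)) / (2 * h * h)
        total += math.exp(-q)
    return total / norm

fixes = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2)]  # toy x, y, altitude fixes
print(kde3((0, 0, 0), fixes, bandwidth=1.0) >
      kde3((5, 5, 5), fixes, bandwidth=1.0))  # denser near the fixes: True
```

Thresholding such a density at, say, its 95% probability contour yields the 3D utilization volume analogous to a classical 2D home-range isopleth.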

  6. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  7. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  8. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  9. 3D Simulation Technology as an Effective Instructional Tool for Enhancing Spatial Visualization Skills in Apparel Design

    ERIC Educational Resources Information Center

    Park, Juyeon; Kim, Dong-Eun; Sohn, MyungHee

    2011-01-01

    The purpose of this study is to explore the effectiveness of 3D simulation technology for enhancing spatial visualization skills in apparel design education and further to suggest an innovative teaching approach using the technology. Apparel design majors in an introductory patternmaking course, at a large Midwestern University in the United…

  10. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid, with volume of base added in one direction and overall system dilution in the other. What emerge are surface features that…
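For the strong acid/strong base case, the pH at any grid point follows from charge balance with water autoionization: [H+] − Kw/[H+] equals the net excess acid concentration. A generic sketch of that calculation (not the authors' software); concentrations and volumes are illustrative:

```python
import math

# pH of a strong acid (Ca, Va) titrated with strong base (Cb, Vb), with an
# added dilution volume Vw of water -- the two axes of a "topo" grid.
# Charge balance with autoionization: [H+] - Kw/[H+] = C_net.

KW = 1.0e-14  # water ion product at 25 degrees C

def ph(ca, va_ml, cb, vb_ml, vw_ml=0.0):
    v_tot = va_ml + vb_ml + vw_ml
    c_net = (ca * va_ml - cb * vb_ml) / v_tot  # excess strong acid, mol/L
    h = (c_net + math.sqrt(c_net ** 2 + 4 * KW)) / 2  # positive root
    return -math.log10(h)

# Equivalence-point "cliff": pH jumps steeply around Vb = 50 mL.
print(ph(0.1, 50, 0.1, 49.9))  # just before equivalence: acidic (~4)
print(ph(0.1, 50, 0.1, 50.0))  # at equivalence: 7.0
print(ph(0.1, 50, 0.1, 50.1))  # just past equivalence: basic (~10)
```

Sweeping `vb_ml` along one axis and `vw_ml` along the other produces the surface: the steep jump above is the equivalence-point cliff, and increasing `vw_ml` generates the dilution ramps toward pH 7.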

  11. Modeling Airport Ground Operations using Discrete Event Simulation (DES) and X3D Visualization

    DTIC Science & Technology

    2008-03-01

    studies, because it offers a number of features, for example: 1. Open source 2. Character animation support (CAL3D) 3. Game engine with… Simulation, DES, Simkit, Diskit, Viskit, Savage, XML, Distributed Interactive Simulation, DIS, Blender, X3D Edit… Blender Authoring Tool

  12. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  13. From digital mapping to GIS-based 3D visualization of geological maps: example from the Western Alps geological units

    NASA Astrophysics Data System (ADS)

    Balestro, Gianni; Cassulo, Roberto; Festa, Andrea; Fioraso, Gianfranco; Nicolò, Gabriele; Perotti, Luigi

    2015-04-01

    Collection of field geological data and sharing of geological maps are nowadays greatly enhanced by digital tools and IT (Information Technology) applications. Portable hardware allows accurate GPS localization of data and homogeneous storing of information in field databases, whereas GIS (Geographic Information Systems) applications enable generalization of field data and the realization of geological map databases. A further step in the digital processing of geological map information consists of building virtual visualizations by means of GIS-based 3D viewers, which allow projection and draping of significant geological features over photo-realistic terrain models. Digital fieldwork activities carried out by the authors in the Western Alps, together with the building of geological map databases and related 3D visualizations, are an example of the application of the digital technologies described above. Digital geological mapping was performed by means of GIS mobile software loaded on a rugged handheld device, and lithological, structural and geomorphological features with their attributes were stored in different layers that form the field database. The latter was then generalized through usual map-processing steps such as outcrop interpolation, characterization of geological boundaries and selection of meaningful point observations. This map database was used for building virtual visualizations through a GIS-based 3D viewer that loaded a detailed DTM (5 m resolution) and aerial images. 3D visualizations were focused on projection and draping of significant stratigraphic contacts (e.g. contacts that separate different Quaternary deposits) and tectonic contacts (i.e. exhumation-related contacts that dismembered original ophiolite sequences). In our experience, digital geological mapping and related databases ensured homogeneous data storing and effective sharing of information, and allowed the subsequent building of 3D GIS-based visualizations. The latter gave

  14. Towards Perceptual Interface for Visualization Navigation of Large Data Sets Using Gesture Recognition with Bezier Curves and Registered 3-D Data

    SciTech Connect

    Shin, M C; Tsap, L V; Goldgof, D B

    2003-03-20

    This paper presents a gesture recognition system for visualization navigation. Scientists are interested in developing interactive settings for exploring large data sets in an intuitive environment. The input consists of registered 3-D data. A geometric method using Bezier curves is used for trajectory analysis and classification of gestures. Hand gesture speed is incorporated into the algorithm to enable correct recognition from trajectories with variations in hand speed. The method is robust and reliable: the correct hand identification rate is 99.9% (over 1641 frames), modes of hand movements are identified correctly 95.6% of the time, and the recognition rate (given the right mode) is 97.9%. An application to gesture-controlled visualization of 3D bioinformatics data is also presented.
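Bezier curve evaluation, the geometric primitive underlying such trajectory analysis, follows de Casteljau's recursion of repeated linear interpolation. A minimal sketch (the paper's classifier and speed handling are not reproduced):

```python
# De Casteljau evaluation of a Bezier curve from 2D control points --
# the curve primitive used for gesture-trajectory modeling. Control
# points here are toy values, not recorded hand trajectories.

def de_casteljau(control_points, t):
    """Evaluate the Bezier curve defined by control_points at t in [0, 1]."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Each pass linearly interpolates between consecutive points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier: endpoints are interpolated; (1, 2) pulls the midpoint up.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0)
print(de_casteljau(ctrl, 0.5))  # (1.0, 1.0)
print(de_casteljau(ctrl, 1.0))  # (2.0, 0.0)
```

Fitting a low-order Bezier to a sampled hand trajectory compresses it to a few control points, which is what makes curve-based gesture comparison cheap.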

  15. Educational Material for 3D Visualization of Spine Procedures: Methods for Creation and Dissemination.

    PubMed

    Cramer, Justin; Quigley, Edward; Hutchins, Troy; Shah, Lubdha

    2017-01-12

    Spine anatomy can be difficult to master and is essential for performing spine procedures. We sought to utilize the rapidly expanding field of 3D technology to create freely available, interactive educational materials for spine procedures. Our secondary goal was to convey lessons learned about 3D modeling and printing. This project involved two parallel processes: the creation of 3D-printed physical models and of interactive digital models. We segmented illustrative CT studies of the lumbar and cervical spine to create 3D models and then printed them using a consumer 3D printer and a professional 3D printing service. We also included downloadable versions of the models in an interactive eBook and a platform-independent web viewer. We then provided these educational materials to residents, with a pretest and posttest to assess efficacy. The "Spine Procedures in 3D" eBook had been downloaded 71 times as of October 5, 2016. All models used in the book are available for download and printing. Regarding test results, the mean exam score improved from 70% to 86%, with the most dramatic improvement seen in the least experienced trainees. Participants reported increased confidence in performing lumbar punctures after exposure to the material. We demonstrate the value of 3D models, both digital and printed, in learning spine procedures. Moreover, 3D printing and modeling is a rapidly expanding field with a large potential role for radiologists. We have detailed our process for creating and sharing 3D educational materials in the hope of motivating and enabling similar projects.

  16. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  17. Nonthreshold-based event detection for 3D environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches for event detection are mainly based on predefined threshold values and, thus, are often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds, but rather by some complex pattern in the full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for the real 3D sensor monitoring environment. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events by matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to demonstrate the efficacy and efficiency of this approach in detecting events of complex phenomena from real-life records.
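    The core idea, matching an observed spatiotemporal window against stored patterns rather than testing per-attribute thresholds, can be illustrated with a toy nearest-template matcher. Everything below (the function, the normalization, the template names) is a hypothetical sketch, not the authors' algorithm:

```python
import numpy as np

def detect_event(window, templates):
    # Nearest-template matching over a spatiotemporal window
    # (rows = time steps, columns = grid cells): normalize away offset
    # and scale, then pick the closest stored pattern, instead of
    # testing per-attribute threshold overruns.
    def norm(x):
        x = np.asarray(x, float)
        x = x - x.mean()
        s = np.linalg.norm(x)
        return x / s if s > 0 else x
    w = norm(window)
    dists = {name: np.linalg.norm(w - norm(t)) for name, t in templates.items()}
    return min(dists, key=dists.get), dists
```

    Because each pattern is mean-removed and unit-normalized, the match is insensitive to absolute sensor offsets and gains, which is what lets a "pattern" describe an event that no single threshold could.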

  18. Three-dimensional (3D) visualization of reflow porosity and modeling of deformation in Pb-free solder joints

    SciTech Connect

    Dudek, M.A.; Hunter, L.; Kranz, S.; Williams, J.J.; Lau, S.H.; Chawla, N.

    2010-04-15

    The volume, size, and dispersion of porosity in solder joints are known to affect mechanical performance and reliability. Most of the techniques used to characterize the three-dimensional (3D) nature of these defects are destructive. With the enhancements in high resolution computed tomography (CT), the detection limits of intrinsic microstructures have been significantly improved. Furthermore, the 3D microstructure of the material can be used in finite element models to understand their effect on microscopic deformation. In this paper we describe a technique utilizing high resolution (< 1 {mu}m) X-ray tomography for the three-dimensional (3D) visualization of pores in Sn-3.9Ag-0.7Cu/Cu joints. The characteristics of reflow porosity, including volume fraction and distribution, were investigated for two reflow profiles. The size and distribution of porosity size were visualized in 3D for four different solder joints. In addition, the 3D virtual microstructure was incorporated into a finite element model to quantify the effect of voids on the lap shear behavior of a solder joint. The presence, size, and location of voids significantly increased the severity of strain localization at the solder/copper interface.
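    The porosity metrics discussed here (volume fraction and pore size distribution) can be computed from a binarized CT volume. A minimal sketch, assuming a boolean voxel array where True marks void; this is illustrative only, not the authors' tomography pipeline:

```python
import numpy as np
from collections import deque

def pore_stats(pores):
    # pores: boolean 3D voxel array, True = void (e.g. a thresholded CT
    # volume). Returns the porosity volume fraction and the size of each
    # 6-connected pore, found by breadth-first search.
    pores = np.asarray(pores, bool)
    seen = np.zeros_like(pores)
    sizes = []
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(pores)):
        if seen[start]:
            continue
        seen[start] = True
        queue, size = deque([start]), 0
        while queue:
            x, y, z = queue.popleft()
            size += 1
            for dx, dy, dz in steps:
                p = (x + dx, y + dy, z + dz)
                if all(0 <= c < s for c, s in zip(p, pores.shape)) \
                        and pores[p] and not seen[p]:
                    seen[p] = True
                    queue.append(p)
        sizes.append(size)
    return float(pores.mean()), sorted(sizes)
```

    The sorted per-pore sizes give the size distribution directly; the volume fraction is just the mean of the boolean mask.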

  19. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    PubMed

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions.

  20. The 3D Visualization of Slope Terrain in Sun Moon Lake.

    NASA Astrophysics Data System (ADS)

    Deng, F.; Gwo-shyn, S.; Pei-Kun, L.

    2015-12-01

    side-slope using the multi-beam sounder below the water surface. Finally, the side-scan sonar image is acquired and merged with contour lines produced from underwater topographic DTM data. By combining these data into different 3D images, our purpose is to provide good visualization for checking whether the side-slope DTM survey data are properly quality-controlled.

  1. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-06-21

    Remote monitoring of Parkinson's Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated on the UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team.
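    Before any classifier can rate the movement, scalar features must be extracted from the sensor-derived angle trace. A hedged sketch of two plausible features (movement frequency via zero crossings, and amplitude); the function and feature set are illustrative assumptions, not the study's actual pipeline or its J48 feature vector:

```python
import numpy as np

def pronosup_features(angle, fs):
    # Movement frequency (via zero crossings of the mean-removed trace)
    # and amplitude of a pronation-supination angle signal sampled at fs Hz.
    a = np.asarray(angle, float)
    a = a - a.mean()
    # np.diff on a boolean array is an XOR: True wherever the sign flips.
    half_cycles = np.count_nonzero(np.diff(np.signbit(a)))
    duration = len(a) / fs
    return {
        "frequency_hz": half_cycles / (2 * duration),
        "amplitude": (a.max() - a.min()) / 2,
    }
```

    Slowing and decreasing amplitude across repetitions are exactly the kinds of quantities the UPDRS pronation-supination item asks a rater to judge, which is why such features are natural classifier inputs.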

  2. Active Regions on the Farside of the Sun as Seen from Mars: 3D Visualization with Marie Data

    NASA Technical Reports Server (NTRS)

    Saganti, P. B.; Cucinotta, F. A.; Cleghorn, T. F.; Zeitlin, C. J.

    2004-01-01

    From March 2002, the MARIE (Martian Radiation Environment Experiment) instrument of NASA-JSC onboard the 2001 Mars Odyssey spacecraft has been providing radiation data from Martian orbit. During the past two years, the Mars-Sun-Earth orbital alignment provided a wealth of observation opportunities between 180 degrees (August 2002) and 0 degrees (October 2003). During this time, the MARIE data included the background GCR (Galactic Cosmic Rays) and several SPE (Solar Particle Events) enhanced radiation dose-rate measurements at Mars. The MARIE instrument provided a unique data set of radiation dose-rates at Mars from the active regions on the solar disk facing Mars. The SPE observations of October 2002 at Mars by the MARIE instrument are unique, as there were no indications of these events towards the Earth at that time. A nearly 40-fold increase over the quiet-time GCR dose-rate was noted, from about 25 mrad/day to nearly 1000 mrad/day at Mars. Radiation dose-rate enhancement was not observed toward the Earth or in Low Earth Orbit (LEO) during this time. Understanding the active regions on the Sun that are likely to result in SPEs on the far side will also be of concern for future deep space exploration beyond LEO. We present the observations of these SPE-enhanced dose rates due to active regions on the far side of the Sun, with 3D visualization of the solar disks facing Mars and Earth.

  3. GIS based 3D visualization of subsurface and surface lineaments / faults and their geological significance, northern tamil nadu, India

    NASA Astrophysics Data System (ADS)

    Saravanavel, J.; Ramasamy, S. M.

    2014-11-01

    The study area falls in the southern part of the Indian Peninsula, comprising hard crystalline rocks of the Archaeozoic and Proterozoic Eras. In the present study, GIS-based 3D visualizations of gravity, magnetic, resistivity and topographic datasets were made, and therefrom the basement lineaments, shallow subsurface lineaments and surface lineaments/faults were interpreted. These lineaments were classified as category-1, i.e. exclusively surface lineaments; category-2, i.e. surface lineaments having connectivity with shallow subsurface lineaments; and category-3, i.e. surface lineaments having connectivity with both shallow subsurface lineaments and basement lineaments. The three classes of lineaments were analyzed in conjunction with known mineral occurrences and the historical seismicity of the study area in a GIS environment. The study revealed that the category-3 NNE-SSW to NE-SW lineaments have greater control over the mineral occurrences, and that the N-S, NNE-SSW and NE-SW faults/lineaments control the seismicity in the study area.

  4. RADStation3G: a platform for cardiovascular image analysis integrating PACS, 3D+t visualization and grid computing.

    PubMed

    Perez, F; Huguet, J; Aguilar, R; Lara, L; Larrabide, I; Villa-Uriol, M C; López, J; Macho, J M; Rigo, A; Rosselló, J; Vera, S; Vivas, E; Fernàndez, J; Arbona, A; Frangi, A F; Herrero Jover, J; González Ballester, M A

    2013-06-01

    RADStation3G is a software platform for cardiovascular image analysis and surgery planning. It provides image visualization and management in 2D, 3D and 3D+t; data storage (images or operational results) in a PACS (using DICOM); and exploitation of patients' data such as images and pathologies. Further, it provides support for computationally expensive processes with grid technology. In this article we first introduce the platform and present a comparison with existing systems, according to the platform's modules (for cardiology, angiology, PACS archived enriched searching and grid computing), and then RADStation3G is described in detail.

  5. Method for visualization and presentation of priceless old prints based on precise 3D scan

    NASA Astrophysics Data System (ADS)

    Bunsch, Eryk; Sitnik, Robert

    2014-02-01

    Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for the documentation and visualization of such objects. When presentation of the original heritage object is impossible, there is a need for a method allowing documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of the Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy", consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the absence of ultraviolet or infrared radiation emission toward the measured object. Documentation of one page from the book requires about 380 directional measurements, which constitute about 3 billion sample points; the distance between points in the cloud is 20 μm. A measurement with an MSD (measurement sampling density) of 2500 points makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of the software, which operates directly on point clouds, is the ability to freely manipulate the virtual light source.

  6. 3D Visualisation and Artistic Imagery to Enhance Interest in "Hidden Environments"--New Approaches to Soil Science

    ERIC Educational Resources Information Center

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-01-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…

  7. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  8. 3-D visualization and quantitation of microvessels in transparent human colorectal carcinoma [corrected].

    PubMed

    Liu, Yuan-An; Pan, Shien-Tung; Hou, Yung-Chi; Shen, Ming-Yin; Peng, Shih-Jung; Tang, Shiue-Cheng; Chung, Yuan-Chiang

    2013-01-01

    Microscopic analysis of tumor vasculature plays an important role in understanding the progression and malignancy of colorectal carcinoma. However, due to the geometry of blood vessels and their connections, standard microtome-based histology is limited in providing the spatial information of the vascular network with a 3-dimensional (3-D) continuum. To facilitate 3-D tissue analysis, we prepared transparent human colorectal biopsies by optical clearing for in-depth confocal microscopy with CD34 immunohistochemistry. Full-depth colons were obtained from colectomies performed for colorectal carcinoma. Specimens were prepared away from (control) and at the tumor site. Taking advantage of the transparent specimens, we acquired anatomic information up to 200 μm in depth for qualitative and quantitative analyses of the vasculature. Examples are given to illustrate: (1) the association between the tumor microstructure and vasculature in space, including the perivascular cuffs of tumor outgrowth, and (2) the difference between the 2-D and 3-D quantitation of microvessels. We also demonstrate that the optically cleared mucosa can be retrieved after 3-D microscopy to perform the standard microtome-based histology (H&E staining and immunohistochemistry) for systematic integration of the two tissue imaging methods. Overall, we established a new tumor histological approach to integrate 3-D imaging, illustration, and quantitation of human colonic microvessels in normal and cancerous specimens. This approach has significant promise to work with the standard histology to better characterize the tumor microenvironment in colorectal carcinoma.

  9. Accurate visualization and quantification of coronary vasculature by 3D/4D fusion from biplane angiography and intravascular ultrasound

    NASA Astrophysics Data System (ADS)

    Wahle, Andreas; Mitchell, Steven C.; Olszewski, Mark E.; Long, Ryan M.; Sonka, Milan

    2001-01-01

    In the rapidly evolving field of intravascular ultrasound (IVUS) for tissue characterization and visualization, the assessment of vessel morphology still lacks a geometrically correct 3D reconstruction. The IVUS frames are usually stacked up to form a straight vessel, neglecting curvature and the axial twisting of the catheter during the pullback. This paper presents a comprehensive system for geometrically correct reconstruction of IVUS images by fusion with biplane angiography, thus combining the advantages of both modalities. Vessel cross-section and tissue characteristics are obtained from IVUS, while the 3D locations are derived by geometrical reconstruction from the angiographic projections. ECG-based timing ensures a proper match of the image data with the respective heart phase. The fusion is performed for each heart phase individually, thus yielding the 4-D data as a set of 3-D reconstructions.

  10. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.

    2009-04-01

    Anatolia has always been more than a point of transit, a bridge between West and East; it has been a home for ideas moving in all directions. So it is that in the Roman and post-Roman periods the role of Anatolia in general, and of Ancyra (the Roman name of Ankara) in particular, was of the greatest importance. The visible archaeological remains of the Roman period in Ankara are the Roman Bath, the Gymnasium, the Temple of Augustus and Rome, the Street, the Theatre, and the City Defence-Wall. Caesar Augustus, the first Roman Emperor, conquered Asia Minor in 25 BC. A marble temple was then built in Ancyra, the administrative capital of the province and today the capital of the Turkish Republic, Ankara. This monument was consecrated to the Emperor and to the Goddess Rome, and is supposed to have been built over an earlier temple dedicated to Kybele and Men between 25-20 BC. After the death of Augustus in 14 AD, a copy of the text of "Res Gestae Divi Augusti" was inscribed on the interior of the pronaos in Latin, whereas a Greek translation is also present on an exterior wall of the cella. In the 5th century, it was converted into a church by the Byzantines. The aim of this study is to determine old buried archaeological remains at the Augustus temple, the Roman Bath, and the governorship agora in the Ulus district. These remains were imaged with transparent three-dimensional (3D) visualization of ground penetrating radar (GPR) data. Parallel two-dimensional (2D) GPR profile data were acquired in the study areas, and a 3D data volume was then built from the parallel 2D profiles. A simplified amplitude-colour range and an appropriate opacity function were constructed, and transparent 3D images were obtained to activate buried

  11. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.

  12. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298

  13. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding reverberance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
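    The spectral-notch idea can be illustrated with a standard biquad notch filter (the "RBJ audio EQ cookbook" form). This generic filter is only an analogue of the elevation-cue filtering described above; the paper's actual notch frequencies and bandwidths are not given here, so the parameters below are placeholders:

```python
import numpy as np

def notch_coeffs(f0, fs, Q=10.0):
    # RBJ biquad notch: unit gain away from f0, a spectral null
    # centred on f0 with bandwidth roughly f0/Q.
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def biquad_filter(b, a, x):
    # Direct-form I difference equation (coefficients normalized
    # so that a[0] == 1).
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y
```

    A tone at the notch centre is strongly attenuated while tones well outside the f0/Q bandwidth pass essentially unchanged, which is the mechanism by which a carefully placed notch can mimic the pinna's elevation-dependent spectral cues.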

  14. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    PubMed

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-02-15

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
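    A minimal point-to-point ICP loop shows the core iteration that the paper's octree search, early-warning mechanism, and escape scheme build on. This bare sketch uses brute-force nearest neighbours and the closed-form SVD alignment, and has none of those enhancements; it is illustrative only:

```python
import numpy as np

def best_rigid_transform(A, B):
    # Closed-form (Kabsch/SVD) least-squares rotation and translation
    # mapping point set A onto the corresponding point set B.
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=50, tol=1e-10):
    # Classic point-to-point ICP: alternate brute-force nearest-neighbour
    # matching with closed-form rigid alignment until the error stalls.
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(cur)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    R, t = best_rigid_transform(src, cur)  # cumulative transform
    return R, t, err
```

    The stall test on `err` is exactly where the paper's early-warning mechanism would fire: a plateau at a non-zero error signals a local minimum, which their heuristic escape scheme then perturbs out of.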

  15. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report describes the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  16. Fusion of CTA and XA data using 3D centerline registration for plaque visualization during coronary intervention

    NASA Astrophysics Data System (ADS)

    Kaila, Gaurav; Kitslaar, Pieter; Tu, Shengxian; Penicka, Martin; Dijkstra, Jouke; Lelieveldt, Boudewijn

    2016-03-01

    Coronary Artery Disease (CAD) results in the buildup of plaque below the intima layer inside the vessel wall of the coronary arteries, causing narrowing of the vessel and obstructing blood flow. Percutaneous coronary intervention (PCI) is usually performed to enlarge the vessel lumen and restore normal blood flow to the heart. During PCI, X-ray imaging is used to guide wire movement through the vessels to the area of stenosis. While X-ray imaging allows for good lumen visualization, information on plaque type is unavailable. Also, due to the projection nature of X-ray imaging, additional drawbacks such as foreshortening and overlap of vessels limit the efficacy of the cardiac intervention. Reconstruction of 3D vessel geometry from biplane X-ray acquisitions helps to overcome some of these projection drawbacks; however, the plaque type information remains an issue. In contrast, imaging using computed tomography angiography (CTA) can provide information on both lumen and plaque type, and allows us to generate a complete 3D coronary vessel tree unaffected by the foreshortening and overlap problems of X-ray imaging. In this paper, we combine biplane X-ray images with CT angiography to visualize three plaque types (dense calcium, fibrous fatty and necrotic core) on X-ray images. 3D registration using three different registration methods is performed between coronary centerlines available from the X-ray images and from the CTA volume, along with 3D plaque information available from CTA. We compare the different registration methods and evaluate their performance based on 3D root mean squared errors. Two methods are used to project this 3D information onto the 2D planes of the biplane X-ray images. Validation of our approach is performed using artificial biplane X-ray datasets.
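    Two pieces of such an evaluation can be sketched generically: the 3D RMS error between corresponding registered centerline points, and a pinhole projection of 3D points onto a 2D image plane. Both functions are simplified stand-ins under assumed inputs; the paper's projection uses the calibrated biplane acquisition geometry, which is not reproduced here:

```python
import numpy as np

def rmse_3d(a, b):
    # Root-mean-square 3D distance between corresponding points.
    d = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def project_points(pts3d, K, R, t):
    # Ideal pinhole camera: x ~ K (R X + t), then divide by depth
    # to obtain 2D pixel coordinates.
    X = np.asarray(pts3d, float) @ R.T + t
    uvw = X @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```

    With corresponding centerline samples from the two modalities, `rmse_3d` gives a single scalar to compare registration methods, and `project_points` maps the registered 3D plaque labels into an image plane for overlay.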

  17. Technical note: Reliability of Suchey-Brooks and Buckberry-Chamberlain methods on 3D visualizations from CT and laser scans.

    PubMed

    Villa, Chiara; Buckberry, Jo; Cattaneo, Cristina; Lynnerup, Niels

    2013-05-01

    Previous studies have reported that the ageing method of Suchey-Brooks (pubic bone) and some of the features applied by Lovejoy et al. and Buckberry-Chamberlain (auricular surface) can be confidently performed on 3D visualizations from CT-scans. In this study, seven observers applied the Suchey-Brooks and Buckberry-Chamberlain methods to 3D visualizations based on CT-scans and, for the first time, to 3D visualizations from laser scans. We examined how the bone features can be evaluated on 3D visualizations and whether the different modalities (direct observation of bones, 3D visualization from CT-scans and from laser scans) yield similar scores across observers. We found the best inter-observer agreement for the bones versus 3D visualizations, with the highest values for the auricular surface. Between the 3D modalities, less variability was obtained for the 3D laser visualizations. Fair inter-observer agreement was obtained in the evaluation of the pubic bone in all modalities. In 3D visualizations of the auricular surfaces, transverse organization and apical changes could be evaluated, although with high inter-observer variability; micro-porosity, macro-porosity and surface texture were very difficult to score. In conclusion, these methods were developed for dry bones, where they perform best. The Suchey-Brooks method can be applied to 3D visualizations from CT or laser scans, but with less accuracy than on dry bone. The Buckberry-Chamberlain method should be modified before application to 3D visualizations. Future investigation should focus on a different approach and different features: 3D laser scans could be analyzed with mathematical approaches, and sub-surface features should be explored on CT-scans.

  18. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  19. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, they offer little extensibility for processing dynamic data, especially real-time data. In this paper, we propose a novel real-time 3D visual earthquake information publishing model based on WebGIS and a digital globe to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators, and facilitates better communication between geoscientists and the interested public.

  20. A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data.

    PubMed

    Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J

    2014-10-01

    This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud.
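The 3D box-counting estimate described above can be sketched in a few lines (a simplified single-threaded illustration, not the platform's CUDA/WebGL implementation; the function name and box sizes are arbitrary choices):

```python
import numpy as np

def box_counting_dimension(volume, sizes=(2, 4, 8, 16)):
    """Estimate the 3D fractal dimension of a binary volume by box counting.

    For each box size s, count the s*s*s boxes containing at least one
    occupied voxel, then fit log(count) against log(1/s); the slope of
    that line is the box-counting dimension estimate.
    """
    counts = []
    for s in sizes:
        # Trim so each axis is a multiple of s, then pool voxels into boxes.
        trimmed = volume[:volume.shape[0] // s * s,
                         :volume.shape[1] // s * s,
                         :volume.shape[2] // s * s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s,
                                trimmed.shape[2] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3, 5))))
    # Slope of log(count) vs. log(1/size) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A solid cube fills space, so its estimated dimension is 3.
solid = np.ones((32, 32, 32), dtype=bool)
print(round(box_counting_dimension(solid), 2))  # → 3.0
```

Real brain masks are not exactly self-similar, so in practice the fit is taken over a range of scales where the log-log relation is approximately linear.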

  1. Visualization of Mesenchymal Stromal Cells in 2D- and 3D-Cultures by Scanning Electron Microscopy with Lanthanide Contrasting.

    PubMed

    Novikov, I A; Vakhrushev, I V; Antonov, E N; Yarygin, K N; Subbot, A M

    2017-02-01

    Mesenchymal stromal cells from deciduous teeth in 2D- and 3D-cultures on culture plastic, silicate glass, porous polystyrene, and experimental polylactoglycolide matrices were visualized by scanning electron microscopy with lanthanide contrasting. Supravital staining of cell cultures with a lanthanide-based dye (neodymium chloride) preserved normal cell morphology and allowed assessment of the matrix properties of the carriers. The developed approach can be used for the development of biomaterials for tissue engineering.

  2. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of this digital content is not fully realized until it is used to interactively communicate the artefacts' significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising, and making more accessible, the Egyptian funerary objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and comprehension of the public through interactivity. Four important artefacts were considered for this purpose: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest tied to important symbols or features of each artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario and the results of tests carried out in the lab.

  3. 3D Visualization of Radar Backscattering Diagrams Based on OpenGL

    NASA Astrophysics Data System (ADS)

    Zhulina, Yulia V.

    2004-12-01

    A digital method of calculating the radar backscattering diagrams is presented. The method uses a digital model of an arbitrary scattering object in the 3D graphics package "OpenGL" and calculates the backscattered signal in the physical optics approximation. The backscattering diagram is constructed by means of rotating the object model around the radar-target line.

  4. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  5. Visualization and mapping of neurosurgical functional brain data onto a 3-D MR-based model of the brain surface.

    PubMed

    Modayur, B R; Prothero, J; Rosse, C; Jakobovits, R; Brinkley, J F

    1996-01-01

    The Human Brain Project was initiated with the goal of developing methods for managing and sharing information about the brain. As a prototype Human Brain Project application we are developing a system for organizing, visualizing, integrating and sharing information about human language function. The goal of the brain mapping component of our work, described in this article, is to generate the 3D location and extent of cortical language sites with respect to a uniform, 3D patient coordinate system. The language sites of individual patients can then be combined with or related to other patient data in terms of Talairach, surface-based, or other deformable coordinate systems. Language site mapping is done by visually comparing an intraoperative photograph with the rendered image (from MRI data). The techniques outlined in this article have been utilized to map cortical language sites of six patients. Preliminary results point to the adequacy of our volume visualizations for language mapping. The strength of the visualization scheme lies in the combination of interactive segmentation with volume and surface visualization. We are now in the process of acquiring more patient data to further validate the usefulness of our method.

  6. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that retrieves medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
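Of the server-side rendering techniques listed above, maximum intensity projection is the simplest to sketch: each output pixel keeps only the brightest voxel encountered along the viewing axis (a minimal NumPy illustration, not the application's actual renderer):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Collapse a 3D volume to a 2D image by keeping, for each ray
    along `axis`, only the brightest voxel (classic MIP)."""
    return volume.max(axis=axis)

# Toy 2x2x2 volume: the MIP along axis 0 keeps the larger value of
# each front/back voxel pair, giving [[4, 5], [7, 6]].
vol = np.array([[[1, 5], [2, 0]],
                [[4, 3], [7, 6]]])
print(maximum_intensity_projection(vol))
```

MIP is popular for vascular and bright-lesion data precisely because it needs no transfer function, which also makes it cheap enough for a proxy server to recompute per interaction.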

  7. Dynamic 3D MR Visualization and Detection of Upper Airway Obstruction during Sleep using Region Growing Segmentation

    PubMed Central

    Kim, Yoon-Chul; Khoo, Michael C.K.; Davidson Ward, Sally L.; Nayak, Krishna S.

    2016-01-01

    Goal: We demonstrate a novel and robust approach for visualization of upper airway dynamics and detection of obstructive events from dynamic 3D magnetic resonance imaging (MRI) scans of the pharyngeal airway. Methods: This approach uses 3D region growing, where the operator selects a region of interest that includes the pharyngeal airway, places two seeds in the patent airway, and determines a threshold for the first frame. Results: This approach required 5 sec/frame of CPU time compared to 10 min/frame of operator time for manual segmentation. It compared well with manual segmentation, resulting in Dice coefficients of 0.84 to 0.94, whereas the Dice coefficients for two manual segmentations by the same observer were 0.89 to 0.97. It was also able to automatically detect 83% of collapse events. Conclusion: Use of this simple semi-automated segmentation approach improves the workflow of novel dynamic MRI studies of the pharyngeal airway and enables visualization and detection of obstructive events. Significance: Obstructive sleep apnea is a significant public health issue affecting 4-9% of adults and 2% of children. Recently, 3D dynamic MRI of the upper airway has been demonstrated during natural sleep, with sufficient spatio-temporal resolution to non-invasively study patterns of airway obstruction in young adults with OSA. This work makes it practical to analyze these long scans and visualize important factors in an MRI sleep study, such as the time, site, and extent of airway collapse. PMID:26258929
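The Dice coefficients quoted above compare the semi-automated masks against manual ones; the metric itself is a one-liner over binary masks (a minimal NumPy sketch; the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels, 4 shared
print(dice_coefficient(a, b))  # 2*4 / (4+6) → 0.8
```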

  8. Effect of echo contrast media on the visualization of transverse sinus thrombosis with transcranial 3-D duplex sonography.

    PubMed

    Delcker, A; Häussermann, P; Weimar, C

    1999-09-01

    Transcranial duplex sonography has the capacity to detect venous flow, as in the transverse sinus. During a 6-month period, 28 consecutive patients (mean age 55 y) with a clinically suspected diagnosis of cerebral sinus thrombosis were included in the study. All patients were examined using 3-D ultrasound equipment within 24 h of having undergone either venous computerized tomography (CT), venous magnetic resonance imaging (MRI) or cerebral angiography. A total of 22 healthy patients had a normal venous CT, venous MRI or cerebral angiography of both transverse sinuses. Before echo contrast enhancement, the transverse sinus could be visualized in only 2 of these 44 sinuses (22 patients). A total of 6 patients with a unilaterally missed transverse sinus in 3-D ultrasound suffered from sinus thrombosis (n = 3), hypoplasia (n = 2) or aplasia (n = 1) of the unilateral transverse sinus in neuroradiological tests. In none of the patients with a thrombosis of the transverse sinus did ultrasound contrast media application improve the visualization of the affected sinus. Our study confirms that the normal transverse sinus, insonated through the contralateral temporal bone, often cannot be visualized without the use of contrast agents. With transcranial 3-D duplex sonography, a differentiation between thrombosis, hypoplasia and aplasia of the sinus was not possible.

  9. Managing Construction Operations Visually: 3-D Techniques for Complex Topography and Restricted Visibility

    ERIC Educational Resources Information Center

    Rodriguez, Walter; Opdenbosh, Augusto; Santamaria, Juan Carlos

    2006-01-01

    Visual information is vital in planning and managing construction operations, particularly where there is complex terrain topography and salvage operations with limited accessibility and visibility. From visually assessing site operations and preventing equipment collisions to simulating material handling activities and supervising remote sites…

  10. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    Our method of presenting the current state of a peat bog focused on the possible use of a UAV system and, later, Structure-from-Motion algorithms as the processing technique. The peat bog site is located on the Vinderel Plateau, Farcǎu Massif, Maramures Mountains (Romania). The peat bog (1530 m a.s.l., N47°54'11", E24°26'37") lies below the Rugasu ridge (c. 1820 m a.s.l.), and the locality serves as a conservation area for fallen coniferous trees. Peat deposits were formed in a landslide concavity on the western slope of the Farcǎu Massif. Nowadays the site is surrounded by a completely deforested landscape, and the Farcǎu Massif lies above the depressed treeline. The peat bog is in an extraordinary geomorphological situation, because a gully has reached the bog and drained its water. In the recent past, sedimentological and dendrochronological research has been initiated; however, an accurate 3D digital surface model is also needed for a complex paleoenvironmental study. Last autumn the bog and its surroundings were finally surveyed by a multirotor UAV developed in-house, based on an open-source flight management unit and its firmware. During this survey a lightweight action camera (chosen mainly to decrease payload weight) was used to take aerial photographs. While our quadcopter is capable of flying automatically along a predefined flight route, several over- and sidelapping flight lines were generated on the ground prior to the actual survey, using control software running on a notebook. Despite these precautions, a limited number of batteries and severe weather affected the final flights, resulting in a reduced survey area around the peat bog. Later, during processing, we looked for a reliable tool powerful enough to process the more than 500 photos taken during the flights. After testing several software packages, Agisoft PhotoScan was used to create a 3D point cloud and mesh of the bog and its environment. Due to the large number of photographs, PhotoScan had to be configured for network processing to get

  11. Micro3D: computer program for three-dimensional reconstruction, visualization, and analysis of neuronal populations and brain regions.

    PubMed

    Bjaalie, Jan G; Leergaard, Trygve B; Pettersen, Christian

    2006-04-01

    This article presents a computer program, Micro3D, designed for 3-D reconstruction, visualization, and analysis of coordinate data (points and lines) recorded from serial sections. The software has primarily been used for studying shapes and dimensions of brain regions (contour line data) and distributions of cellular elements such as neuronal cell bodies or axonal terminal fields labeled with tract-tracing techniques (point data). The tissue elements recorded could equally well be labeled using other techniques, the only requirement being that the data collected are saved as x,y,z coordinates. Data are typically imported from image-combining computerized microscopy systems or image analysis systems, such as Neurolucida (MicroBrightField, Colchester, VT) or analySIS (Soft Imaging System GmbH, Münster, Germany). System requirements are a PC running Linux. Reconstructions in Micro3D may be rotated and zoomed in real time, and submitted to perspective viewing and stereo-imaging. Surfaces are re-synthesized on the basis of stacks of contour lines. Clipping is used for defining section-independent subdivisions of the reconstruction. Flattening of curved sheets of points (e.g., neurons in a layer) facilitates inspection of complicated distribution patterns. Micro3D computes color-coded density maps. Opportunities for translation of data from different reconstructions into common coordinate systems are also provided. This article demonstrates the use of Micro3D for visualization of complex neuronal distribution patterns in somatosensory and auditory systems. The software is available for download on conditions posted at the NeSys home pages (http://www.nesys.uio.no/) and at The Rodent Brain Workbench (http://www.rbwb.org/).

  12. 3D scanning of internal structure in gel engineering materials with visual scanning microscopic light scattering

    NASA Astrophysics Data System (ADS)

    Watanabe, Yosuke; Gong, Jing; Masato, Makino; Kabir, M. Hasnat; Furukawa, Hidemitsu

    2014-04-01

    3D printing technology, which has attracted much attention since the beginning of 2013, may become an alternative method of fabricating biological soft tissues. Recently our group at Yamagata University developed the world's first 3D gel printer, able to fabricate complicated gel materials with high strength and biocompatibility. However, there are no 3D scanners that can collect data on the internal structure of complicated gel objects such as an eye lens, which means that a new system for scanning internal structure is needed. In this study, we first investigated the gel network of synthetic and biological gels with scanning microscopic light scattering (SMILS). We calculated the Young's modulus of synthetic gels with SMILS and with a tensile test, and precisely compared the results. The temperature dependences of the inside structure and the transparency were observed in the pig crystalline lens. The quantitative analysis indicates the importance of the internal structure of the real object. Second, we present a new system, named the Gel-scanner, that can provide two-dimensional data on the internal structure. Scanning the internal structure will enable us to estimate physical properties of the real object. We are convinced that the Gel-scanner will play a major role in various fields.

  13. Augmented reality system for oral surgery using 3D auto stereoscopic visualization.

    PubMed

    Tran, Huy Hoang; Suenaga, Hideyuki; Kuwana, Kenta; Masamune, Ken; Dohi, Takeyoshi; Nakajima, Susumu; Liao, Hongen

    2011-01-01

    We present an augmented reality system for oral and maxillofacial surgery in this paper. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact knowledge of the depth of high-risk tissues inside the bone. We employ a 3D integral imaging technique which produces motion parallax in both horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive, content-rich, hardware-accelerated 3D interface. These features prevent surgeons from penetrating into high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. The feasibility of our system was also verified in a human volunteer experiment.

  14. An analysis of brightness as a factor in visual discomfort caused by watching stereoscopic 3D video

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Woo; Kang, Hang-Bong

    2015-05-01

    Even though various studies have examined the factors that cause visual discomfort in watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching 3D stereoscopic video by using our own 3D depth measurement instruments. Our test sequences consist of six illumination conditions for the background. The background illumination is changed from bright to dark or vice-versa, while the illumination of the foreground object is constant. Our test procedure is as follows: First, the subjects rest until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measured eye-blinking and saccadic movements of the subjects using an eye-tracking device. Then, we measured perceived depth for each subject again to detect any changes in depth perception. We also checked the subjects' post-stimulus discomfort level, and measured the perceived depth after a 40-minute post-experiment resting period to measure recovery levels. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice-versa, while saccadic eye movements were lower with a dark-to-light video progression than vice-versa.

  15. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient: without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates over data in-situ: where it is stored and when it was computed.

  16. 3D visualization of deformation structures and potential fluid pathways at the Grimsel Test Site

    NASA Astrophysics Data System (ADS)

    Schneeberger, Raphael; Kober, Florian; Berger, Alfons; Spillmann, Thomas; Herwegh, Marco

    2015-04-01

    Knowledge of the ability of fluids to infiltrate subsurface rocks is of major importance for underground construction, geothermal projects and radioactive waste disposal. In this study, we focus on the characterization of water infiltration pathways, their 3D geometries and their origins. Based on surface and subsurface mapping in combination with drill core data, we developed, using Move™ (Midland Valley Exploration Ltd.), a 3D structural model of the Grimsel Test Site (GTS). GTS is an underground laboratory operated by NAGRA, the Swiss organisation responsible for the management of nuclear waste. It is located within a suite of post-Variscan magmatic bodies comprising former granitic and granodioritic melts, which are dissected by mafic and aplitic dikes. During the Alpine orogeny, the suite was tectonically overprinted in two stages of ductile deformation (Wehrens et al., in prep.), followed by brittle overprint of some of the shear zones during the retrograde exhumation history. It is this brittle deformation that controls today's water infiltration network. However, the associated fractures, cataclasites and fault gouges are themselves controlled by the aforementioned pre-existing mechanical discontinuities, whose origin ranges back as far as the magmatic stage. For example, two sets of vertically oriented mafic dikes (E-W and NW-SE striking) and compositional heterogeneities induced by magmatic segregation processes in the plutonic host rocks served as nucleation sites for Alpine strain localization. Subsequently, NE-SW, E-W and NW-SE striking ductile shear zones were formed, in combination with high-temperature fracturing, dissecting the host rocks in a complex 3D pattern (Wehrens et al., in prep.). Whether the ductile shear zones have been subjected to brittle reactivation and can serve as infiltration pathways or not depends strongly on their orientations with respect to the principal stress field. Especially where deformation structures intersect

  17. A Parameterizable Framework for Replicated Experiments in Virtual 3D Environments

    NASA Astrophysics Data System (ADS)

    Biella, Daniel; Luther, Wolfram

    This paper reports on a parameterizable 3D framework that provides 3D content developers with an initial spatial starting configuration, metaphorical connectors for accessing exhibits or interactive 3D learning objects or experiments, and other optional 3D extensions, such as a multimedia room, a gallery, username identification tools and an avatar selection room. The framework is implemented in X3D and uses a Web-based content management system. It has been successfully used for an interactive virtual museum for key historical experiments and in two additional interactive e-learning implementations: an African arts museum and a virtual science centre. It can be shown that, by reusing the framework, the production costs for the latter two implementations can be significantly reduced and content designers can focus on developing educational content instead of producing cost-intensive out-of-focus 3D objects.

  18. RVA. 3-D Visualization and Analysis Software to Support Management of Oil and Gas Resources

    SciTech Connect

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne; Vanmoer, Mark; Angrave, Lawrence; Damico, James R.; Grigsby, Nathan

    2015-12-01

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, the Department of Computer Science and the National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including

  19. Attribute-based point cloud visualization in support of 3-D classification

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Otepka, Johannes; Kania, Adam

    2016-04-01

    Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of them at a time. Meanwhile, point cloud classification is rapidly evolving, and uses not only the individual attributes but combinations of them. In order to better understand input data and output results, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format, which efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually, by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned to the points by setting the respective Red, Green and Blue attributes of each point to the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute to be considered. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported, and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
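The attribute-to-RGB mapping described above (palette lookup after distribution-based scaling) can be sketched as follows; the function name and the rank-based equalization are illustrative assumptions, not OPALS code:

```python
import numpy as np

def colorize_attribute(values, palette):
    """Map a per-point attribute to RGB triples.

    Each point's palette index reflects its rank (quantile) within the
    attribute's empirical distribution — a simple form of histogram
    equalization — so the full colour range is used even for skewed data.
    """
    values = np.asarray(values, dtype=float)
    # Double argsort yields each value's rank in the sorted order.
    ranks = values.argsort().argsort()
    idx = (ranks / max(len(values) - 1, 1) * (len(palette) - 1))
    idx = idx.round().astype(int)
    return np.asarray(palette)[idx]

# Hypothetical blue-to-green-to-red palette applied to an intensity attribute.
palette = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]
intensity = [0.1, 0.5, 0.9]
print(colorize_attribute(intensity, palette))  # blue, green, red rows
```

The resulting per-point RGB columns can then be written to any output format that supports colour attributes and viewed in a conventional point cloud viewer, as the abstract describes.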

  20. The role of the cytoskeleton in cellular force generation in 2D and 3D environments

    NASA Astrophysics Data System (ADS)

    Kraning-Rush, Casey M.; Carey, Shawn P.; Califano, Joseph P.; Smith, Brooke N.; Reinhart-King, Cynthia A.

    2011-02-01

    To adhere and migrate, cells generate forces through the cytoskeleton that are transmitted to the surrounding matrix. While cellular force generation has been studied on 2D substrates, less is known about cytoskeletal-mediated traction forces of cells embedded in more in vivo-like 3D matrices. Recent studies have revealed important differences between the cytoskeletal structure, adhesion, and migration of cells in 2D and 3D. Because the cytoskeleton mediates force, we sought to directly compare the role of the cytoskeleton in modulating cell force in 2D and 3D. MDA-MB-231 cells were treated with agents that perturbed actin, microtubules, or myosin, and analyzed for changes in cytoskeletal organization and force generation in both 2D and 3D. To quantify traction stresses in 2D, traction force microscopy was used; in 3D, force was assessed based on single cell-mediated collagen fibril reorganization imaged using confocal reflectance microscopy. Interestingly, even though previous studies have observed differences in cell behaviors like migration in 2D and 3D, our data indicate that forces generated on 2D substrates correlate with forces within 3D matrices. Disruption of actin, myosin or microtubules in either 2D or 3D microenvironments disrupts cell-generated force. These data suggest that despite differences in cytoskeletal organization in 2D and 3D, actin, microtubules and myosin contribute to contractility and matrix reorganization similarly in both microenvironments.

  1. Diagnostics of 3D Scaffolds by the Method of X-Ray Phase Contrast Visualization

    NASA Astrophysics Data System (ADS)

    Al'tapova, V. R.; Khlusov, I. A.; Karpov, D. A.; Chen, F.; Baumbach, T.; Pichugin, V. F.

    2014-02-01

    Polymers are one of the most interesting classes of materials for bioengineering due to their high biocompatibility and the possibility of regulating their strength and degradation. In bioengineering, the design of a polymer scaffold determines its functional possibilities and its possible medical applications. Traditionally, the design of polymer scaffolds is analyzed with the help of two-dimensional visualization methods, such as optical and electron microscopy, and computed tomography. However, x-rays are only insignificantly absorbed by polymers and soft tissue, which means that conventional computed tomography cannot image them with sufficient contrast. The present work investigates visualization with the help of an interferometer based on the Talbot effect for three-dimensional visualization of a polymer scaffold in absorption, phase, and dark-field contrasts. A comparison of images obtained by x-ray visualization with histological sections of the scaffold is made. Phase contrast has made it possible to visualize the polymer structure and the growth of soft tissues in the volume of the scaffold. In the future, it will be possible to use phase contrast for three-dimensional visualization of polymer scaffolds and soft tissues in vivo as well as in vitro.

  2. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
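
    For the rigid-movement compensation step, the transform between tracked marker positions in two frames can be estimated in closed form. The sketch below uses the Kabsch (SVD-based) algorithm on known marker correspondences, written with NumPy; the marker data, seed, and tolerances are illustrative, not taken from the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst
    (Kabsch algorithm); src and dst are (N, 3) marker positions."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about z and a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
markers = np.random.default_rng(0).random((8, 3))
moved = markers @ R_true.T + t_true
R, t = rigid_transform(markers, moved)
compensated = (moved - t) @ R                    # undo the torso movement
assert np.allclose(compensated, markers, atol=1e-9)
```

    Undoing the estimated rigid transform, as in the last lines, leaves only the deformation-related displacements, which is the separation the method relies on.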

  3. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
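
    The weight-update stage that the framework maps onto the GPU is, in essence, an independent likelihood evaluation per particle followed by normalisation and resampling. A minimal CPU sketch of that stage, with a Gaussian likelihood standing in for the render-and-compare scoring (all states, sigma, and counts illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def update_weights(particles, observation, sigma=0.5):
    """Weight-update stage: score every particle state against the
    observation with a Gaussian likelihood, then normalise. This
    per-particle computation is what the GPU evaluates in parallel."""
    errors = np.linalg.norm(particles - observation, axis=1)
    w = np.exp(-0.5 * (errors / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling: high-weight particles are duplicated."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)]

true_state = np.array([1.0, 2.0, 0.5])           # "observed" pose
particles = true_state + rng.normal(0.0, 1.0, size=(500, 3))
weights = update_weights(particles, true_state)
particles = resample(particles, weights)
# After resampling, the particle cloud concentrates near the observation.
assert np.linalg.norm(particles.mean(axis=0) - true_state) < 0.5
```

    In the paper's setting each particle's score additionally requires rendering the 3D model and comparing it pixel-by-pixel with the video frame, which is why particle- and pixel-level parallelism pays off.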

  4. Localization and visualization of excess chemical potential in statistical mechanical integral equation theory 3D-HNC-RISM.

    PubMed

    Du, Qi-Shi; Liu, Peng-Jun; Huang, Ri-Bo

    2008-02-01

    In this study the excess chemical potential of the integral equation theory 3D-RISM-HNC [Q. Du, Q. Wei, J. Phys. Chem. B 107 (2003) 13463-13470] is visualized in three-dimensional form and localized at the interaction sites of the solute molecule. Taking advantage of the reference interaction site model (RISM), the equations for the excess chemical potential are reformulated in terms of the solute interaction sites s in molecular space. Consequently the solvation free energy is localized at every interaction site of the solute molecule. For visualization of the 3D-RISM-HNC calculation results, the excess chemical potentials are described using radial and three-dimensional diagrams. It is found that the radial diagrams of the excess chemical potentials are more sensitive to the bridge functions than the radial diagrams of solvent site density distributions. The diagrams of average excess chemical potential provide useful information about solute-solvent electrostatic and van der Waals interactions. The local description of the solvation free energy at the active sites of the solute in 3D-RISM-HNC may broaden the application scope of statistical mechanical integral equation theory in solution chemistry and the life sciences.
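
    The per-site decomposition described here rests on the closed-form HNC expression for the excess chemical potential (the Singer-Chandler formula). Written per solute site s, with h = g − 1 the total correlation function and c the direct correlation function, it takes the standard form (quoted from the general RISM-HNC literature, not from this paper):

```latex
\Delta\mu_{\mathrm{HNC}}
  = \rho\, k_{\mathrm{B}} T \sum_{s} \int
    \left[ \tfrac{1}{2}\, h_{s}^{2}(\mathbf{r})
         - c_{s}(\mathbf{r})
         - \tfrac{1}{2}\, h_{s}(\mathbf{r})\, c_{s}(\mathbf{r}) \right]
    \mathrm{d}\mathbf{r}
```

    Because the integrand is a pointwise function of h_s and c_s, it can be plotted either radially or as a 3D field around each site, which is exactly the localization the abstract exploits.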

  5. Real time 3D visualization of ultrasonic data using a standard PC.

    PubMed

    Nikolov, Svetoslav Ivanov; Pablo Gómez Gonzaléz, Juan; Arendt Jensen, Jørgen

    2003-08-01

    This paper describes a flexible, software-based scan converter capable of rendering 3D volumetric data in real time on a standard PC. The display system is used in the remotely accessible and software-configurable multichannel ultrasound sampling system (RASMUS system) developed at the Center for Fast Ultrasound Imaging. The display system is split into two modules: data transfer and display. These two modules are independent and communicate using shared memory and a predefined set of functions. It is, thus, possible to use the display program with a different data-transfer module which is tailored to another source of data (scanner, database, etc.). The data-transfer module of the RASMUS system is based on a digital signal processor from Analog Devices--ADSP 21060. The beamformer is connected to a PC via the link channels of the ADSP. A direct memory access channel transfers the data from the ADSP to a memory buffer. The display module, which is based on OpenGL, uses this memory buffer as a texture map that is passed to the graphics board. The scan conversion, image interpolation, and logarithmic compression are performed by the graphics board, thus reducing the load on the main processor to a minimum. The scan conversion is done by mapping the ultrasonic data to polygons. The format of the image is determined only by the coordinates of the polygons allowing for any kind of geometry to be displayed on the screen. Data from color flow mapping is added by alpha-blending. The 3D data are displayed either as cross-sectional planes, or as a fully rendered 3D volume displayed as a pyramid. All sides of the pyramid can be changed to reveal B-mode or C-mode scans, and the pyramid can be rotated in all directions in real time.
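
    The two display-side operations the graphics board performs, logarithmic compression and polar-to-Cartesian scan conversion, can be sketched in a few lines. Dynamic range and sample values below are illustrative, not the RASMUS system's settings:

```python
import math

def log_compress(envelope, dynamic_range_db=60.0):
    """Map detected envelope amplitudes to 8-bit grey levels over a
    fixed dynamic range -- the compression applied before display."""
    peak = max(envelope)
    out = []
    for v in envelope:
        db = 20.0 * math.log10(max(v, 1e-12) / peak)     # 0 dB at peak
        grey = 255.0 * (db + dynamic_range_db) / dynamic_range_db
        out.append(int(min(max(grey, 0.0), 255.0)))
    return out

def scan_convert(r, theta):
    """Polar sample (range, beam angle) -> Cartesian position; in the
    OpenGL pipeline this mapping is baked into the polygon vertices."""
    return r * math.sin(theta), r * math.cos(theta)

assert log_compress([1.0, 0.1, 0.001]) == [255, 170, 0]
assert scan_convert(10.0, 0.0) == (0.0, 10.0)
```

    In the actual system these per-pixel operations run on the graphics board: the ultrasonic data become a texture, and the vertex coordinates of the textured polygons encode the scan geometry, so any image format can be displayed without touching the main processor.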

  6. Visualization methods for high-resolution, transient, 3-D, finite element situations

    SciTech Connect

    Christon, M.A.

    1995-01-10

    Scientific visualization is the process whereby numerical data are transformed into a visual form to augment the process of discovery and understanding. Visualizing the data generated by large-scale, transient, three-dimensional finite element simulations poses many challenges due to geometric complexity, the presence of multiple materials and multiple element types, and the inherently unstructured nature of the meshes. In this paper, the direct use of finite element data structures, nodal assembly procedures, and element interpolants for volumetric adaptive surface extraction, surface rendering, vector grids and particle tracing is discussed. A brief description of a "direct-to-disk" animation system is presented, and the use of isosurfaces, vector plots, cutting planes, reference surfaces and particle tracing is then demonstrated in the context of several case studies of transient incompressible viscous flow and acoustic fluid-structure interaction simulations. An overview of the implications of massively parallel computers for visualization is presented to highlight the issues in parallel visualization methodology and algorithms, data locality, and the ultimate requirements for temporary and archival data storage and network bandwidth.
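
    Surface extraction driven directly by the element interpolants reduces, on each element edge, to locating where the nodal interpolant crosses the isovalue. A minimal sketch of that per-edge kernel for a linear interpolant (the values and points are illustrative):

```python
def iso_crossing(p0, p1, v0, v1, iso):
    """Return the point where the linear finite element interpolant
    between nodal values v0 (at p0) and v1 (at p1) equals the isovalue,
    or None if the edge is not crossed."""
    if v0 == v1:
        return tuple(p0) if v0 == iso else None
    if (v0 - iso) * (v1 - iso) > 0:
        return None                            # both nodes on one side
    t = (iso - v0) / (v1 - v0)                 # interpolation parameter
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

pt = iso_crossing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 2.0, 0.5)
assert pt == (0.25, 0.0, 0.0)
assert iso_crossing((0, 0, 0), (1, 0, 0), 1.0, 2.0, 0.5) is None
```

    Assembling these crossings element by element, using the mesh's own connectivity rather than resampling onto a regular grid, is what lets the isosurfaces respect multiple materials and mixed element types.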

  7. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting with the extraction of corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by the previously found anatomic edges. The occlusal edge detection in the image is improved by an original algorithm which follows edges poorly detected by the Canny detector, using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  8. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  9. Comparing 2D and 3D Game-Based Learning Environments in Terms of Learning Gains and Student Perceptions

    ERIC Educational Resources Information Center

    Ak, Oguz; Kutlu, Birgul

    2017-01-01

    The aim of this study was to investigate the effectiveness of traditional, 2D and 3D game-based environments assessed by student achievement scores and to reveal student perceptions of the value of these learning environments. A total of 60 university students from the Faculty of Education who were registered in three sections of a required…

  10. Noninvasive CT to Iso-C3D registration for improved intraoperative visualization in computer assisted orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Rudolph, Tobias; Ebert, Lars; Kowal, Jens

    2006-03-01

    Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieving this aim. The Siremobil Iso-C 3D has become a widely used imaging device which, in combination with a navigation system, enables the surgeon to navigate directly within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan and the volume size (a cube of approx. 12 cm) limits its application. A regularly used alternative in computer assisted orthopedic surgery is to use a preoperatively acquired CT scan to visualize the operating field. But the additional registration step necessary in order to use CT stacks for navigation is quite invasive. The objective of this work is therefore to develop a noninvasive registration technique. In this article a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing the mutual information, an algorithm that has already been applied to similar registration problems and has demonstrated good results. Furthermore, the accuracy of the registration method was investigated in a clinical setup integrating a navigated Iso-C 3D with a tracking system. Initial tests based on cadaveric animal bone resulted in an accuracy ranging from 0.63 mm to 1.55 mm mean error.
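
    The quantity maximised during the alignment, mutual information, is commonly estimated from joint intensity histograms of the two volumes. A minimal sketch of that estimator on flat intensity lists (bin count and the test data are illustrative, not the paper's settings):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two equally sized
    images (flat lists of intensities in [0, 1]) -- the similarity
    measure maximised during CT-to-Iso-C 3D registration."""
    qa = [min(int(v * bins), bins - 1) for v in a]
    qb = [min(int(v * bins), bins - 1) for v in b]
    n = len(a)
    pa, pb = Counter(qa), Counter(qb)
    pab = Counter(zip(qa, qb))
    mi = 0.0
    for (i, j), c in pab.items():
        # p_ij * log(p_ij / (p_i * p_j)), with counts substituted in
        mi += (c / n) * math.log(c * n / (pa[i] * pb[j]))
    return mi

img = [i / 100 for i in range(100)]
scrambled = [((i * 37) % 100) / 100 for i in range(100)]
# An image shares more information with itself than with unrelated data.
assert mutual_information(img, img) > mutual_information(img, scrambled)
```

    A registration loop would evaluate this score for candidate rigid transforms of the CT volume against the Iso-C 3D volume and keep the transform that maximises it; a key attraction of MI is that it needs no intensity correspondence between the two modalities.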

  11. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings

    NASA Astrophysics Data System (ADS)

    Czynska, K.

    2015-04-01

    The paper examines the possibilities and limitations of applying Lidar data and digital 3D city models to specialist urban analyses of tall buildings. The location and height of tall buildings are a subject of discussions, conflicts and controversies in many cities. The most important aspect is the visual influence of tall buildings on the city landscape, significant panoramas and other strategic city views. This is a topical issue in contemporary town planning worldwide: over 50% of the high-rise buildings on Earth were built in the last 15 years. Tall buildings may be a threat especially for historically developed cities, typical of Europe. Contemporary Earth observation, increasingly available Lidar scanning, and 3D city models provide a new tool for more accurate urban analysis of the impact of tall buildings. The article presents appropriate simulation techniques and the general assumptions of the geometric and computational algorithms: both available methodologies and individual methods developed by the author. The goal is to develop geometric computation methods for a GIS representation of the visual impact of a selected tall building on the structure of a large city. In this connection, the article introduces a Visual Impact Size (VIS) method. The presented analyses were developed using an airborne Lidar / DSM model and more processed models (such as CityGML) containing the geometry and its semantics. The included simulations were carried out on the example of the Berlin agglomeration.

  12. 3D similarity-dissimilarity plot for high dimensional data visualization in the context of biomedical pattern classification.

    PubMed

    Arif, Muhammad; Basalamah, Saleh

    2013-06-01

    In real-life biomedical classification applications, it is difficult to visualize the feature space due to its high dimensionality. In this paper, we propose a 3D similarity-dissimilarity plot that projects the high dimensional feature space onto a three dimensional space in which important information about the feature space can be extracted in the context of pattern classification. In this plot it is possible to distinguish good data points (points near their own class compared to other classes), bad data points (points far away from their own class), and outlier points (points away from both their own class and other classes); hence the separation of classes can easily be visualized. The density of data points near each other can provide useful information about the compactness of the clusters within a class. Moreover, an index called the percentage of data points above the similarity-dissimilarity line (PAS) is proposed, which is the fraction of data points above the similarity-dissimilarity line. Several synthetic and real-life biomedical datasets are used to show the effectiveness of the proposed 3D similarity-dissimilarity plot.
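
    A sketch of how such a PAS index could be computed, assuming (the paper may define these differently) that a point's similarity is the distance to its nearest same-class neighbour, its dissimilarity is the distance to its nearest other-class neighbour, and a point lies above the line when dissimilarity exceeds similarity:

```python
def pas_index(points, labels):
    """Fraction of points above the similarity-dissimilarity line,
    i.e. points whose nearest other-class neighbour is farther away
    than their nearest same-class neighbour (assumed definitions)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    above = 0
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = min(dist(p, q) for j, (q, l) in enumerate(zip(points, labels))
                   if j != i and l == lab)          # similarity
        other = min(dist(p, q) for q, l in zip(points, labels) if l != lab)
        if other > same:                            # above the line
            above += 1
    return above / len(points)

# Two well separated clusters: every point is nearer its own class.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labs = [0, 0, 0, 1, 1, 1]
assert pas_index(pts, labs) == 1.0
```

    A PAS near 1 thus indicates well separated, compact classes, while overlapping classes pull points below the line and drive the index down.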

  13. An Interactive Training Game Using 3D Sound for Visually Impaired People

    ERIC Educational Resources Information Center

    Lee, Hsiao Ping; Huang, Yen-Hsuan; Sheu, Tzu-Fang

    2013-01-01

    The number of visually impaired people is increasing year by year. Although attention has been given to the needs of people with disabilities, most of the discussion has focused on social welfare, while talk about assistive technology for people with disabilities is rare. The blind need training courses for reconstruction and rehabilitation.…

  14. Visualization of Potential Energy Function Using an Isoenergy Approach and 3D Prototyping

    ERIC Educational Resources Information Center

    Teplukhin, Alexander; Babikov, Dmitri

    2015-01-01

    In our three-dimensional world, one can plot, see, and comprehend a function of two variables at most, V(x,y). One cannot plot a function of three or more variables. For this reason, visualization of the potential energy function in its full dimensionality is impossible even for the smallest polyatomic molecules, such as triatomics. This creates…

  15. [Web-based education: learning surgical procedures step-by-step with 3D visualization].

    PubMed

    van der Velde, Susanne; Maljers, Jaap; Wiggers, Theo

    2014-01-01

    There is a need for more uniform, structured education focused on surgical procedures. We offer a standardized, step-by-step, web-based procedural training method with which surgeons can train more interns efficiently. The basis of this learning method is formed by 3D films in which surgical procedures are performed in clearly defined steps and the anatomical structures behind the surgical operating planes are further dissected. This basis is supported by online modules in which, aside from the operation itself, preparation and postoperative care are also addressed. Registrars can test their knowledge with exams. Trainers can see what the registrars studied, how they scored and how they progressed with their clinical skills. With the online portfolio we offer building blocks for certification and accreditation. With this clearly structured training method of constant quality, registrars are less dependent on the local trainer. In addition, through better preparation, the operating capacity can be used more efficiently for training.

  16. Image-based robot navigation in 3D environments (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Remazeilles, Anthony; Chaumette, François; Gros, Patrick

    2005-12-01

    In this paper a new method is proposed to control a vision-based robot in large navigation spaces. In this case, visual features observed by an on-board camera can change drastically or even disappear completely between the initial image, as seen at the beginning of a task, and the final image, as seen at the desired position of the robot. These features are therefore not sufficient for controlling the entire motion of the robotic system from beginning to end. This problem requires a more complete definition and representation of the navigation space, which can be achieved by a topological representation where the environment is defined directly in the sensor space by a database of images. In our approach, this database is acquired during an offline learning step. An image retrieval method then indexes and matches a request image, given by the camera, to the closest view within the database. In this way, an image path is extracted from the database to link the initial and desired images, providing enough information to control the robot. The central point of this paper is the closed-loop control law that drives the robot to its desired position using this image path. The proposed method requires neither a global reconstruction nor a temporal planning step. Furthermore, the robot is not obliged to converge directly upon each image waypoint but automatically chooses a better trajectory. The visual servoing control law uses specific features which ensure that the robot navigates within the visibility path. Experimental simulations show the effectiveness of this method for controlling the motion of a camera in three-dimensional environments (a free-flying camera, or a camera moving on a plane).

  17. Exploring the Potential of Aerial Photogrammetry for 3d Modelling of High-Alpine Environments

    NASA Astrophysics Data System (ADS)

    Legat, K.; Moe, K.; Poli, D.; Bollmann, E.

    2016-03-01

    cameras of Microsoft's UltraCam series and the in-house processing chain centred on the Dense-Image-Matching (DIM) software SURE by nFrames. This paper reports the work carried out at AVT for the surface- and terrain modelling of several high-alpine areas using DIM- and ALS-based approaches. A special focus is dedicated to the influence of terrain morphology, flight planning, GNSS/IMU measurements, and ground-control distribution in the georeferencing process on the data quality. Based on the very promising results, some general recommendations for aerial photogrammetry processing in high-alpine areas are made to achieve best possible accuracy of the final 3D-, 2.5D- and 2D products.

  18. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
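
    Phase-based disparity estimation rests on the relation d ≈ Δφ/ω: the local phase difference between the two views' filter responses, divided by the filter's tuning frequency, gives the shift. A 1-D sketch with a single Gabor filter (filter parameters and signals are illustrative; the paper uses a 2-D filter bank):

```python
import cmath
import math

def gabor_response(signal, x0, omega, sigma=4.0):
    """Complex response of a 1-D Gabor filter centred at sample x0."""
    acc = 0j
    for x, v in enumerate(signal):
        envelope = math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
        acc += v * envelope * cmath.exp(-1j * omega * (x - x0))
    return acc

def phase_disparity(left, right, x0, omega):
    """Disparity from the local phase difference of the two responses:
    d = (phi_left - phi_right) / omega, valid for small shifts."""
    ratio = gabor_response(left, x0, omega) / gabor_response(right, x0, omega)
    return cmath.phase(ratio) / omega

# Right view equals the left view shifted by 2 samples.
left = [math.sin(0.5 * x) for x in range(64)]
right = [math.sin(0.5 * (x - 2)) for x in range(64)]
d = phase_disparity(left, right, x0=32, omega=0.5)
assert abs(d - 2.0) < 0.3
```

    Using a bank of such filters at several frequencies extends the usable disparity range, since each frequency only resolves shifts up to half its wavelength before the phase wraps.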

  19. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    SciTech Connect

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodriguez, A. O.

    2006-09-08

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. The technique is considered particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. Three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks that were digitally processed and arranged into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not visible in two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  20. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2011-03-30

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes Point-Cloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  1. Assessment of a Static Multibeam Sonar Scanner for 3d Surveying in Confined Subaquatic Environments

    NASA Astrophysics Data System (ADS)

    Moisan, E.; Charbonnier, P.; Foucher, P.; Grussenmeyer, P.; Guillemin, S.; Samat, O.; Pagès, C.

    2016-06-01

    Mechanical Scanning Sonar (MSS) is a promising technology for surveying underwater environments. Such devices comprise a multibeam echosounder attached to a pan-and-tilt positioner that sweeps the scene in a similar way to Terrestrial Laser Scanners (TLS). In this paper, we report on the experimental assessment of a recent MSS, the BlueView BV5000, in a confined environment: lock number 50 on the Marne-Rhine canal (France). To this end, we hung the system upside-down to scan the lock chamber from the surface, which made it possible to survey the scanning positions up to an unknown horizontal orientation angle. We propose a geometric method to estimate this remaining angle and register the scans in a coordinate system attached to the site. After reviewing the different errors that impair sonar data, we compare the resulting point cloud to a TLS model acquired the day before, while the lock was completely empty for maintenance. While the results exhibit a bias that can be partly explained by an imperfect setup, the maximum difference is less than 15 cm and the standard deviation is about 3.5 cm. Visual inspection shows that coarse defects of the masonry, such as missing stones or cavities, can be detected in the MSS point cloud, while smaller details, e.g. damaged joints, are harder to notice.

  2. Visualizing nanoscale 3D compositional fluctuation of lithium in advanced lithium-ion battery cathodes

    SciTech Connect

    Devaraj, Arun; Gu, Meng; Colby, Robert J.; Yan, Pengfei; Wang, Chong M.; Zheng, Jianming; Xiao, Jie; Genc, Arda; Zhang, Jiguang; Belharouak, Ilias; Wang, Dapeng; Amine, Khalil; Thevuthasan, Suntharampillai

    2015-08-14

    The distribution and concentration of lithium in Li-ion battery cathodes at different stages of cycling is a pivotal factor in determining battery performance. Non-uniform distribution of the transition metal cations has been shown to affect cathode performance; however, the Li is notoriously challenging to characterize with typical high-spatial-resolution imaging techniques. Here, for the first time, laser–assisted atom probe tomography is applied to two advanced Li-ion battery oxide cathode materials—layered Li1.2Ni0.2Mn0.6O2 and spinel LiNi0.5Mn1.5O4—to unambiguously map the three dimensional (3D) distribution of Li at sub-nanometer spatial resolution and correlate it with the distribution of the transition metal cations (M) and the oxygen. The as-fabricated layered Li1.2Ni0.2Mn0.6O2 is shown to have Li-rich Li2MO3 phase regions and Li-depleted Li(Ni0.5Mn0.5)O2 regions while in the cycled layered Li1.2Ni0.2Mn0.6O2 an overall loss of Li and presence of Ni rich regions, Mn rich regions and Li rich regions are shown in addition to providing the first direct evidence for Li loss on cycling of layered LNMO cathodes. The spinel LiNi0.5Mn1.5O4 cathode is shown to have a uniform distribution of all cations. These results were additionally validated by correlating with energy dispersive spectroscopy mapping of these nanoparticles in a scanning transmission electron microscope. Thus, we have opened the door for probing the nanoscale compositional fluctuations in crucial Li-ion battery cathode materials at an unprecedented spatial resolution of sub-nanometer scale in 3D which can provide critical information for understanding capacity decay mechanisms in these advanced cathode materials.

  3. Visualizing nanoscale 3D compositional fluctuation of lithium in advanced lithium-ion battery cathodes

    DOE PAGES

    Devaraj, Arun; Gu, Meng; Colby, Robert J.; ...

    2015-08-14

    The distribution and concentration of lithium in Li-ion battery cathodes at different stages of cycling is a pivotal factor in determining battery performance. Non-uniform distribution of the transition metal cations has been shown to affect cathode performance; however, the Li is notoriously challenging to characterize with typical high-spatial-resolution imaging techniques. Here, for the first time, laser–assisted atom probe tomography is applied to two advanced Li-ion battery oxide cathode materials—layered Li1.2Ni0.2Mn0.6O2 and spinel LiNi0.5Mn1.5O4—to unambiguously map the three dimensional (3D) distribution of Li at sub-nanometer spatial resolution and correlate it with the distribution of the transition metal cations (M) and the oxygen. The as-fabricated layered Li1.2Ni0.2Mn0.6O2 is shown to have Li-rich Li2MO3 phase regions and Li-depleted Li(Ni0.5Mn0.5)O2 regions while in the cycled layered Li1.2Ni0.2Mn0.6O2 an overall loss of Li and presence of Ni rich regions, Mn rich regions and Li rich regions are shown in addition to providing the first direct evidence for Li loss on cycling of layered LNMO cathodes. The spinel LiNi0.5Mn1.5O4 cathode is shown to have a uniform distribution of all cations. These results were additionally validated by correlating with energy dispersive spectroscopy mapping of these nanoparticles in a scanning transmission electron microscope. Thus, we have opened the door for probing the nanoscale compositional fluctuations in crucial Li-ion battery cathode materials at an unprecedented spatial resolution of sub-nanometer scale in 3D which can provide critical information for understanding capacity decay mechanisms in these advanced cathode materials.

  4. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids adverse psychological effects. To create compelling three-dimensional television programs, a virtual studio is required that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Particular attention is given to depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further refinement for improved precision are proposed and verified.
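
    The core of SSD-based depth extraction is a two-step recipe: find the shift that minimises the sum of squared differences between corresponding patches, then convert that disparity to depth with the pinhole relation Z = fB/d. A 1-D sketch (focal length, baseline, and image data are illustrative, not the paper's integral-imaging geometry):

```python
def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_disparity(left, right, x, window=3, max_disp=8):
    """Shift of a 1-D patch from the left image that best matches the
    right image under the SSD criterion."""
    patch = left[x:x + window]
    scores = [(ssd(patch, right[x - d:x - d + window]), d)
              for d in range(0, max_disp + 1) if x - d >= 0]
    return min(scores)[1]

def depth_from_disparity(d, focal=500.0, baseline=0.06):
    """Pinhole depth: Z = f * B / d (f in pixels, B in metres)."""
    return focal * baseline / d

left = [0, 0, 0, 9, 5, 1, 0, 0, 0, 0, 0, 0]
right = [0, 9, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # feature shifted by 2
d = best_disparity(left, right, x=3)
assert d == 2
assert abs(depth_from_disparity(d) - 15.0) < 1e-9
```

    The multiple-baseline idea mentioned in the abstract sums such SSD scores over several elemental-image pairs with different baselines, which suppresses the ambiguous minima a single pair can produce.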

  5. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then exploits the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced virtual 3D Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  6. MGLab3D: An interactive environment for iterative solvers for elliptic PDEs in two and three dimensions

    SciTech Connect

    Bordner, J.; Saied, F.

    1996-12-31

    MGLab3D is an enhancement of MGLab, an interactive environment for experimenting with iterative solvers and multigrid algorithms, implemented in MATLAB. The new version has built-in 3D elliptic PDEs and several iterative methods and preconditioners that were not available in the original version. A sparse direct solver option has also been included, and the multigrid solvers have been extended to 3D. The discretizations and PDE domains are restricted to standard finite differences on the unit square/cube. The power of this software lies in the fact that no programming is needed to solve, for example, the convection-diffusion equation in 3D with TFQMR and a customized V-cycle preconditioner, for a variety of problem sizes and mesh Reynolds numbers. In addition to the graphical user interface, some sample drivers are included to show how experiments can be composed using the underlying suite of problems and solvers.
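As a flavor of the kind of experiment such an environment automates, here is a minimal matrix-free solve of a 3D model problem on the unit cube. This is a generic Python/NumPy illustration (plain conjugate gradient on the Poisson equation), not MGLab3D code; all names are ours.

```python
import numpy as np

def neg_laplacian_3d(u, h):
    """Matrix-free 7-point finite-difference -Laplacian on the unit cube,
    homogeneous Dirichlet boundaries (u holds interior nodes only)."""
    Au = 6.0 * u
    Au[1:, :, :]  -= u[:-1, :, :]
    Au[:-1, :, :] -= u[1:, :, :]
    Au[:, 1:, :]  -= u[:, :-1, :]
    Au[:, :-1, :] -= u[:, 1:, :]
    Au[:, :, 1:]  -= u[:, :, :-1]
    Au[:, :, :-1] -= u[:, :, 1:]
    return Au / (h * h)

def cg(apply_A, b, tol=1e-10, maxit=500):
    """Plain conjugate gradient for a symmetric positive definite operator."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = float(np.sum(r * r))
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / float(np.sum(p * Ap))
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(np.sum(r * r))
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Solve -Laplace(u) = f with a known smooth exact solution.
n = 15
h = 1.0 / (n + 1)
g = np.arange(1, n + 1) * h
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y) * np.sin(np.pi * Z)
f = 3.0 * np.pi ** 2 * exact
u = cg(lambda v: neg_laplacian_3d(v, h), f)
err = float(np.max(np.abs(u - exact)))    # O(h^2) discretization error remains
```

Swapping the solver (Jacobi, TFQMR, a multigrid V-cycle) or the right-hand side is a one-line change, which is exactly the kind of interactive experimentation MGLab3D packages behind a GUI.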

  7. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
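The volume rendering that the authors accelerate on the GPU reduces, per ray, to a compositing loop. Here is a minimal CPU sketch of front-to-back alpha compositing with early ray termination; the transfer function is an invented toy, and none of this is the authors' code.

```python
import numpy as np

def composite_ray(samples, transfer):
    """Front-to-back alpha compositing of scalar samples along one ray.
    transfer maps a scalar sample to (r, g, b, alpha); stopping once the
    accumulated opacity is near 1 mirrors GPU early-ray-termination."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer(s)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

# Toy grayscale transfer function: brightness and opacity grow with intensity.
tf = lambda s: (s, s, s, 0.5 * s)
col, a = composite_ray([0.2, 0.8, 1.0], tf)
```

On graphics hardware the same accumulation runs per fragment, with the transfer function stored as a texture, which is why rendering rates vary so strongly with rendering parameters such as sampling density and opacity.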

  8. 3D visualization of HIV transfer at the virological synapse between dendritic cells and T cells

    PubMed Central

    Felts, Richard L.; Narayan, Kedar; Estes, Jacob D.; Shi, Dan; Trubey, Charles M.; Fu, Jing; Hartnell, Lisa M.; Ruthel, Gordon T.; Schneider, Douglas K.; Nagashima, Kunio; Bess, Julian W.; Bavari, Sina; Lowekamp, Bradley C.; Bliss, Donald; Lifson, Jeffrey D.; Subramaniam, Sriram

    2010-01-01

    The efficiency of HIV infection is greatly enhanced when the virus is delivered at conjugates between CD4+ T cells and virus-bearing antigen-presenting cells such as macrophages or dendritic cells via specialized structures known as virological synapses. Using ion abrasion SEM, electron tomography, and superresolution light microscopy, we have analyzed the spatial architecture of cell-cell contacts and distribution of HIV virions at virological synapses formed between mature dendritic cells and T cells. We demonstrate the striking envelopment of T cells by sheet-like membrane extensions derived from mature dendritic cells, resulting in a shielded region for formation of virological synapses. Within the synapse, filopodial extensions emanating from CD4+ T cells make contact with HIV virions sequestered deep within a 3D network of surface-accessible compartments in the dendritic cell. Viruses are detected at the membrane surfaces of both dendritic cells and T cells, but virions are not released passively at the synapse; instead, virus transfer requires the engagement of T-cell CD4 receptors. The relative seclusion of T cells from the extracellular milieu, the burial of the site of HIV transfer, and the receptor-dependent initiation of virion transfer by T cells highlight unique aspects of cell-cell HIV transmission. PMID:20624966

  9. Ghost particle velocimetry: accurate 3D flow visualization using standard lab equipment.

    PubMed

    Buzzaccaro, Stefano; Secchi, Eleonora; Piazza, Roberto

    2013-07-26

    We describe and test a new approach to particle velocimetry, based on imaging and cross-correlating the scattering speckle pattern generated on a near-field plane by flowing tracers with a size far below the diffraction limit, which allows the velocity pattern in microfluidic channels to be reconstructed without perturbing the flow. In fact, adding tracers is not even strictly required, provided that the sample displays sufficiently strong refractive-index fluctuations. For instance, phase separation in liquid mixtures in the presence of shear can be directly investigated by this "ghost particle velocimetry" technique, which requires only a microscope with standard lamp illumination equipped with a low-cost digital camera. As a further bonus, the peculiar spatial coherence properties of the illuminating source, which displays a finite longitudinal coherence length, allow for a 3D reconstruction of the profile with a resolution of a few tens of microns and make the technique suitable for investigating turbid samples with negligible multiple-scattering effects.
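The core operation, cross-correlating successive speckle images to recover the local displacement between frames, can be illustrated with an FFT-based toy. Synthetic random speckle and a circular shift stand in for real frames; the function name and values are ours, not from the paper.

```python
import numpy as np

def shift_from_xcorr(frame_a, frame_b):
    """Integer-pixel displacement of frame_a relative to frame_b, taken
    from the peak of their FFT-based (circular) cross-correlation."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    xc = np.fft.ifft2(fa * np.conj(fb)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(0)
speckle = rng.random((64, 64))                        # synthetic speckle frame
moved = np.roll(speckle, shift=(3, -5), axis=(0, 1))  # frame after the "flow"
dy, dx = shift_from_xcorr(moved, speckle)             # recovers (3, -5)
```

Dividing the field of view into interrogation windows and applying this per window yields a velocity map, the same principle as conventional PIV but driven by speckle rather than resolved tracer images.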

  10. Single cell visualization of transcription kinetics variance of highly mobile identical genes using 3D nanoimaging

    PubMed Central

    Annibale, Paolo; Gratton, Enrico

    2015-01-01

    Multi-cell biochemical assays and single-cell fluorescence measurements have revealed that the elongation rate of Polymerase II (PolII) in eukaryotes varies widely across different cell types and genes. However, there is not yet a consensus on whether intrinsic factors such as the position or local mobility of a genetic locus, or its engagement by an active molecular mechanism, are the determinants of the observed heterogeneity. Here, by employing high-speed 3D fluorescence nanoimaging techniques, we resolve and track at the single-cell level multiple, distinct regions of mRNA synthesis within the model system of a large transgene array. We demonstrate that these regions are active transcription sites that release mRNA molecules into the nucleoplasm. Using fluctuation spectroscopy and the phasor analysis approach, we were able to extract the local PolII elongation rate at each site as a function of time. We measured a four-fold variation in the average elongation rate between identical copies of the same gene measured simultaneously within the same cell, demonstrating a correlation between local transcription kinetics and the movement of the transcription site. Together these observations demonstrate that local factors, such as local chromatin mobility and the microenvironment of the transcription site, are an important source of transcription kinetics variability. PMID:25788248

  11. 3D visualization of strain in abdominal aortic aneurysms based on navigated ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Brekken, Reidar; Kaspersen, Jon Harald; Tangen, Geir Arne; Dahl, Torbjørn; Hernes, Toril A. N.; Myhre, Hans Olav

    2007-03-01

    The criterion for recommending treatment of an abdominal aortic aneurysm is that the diameter exceeds 50-55 mm or shows a rapid increase. Our hypothesis is that a more accurate prediction of aneurysm rupture is obtained by estimating arterial wall strain from patient-specific measurements. Measuring strain in specific parts of the aneurysm reveals differences in load or tissue properties. We have previously presented a method for in vivo estimation of circumferential strain by ultrasound. In the present work, a position sensor attached to the ultrasound probe was used to combine several 2D ultrasound sectors into a 3D model. The ultrasound was registered to a computed tomography (CT) scan, and the strain values were mapped onto a model segmented from these CT data. This gave an intuitive coupling between anatomy and strain, which may benefit both data acquisition and the interpretation of strain. In addition to potentially providing information relevant for assessing the rupture risk of the aneurysm itself, this model could be used to validate simulations of fluid-structure interactions. Further, the measurements could be integrated with the simulations in order to increase the amount of patient-specific information, thus producing a more reliable and accurate model of the biomechanics of the individual aneurysm. This approach makes it possible to extract several parameters potentially relevant for predicting rupture risk, and may therefore extend the basis for clinical decision making.
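The quantity being mapped onto the 3D model is, in its simplest form, an engineering strain of the wall over the cardiac cycle. A minimal sketch (the function, the diastole/systole naming, and the example diameters are illustrative assumptions, not the paper's estimator, which works on tracked ultrasound data):

```python
import math

def circumferential_strain(circumference, reference):
    """Engineering strain of the vessel wall: relative change of the
    circumference with respect to a reference (e.g. diastolic) frame."""
    return (circumference - reference) / reference

# Illustrative diameters (mm) of one aneurysm cross-section over the cycle.
d_diastole, d_systole = 52.0, 53.3
strain = circumferential_strain(math.pi * d_systole, math.pi * d_diastole)
# pi cancels, so this equals (53.3 - 52.0) / 52.0, i.e. 2.5% strain
```

Computing this per wall segment, rather than from a single diameter, is what reveals the regional differences in load or tissue properties the abstract refers to.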

  12. Visual navigation of the UAVs on the basis of 3D natural landmarks

    NASA Astrophysics Data System (ADS)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of a UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, so a nonlinear filter such as the Extended Kalman Filter (EKF) must be used to exploit them for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of observations of reference points assigned along the UAV's path in advance. However, the use of such a set of points requires a cumbersome recognition procedure and a huge volume of on-board memory. Natural landmarks serving as reference points, which can be determined on-line, can significantly reduce the on-board memory and the computational burden. The principal difference of this work is the use of 3D reference-point coordinates, which permits determining the position of the UAV more precisely and thereby guiding it along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article proposes the new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking a given reference path under external perturbations and noisy angular measurements.
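The bearing-only observation model underlying such measurements is simple to state: each landmark contributes an azimuth and an elevation angle that depend nonlinearly on the UAV position, which is why a nonlinear filter is needed. A sketch in world-frame axes (camera attitude ignored for clarity; the function and the numbers are ours):

```python
import math

def bearing_to_landmark(uav_pos, landmark):
    """Azimuth and elevation of a 3D landmark seen from the UAV position
    (world-frame axes; camera attitude is ignored for clarity)."""
    dx = landmark[0] - uav_pos[0]
    dy = landmark[1] - uav_pos[1]
    dz = landmark[2] - uav_pos[2]
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# UAV at 100 m altitude observing a ground landmark 100 m away on each axis.
az, el = bearing_to_landmark((0.0, 0.0, 100.0), (100.0, 100.0, 0.0))
```

An EKF linearizes exactly this mapping around the current state estimate; the pseudomeasurement method mentioned in the abstract instead rewrites the angular constraints into a form that is linear in the unknown position.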

  13. Effect of space balance 3D training using visual feedback on balance and mobility in acute stroke patients

    PubMed Central

    Ko, YoungJun; Ha, HyunGeun; Bae, Young-Hyeon; Lee, WanHee

    2015-01-01

    [Purpose] The purpose of this study was to determine the effects of balance training with Space Balance 3D, a computerized balance measurement and assessment system with visual feedback, on balance and mobility in acute stroke patients. [Subjects and Methods] This was a randomized controlled trial in which 52 subjects were randomly assigned to either an experimental group or a control group. The experimental group (26 subjects) received balance training with a Space Balance 3D exercise program plus conventional physical therapy interventions 5 times per week for 3 weeks. Outcome measures were examined before and after the 3-week intervention using the Berg Balance Scale (BBS), the Timed Up and Go (TUG) test, and the Postural Assessment Scale for Stroke Patients (PASS). The data were analyzed by two-way repeated measures ANOVA using SPSS 19.0. [Results] The results revealed no significant group-by-time interaction effect for the BBS, TUG, or PASS scores. The experimental group showed more improvement than the control group in the BBS, TUG and PASS scores, but the differences were not significant. In the within-group comparisons over time, both groups showed significant improvement in BBS, TUG, and PASS scores. [Conclusion] Space Balance 3D training combined with conventional physical therapy intervention is recommended for improvement of balance and mobility in acute stroke patients. PMID:26157270

  14. A visual data-mining approach using 3D thoracic CT images for classification between benign and malignant pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiki; Niki, Noboru; Ohamatsu, Hironobu; Kusumoto, Masahiko; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, K.; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki

    2003-05-01

    This paper presents a visual data-mining approach to assist physicians in classification between benign and malignant pulmonary nodules. The approach retrieves and displays nodules that exhibit morphological and internal profiles consistent with the nodule in question. It uses a three-dimensional (3-D) CT image database of pulmonary nodules for which the diagnosis is known. The central module of the approach enables analysis of the query nodule image and extraction of the features of interest: shape, surrounding structure, and internal structure of the nodules. The nodule shape is characterized by principal axes, while the surrounding and internal structure is represented by the distribution pattern of CT density and 3-D curvature indexes. The nodule representation is then applied to a similarity measure such as a correlation coefficient. For each query case, we sort all the nodules in the database from most to least similar. By applying the retrieval method to our database, we demonstrate its feasibility for searching similar 3-D nodule images.
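The retrieval step described above, scoring every database nodule against the query with a correlation coefficient and sorting, can be sketched generically. This toy uses random volumes as stand-in feature representations; the names and data are ours, not the paper's feature set.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two flattened feature volumes."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query, database):
    """Sort (name, features) database entries from most to least similar."""
    scored = [(name, correlation(query, feat)) for name, feat in database]
    return sorted(scored, key=lambda t: -t[1])

rng = np.random.default_rng(1)
query = rng.random((8, 8, 8))
database = [("near_copy", query + 0.01 * rng.random((8, 8, 8))),
            ("unrelated", rng.random((8, 8, 8)))]
ranking = retrieve_similar(query, database)     # "near_copy" ranks first
```

In the paper the vectors being correlated are the extracted shape, density, and curvature descriptors rather than raw voxels, but the ranking machinery is the same.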

  15. Effects of Na+ and He+ pickup ions on the lunar plasma environment: 3D hybrid modeling

    NASA Astrophysics Data System (ADS)

    Lipatov, A. S.; Cooper, J. F.; Sittler, E. C.; Hartle, R. E.; Sarantos, M.

    2011-12-01

    The hybrid kinetic model used here supports comprehensive simulation of the interaction between the different spatial and energetic elements of the Moon-solar wind-magnetosphere system of the Earth. A range of MHD, kinetic, hybrid, drift-kinetic, electrostatic and fully kinetic models of the lunar plasma environment exists [1]. However, observations show the existence of several species of neutrals and pickup ions such as Na, He, K, and O (see e.g., [2,3,4]). The solar wind parameters are chosen from the ARTEMIS observations [5]. The parameters of the Na+ and He+ lunar exosphere are chosen from [6,7]. The hybrid kinetic model allows us to take into account the finite gyroradius effects of pickup ions and to correctly estimate the ion velocity distribution and the fluxes along the magnetic field and onto the lunar surface. Modeling shows the formation of an asymmetric Mach cone and the structuring of the pickup ion tails, and reveals another type of lunar-solar wind interaction. We compare the results of our modeling with observed distributions. References [1] Lipatov, A.S., and Cooper, J.F., Hybrid kinetic modeling of the Lunar plasma environment: Past, present and future. In: Lunar Dust, Plasma and Atmosphere: The Next Steps, January 27-29, 2010, Boulder, Colorado, Abstracts/lpa2010.colorado.edu/. [2] Potter, A.E., and Morgan, T.H., Discovery of sodium and potassium vapor in the atmosphere of the Moon, Science, 241, 675-680, doi:10.1126/science.241.4866.675, 1988. [3] Tyler, A.L., et al., Observations of sodium in the tenuous lunar atmosphere, Geophys. Res. Lett., 15(10), 1141-1144, doi:10.1029/GL015i010p01141, 1988. [4] Tanaka, T., et al., First in situ observation of the Moon-originating ions in the Earth's Magnetosphere by MAP-PACE on SELENE (KAGUYA), Geophys. Res. Lett., 36, L22106, doi:10.1029/2009GL040682, 2009.
[5] Wiehle, S., et al., First Lunar Wake Passage of ARTEMIS: Discrimination of Wake Effects and Solar Wind Fluctuations by 3D Hybrid Simulations, Planet

  16. Predicate-Based Focus-and-Context Visualization for 3D Ultrasound.

    PubMed

    Schulte zu Berge, Christian; Baust, Maximilian; Kapoor, Ankur; Navab, Nassir

    2014-12-01

    Direct volume visualization techniques offer powerful insight into volumetric medical images and are part of the clinical routine for many applications. Up to now, however, their use has been mostly limited to tomographic imaging modalities such as CT or MRI. With very few exceptions, such as fetal ultrasound, classic volume rendering using one-dimensional intensity-based transfer functions fails to yield satisfying results for ultrasound volumes. This is particularly due to their gradient-like nature, a high amount of noise and speckle, and the fact that individual tissue types are characterized by similar texture rather than by similar intensity values. Therefore, clinicians still prefer to look at 2D slices extracted from the ultrasound volume. In this work, we present an entirely novel approach to the classification and compositing stages of the volume rendering pipeline, specifically designed for use with ultrasonic images. We introduce point predicates as a generic formulation for integrating the evaluation not only of low-level information such as local intensity or gradient, but also of high-level information, such as non-local image features or even anatomical models. Thus, we can successfully filter clinically relevant from non-relevant information. In order to effectively reduce the potentially high dimensionality of the predicate configuration space, we propose the predicate histogram as an intuitive user interface, augmented by a scribble technique that provides a comfortable metaphor for selecting predicates of interest. Assigning importance factors to the predicates allows for focus-and-context visualization that ensures important (focus) regions of the data are always shown while maintaining as much context information as possible. Our method integrates naturally into standard ray-casting algorithms and yields superior results in comparison to traditional methods for visualizing a specific target anatomy in ultrasound volumes.
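One plausible reading of "point predicates with importance factors" is an importance-weighted combination of soft predicate responses per sample. The sketch below is our own simplified interpretation, not the authors' formulation: the predicate functions, the sample fields, and the weighting scheme are all invented for illustration.

```python
def predicate_importance(sample, predicates):
    """Fold soft point predicates (each returning a value in [0, 1]) and
    their user-assigned importance factors into one weight, which a ray
    caster could use to modulate the opacity of the sample."""
    total = sum(importance for _, importance in predicates)
    if total == 0.0:
        return 0.0
    return sum(importance * pred(sample)
               for pred, importance in predicates) / total

# Hypothetical predicates on a sample with an intensity and a vesselness score.
predicates = [
    (lambda s: 1.0 if s["intensity"] > 0.5 else 0.0, 2.0),  # focus predicate
    (lambda s: s["vesselness"], 1.0),                        # context predicate
]
weight = predicate_importance({"intensity": 0.7, "vesselness": 0.4}, predicates)
# (2.0 * 1.0 + 1.0 * 0.4) / 3.0 = 0.8
```

Because each predicate is evaluated per sample inside the compositing loop, the scheme slots into a standard ray caster without changing its structure, which matches the abstract's claim of natural integration.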

  17. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to asking whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a three-day training regimen. Training significantly increased the percentage of time that avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks, or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar-turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484

  18. Representation and visualization of variability in a 3D anatomical atlas using the kidney as an example

    NASA Astrophysics Data System (ADS)

    Hacker, Silke; Handels, Heinz

    2006-03-01

    Computer-based 3D atlases allow an interactive exploration of the human body. However, in most cases such 3D atlases are derived from a single individual and therefore do not capture the variability of anatomical structures in shape and size. Since the geometric variability across humans plays an important role in many medical applications, our goal is to develop a framework for an anatomical atlas that represents and visualizes the variability of selected anatomical structures. The basis of the project presented here is the VOXEL-MAN atlas of inner organs, created from the Visible Human data set. For modeling anatomical shapes and their variability we utilize "m-reps", which allow a compact representation of anatomical objects on the basis of their skeletons. As an example we used a statistical model of the kidney based on 48 different variants. With the integration of a shape description into the VOXEL-MAN atlas it is now possible to query and visualize different shape variations of an organ, e.g. by specifying a person's age or gender. In addition to the representation of individual shape variants, the average shape of a population can be displayed. Besides a surface representation, a volume-based representation of the kidney's shape variants is also possible; it results from deforming the reference kidney of the volume-based model using the m-rep shape description. In this way a realistic visualization of the shape variants becomes possible, as well as visualization of the organ's internal structures.

  19. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    SciTech Connect

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.; Kuprat, Andrew P.; Kleese van Dam, Kerstin; Carson, James P.

    2014-08-26

    Understanding the interactions of the structured microbial communities known as "biofilms" with other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels, so new software is required for the effective interpretation and analysis of the data. This work describes the development and application of tools to analyze and visualize high-resolution X-ray micro-tomography datasets.

  20. Polyphase basin evolution of the Vienna Basin inferred from 3D visualization of sedimentation setting and quantitative subsidence

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-04-01

    This study analyzed and visualized data from 210 wells using a MATLAB-based program (BasinVis 1.0) for 3D visualization of sediment distribution, thickness, and quantitative subsidence of the northern and central Vienna Basin. The sedimentation settings for selected horizons were visualized as 3D sediment distribution maps, isopach maps, and cross-sections. Subsidence analysis of the study area yielded 3D depth and rate maps of basement and tectonic subsidence. Owing to the special position of the Vienna Basin, its evolution was influenced by the regional tectonics of the surrounding units. The 2D/3D maps provide insights into the polyphase evolution of the Vienna Basin, which is closely related to changes in the regional stress field and the paleoenvironmental setting. In the Early Miocene, sedimentation and subsidence were shallow and E-W/NE-SW trending, indicating the development of piggy-back basins. During the late Early Miocene, the maps show wider sedimentation and abruptly increasing subsidence driven by sinistral strike-slip faults, which initiated the Vienna pull-apart basin system. The sediments of the Early Miocene were supplied through a small deltaic system entering from the south. After thin sedimentation and shallow subsidence in the early Middle Miocene, the development of the Vienna Basin was controlled and accelerated mainly by NE-SW trending synsedimentary normal faults, especially the Steinberg fault. From the Middle Miocene onward, subsidence decreased overall; however, the tectonic subsidence shows regionally different patterns. This study suggests that a major change in the tensional regime, from transtension to E-W extension, caused laterally varying subsidence across the Vienna Basin. The Late Miocene was characterized by a slowing down of basement and tectonic subsidence. From the middle Middle to Late Miocene, enormous amounts of sediment were supplied by a broad paleo-Danube delta complex on the western flank of the basin. The latest
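Quantitative subsidence analysis of the kind performed here typically separates the tectonically driven part of basement subsidence from the sediment-load effect by backstripping. Below is a 1D Airy-isostasy sketch of that step (a standard textbook formula, not BasinVis code; the densities and depths are illustrative, and compaction and paleobathymetry corrections are omitted).

```python
def airy_tectonic_subsidence(basement_depth, sediment_thickness,
                             rho_mantle=3300.0, rho_sediment=2500.0,
                             rho_water=1000.0):
    """1D Airy backstripping: strip the sediment column and restore the
    isostatic rebound it caused, leaving the water-loaded, tectonically
    driven part of basement subsidence (depths in meters)."""
    rebound = (sediment_thickness
               * (rho_mantle - rho_sediment) / (rho_mantle - rho_water))
    return basement_depth - sediment_thickness + rebound

# 2 km of sediment above a basement now at 2.5 km depth (illustrative values).
tectonic = airy_tectonic_subsidence(basement_depth=2500.0,
                                    sediment_thickness=2000.0)
```

Applying this well by well and horizon by horizon is what turns the well data into the 3D tectonic-subsidence maps described in the abstract.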

  1. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application, achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs that occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of using the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions, comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding the mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  2. Registration and real-time visualization of transcranial magnetic stimulation with 3-D MR images.

    PubMed

    Noirhomme, Quentin; Ferrant, Matthieu; Vande